How Bitcoin Could Save Journalism and the Arts


Walter Isaacson is the author of “The Innovators: How a Group of Hackers, Geniuses, and Geeks Created the Digital Revolution.” Isaacson, the CEO of the Aspen Institute, has also been chairman of CNN and the managing editor of Time magazine.

Micropayment systems have the potential to reward creativity and exceptional content—on a realistic scale

The rise of Bitcoin, the digital cryptocurrency, has resurrected the hope of facilitating easy micropayments for content online. “Using Bitcoin micropayments to allow for payment of a penny or a few cents to read articles on websites enables reasonable compensation of authors without depending totally on the advertising model,” writes Sandy Ressler in Bitcoin Magazine.

This could lead to a whole new era of creativity, just like the one launched 300 years ago by the Statute of Anne, which gave people who wrote books, plays or songs the right to earn a royalty when their works were copied. An easy micropayment system would permit today’s content creators, from major media companies to basement bloggers, to sell digital copies of their articles, songs, games, and art by the piece. In addition to allowing them to pay the rent, it would have the worthy benefit of encouraging people to produce content valued by users rather than merely seek to aggregate eyeballs for advertisers.

This is something I advocated in a 2009 cover story for Time about ways to save journalism. “The key to attracting online revenue, I think, is to come up with an iTunes-easy method of micropayment,” I wrote. “We need something like digital coins or an E-ZPass digital wallet–a one-click system with a really simple interface that will permit impulse purchases of a newspaper, magazine, article, blog or video for a penny, nickel, dime or whatever the creator chooses to charge.”

(TIME cover, February 16, 2009)

That was not technically feasible back then. But Bitcoin has now spawned services such as ChangeTip, BitWall, BitPay and Coinbase that enable small payments to be made simply, with minimal mental friction or transaction costs. Unlike with clunky PayPal, impulse purchases can be made without a pause or leaving a trace.

When reporting my new book, The Innovators, I discovered that most pioneers of the Web believed in enabling micropayments. In the mid-1960s, Ted Nelson coined the term hypertext and envisioned a web with two-way links, which would require the approval of the person whose page was being linked to.

Had Nelson’s system prevailed, it would have been possible for small payments to accrue to those who produced the content. The entire business of journalism and blogging would have turned out differently. Instead the Web became a realm where aggregators could make more money than content producers.

Tim Berners-Lee, the English computer engineer who created the protocols of the Web in the early 1990s, considered including some form of rights management and payments. But he realized that would have required central coordination and made it hard for the Web to spread wildly. So he rejected the idea.

As the Web was taking off in 1994, I was the editor of new media for Time Inc. Initially we were paid by the dial-up online services, such as AOL and CompuServe, to supply content, market their services, and moderate bulletin boards that built up communities of members.

When the open Internet became an alternative to these proprietary online services, it seemed to offer an opportunity to take control of our own destiny and subscribers. Initially we planned to charge a small fee or subscription, but ad agencies were so enthralled by the new medium that they flocked to buy the banner ads we had developed for our sites. Thus we decided to make our content free and build audiences for advertisers.

It turned out not to be a sustainable business model. It was also not healthy; it encouraged clickbait rather than stories that were so valuable that readers would pay for them. Consumers were conditioned to believe that content should be free. It took two decades to put that genie back in the bottle.

In the late 1990s, Berners-Lee tried to create new Web protocols that could embed on a page the information needed to handle a small payment, which would allow electronic wallet services to be created by banks or entrepreneurs. It was never implemented, partly because of the complexity of banking regulations. He revived the effort in 2013. “We are looking at micropayment protocols again,” he said. “The ability to pay for a good article or song could support more people who write things or make music.”

These micropayment protocols still have not been written. But Bitcoin may be making that unnecessary. One of the greatest advocates of using Bitcoin for micropayments is the venture capitalist Marc Andreessen, who as a student at the University of Illinois in 1993 created the first popular Web browser, Mosaic.

Originally, Andreessen had hoped to put a digital currency into his browser. “When we started, the first thing we tried to do was enable small payments to people who posted content,” he explained. “But we didn’t have the resources to implement that. The credit card systems and banking system made it impossible. It was so painful to deal with those guys. It was cosmically painful.”

Now Andreessen has become a major investor in companies that are creating Bitcoin transaction systems. “If I had a time machine and could go back to 1993, one thing I’d do for sure would be to build in Bitcoin or some similar form of cryptocurrency.”

Walter Isaacson, a former managing editor of Time, is the author of The Innovators: How a Group of Hackers, Geniuses, and Geeks Created the Digital Revolution, out this week.



Apple’s Watch Will Make People and Computers More Intimate

The new device will bring us one step closer to human-machine symbiosis

A fundamental quest of the digital age has been to make our devices more personal. Steve Jobs was the Zen master of this, and he ingrained it into the DNA of Apple. That was reflected in the Apple Watch that current Apple CEO Tim Cook and his team launched this week, the latest leap toward creating a more intimate connection between people and computers.

The great pioneer of computer personalization was Vannevar Bush, an MIT engineering dean who oversaw scientific research for the U.S. government during World War II. In 1945 he wrote a seminal article titled “As We May Think” for the Atlantic that envisioned a personal information device that he called a memex. A person would be able to store all of his communications and information in it, and it would serve as “an enlarged intimate supplement to his memory.” The word intimate was key, and it was one that Cook used when describing the Apple Watch.

Other ingenious innovators enhanced the intimacy between computers and humans. J. C. R. Licklider, an MIT psychologist and engineer who best deserves the title of father of the Internet, helped design a massive U.S. air defense system that involved networked computers in twenty-three tracking centers. He created easy and intuitive graphic displays, since the nation’s fate might depend on the ability of a console jockey to assess data correctly and respond instantly. He called his approach “man-computer symbiosis.” As he explained, “human brains and computing machines will be coupled together very tightly.” Douglas Engelbart, an acolyte of Bush and Licklider, invented the mouse as part of his mission to make the connection between humans and computers more personal, and at Xerox PARC, Alan Kay and others came up with friendly screen displays with folders and icons that users could point to and click.

For the Macintosh that he launched at the Flint Center thirty years ago, Jobs famously appropriated the graphical user interface from Xerox PARC, quoting Picasso as saying that “great artists steal.” He had an intuitive genius for making devices that established an intimate connection with the user. The iPod, for example, performed the simple but magical task of putting a thousand songs in your pocket. It harkened back to another great triumph of personalization. In 1954, Pat Haggerty of Texas Instruments was looking for a way to create a mass market for transistors. He came up with the idea of a pocket radio. The radio no longer would be a living-room appliance to be shared; it became a personal device that allowed you to listen to your own music where and when you wished—even if it was music that your parents wanted to ban.

Indeed, there was a symbiotic relationship between the advent of the transistor radio and the rise of rock and roll. The rebellious new music made every kid want a radio. And the fact that the radios could be taken to the beach or the basement, away from the disapproving ears and dial-controlling fingers of parents, allowed the music to flourish. The pocket radio’s plastic case came, iPod-like, in four colors: black, ivory, Mandarin Red, and Cloud Gray. Within a year, 100,000 had been sold, making it one of the most popular new products in history.

In the decades since Bush envisioned the intimate and personal memex, a competing school of computer science has set its sights on artificial intelligence, repeatedly predicting the arrival of machines that could think without us, perhaps even make us irrelevant. That goal has been elusive, a mirage always a few decades away. The Apple Watch, designed to touch our wrists and beat with our hearts, again shows the greater power of the approach that Bush and Licklider proposed, that of seeking an intimate symbiosis and deeply personal partnership between humans and machines.

Walter Isaacson’s history of the digital age, The Innovators, will be published in October.


Henry Kissinger Reminds Us Why Realism Matters

The former Secretary of State, now 91, argues for a moral but rational foreign policy in the age of terrorism

In his new book, the 91-year-old statesman strikes a note of humility

When Henry Kissinger talks about world order, to some it might seem as if he is living in a previous century. The 17th, perhaps. Beginning with his Harvard doctoral dissertation 60 years ago, he has extolled the concept of international order that was established in 1648 by the Peace of Westphalia, which ended the Thirty Years’ War. Instead of being shaped by wars of religion and the death spasms of the Holy Roman Empire, Europe’s international system was thenceforth based on independent nation-states, each sovereign over religion and other issues in its own territory. States would not interfere in the internal affairs of other states, and order would, ideally, be maintained by clever statesmen who focused on national interests and curated a balance of power.

Kissinger’s appreciation for order, he later recalled, came after his family fled Hitler’s Germany in 1938 and arrived in New York, where he realized he did not have to cross the street to avoid non-Jewish boys who might beat him up. Kissinger became an exemplar of the realist, as opposed to idealist, school of diplomacy, someone who believed that a foreign policy that is overly guided by moral impulses and crusading ideals was likely to be dangerous. “The most fundamental problem of politics,” he wrote in his dissertation, “is not the control of wickedness but the limitation of righteousness.”

Kissinger’s fellow students in Harvard’s government department scoffed at his choice of topic. The atom bomb, they contended, had fundamentally changed global affairs. One snidely suggested he should transfer to the history department.

Likewise, we are tempted to raise an eyebrow at the news that Kissinger, now 91, has produced another paean to the Westphalian system, his 17th book in 60 years, this one titled simply World Order. Respect for sovereignty? How quaint! Hasn’t he heard that in the 21st century, threats respect no borders, the world is flat, and we have a humanitarian duty to protect people in places where regimes are repressive? That is why we rejected realist thinking for a “Freedom Agenda” that included invading Iraq to make the Middle East safe for democracy, toppling Muammar Gaddafi in Libya under a humanitarian banner and seeking (well, at least until ISIS came along) to do the same to President Bashar Assad in Syria.

Hmmm…upon reflection, maybe throwing out the Westphalian system, forsaking the principle of respect for sovereignty and letting idealism overwhelm realism wasn’t such a good idea after all. And if that’s the case, then Kissinger’s World Order doesn’t seem dated at all. The U.S. might do well to heed his prescription that it alloy its idealism with a new dose of realism. “Westphalian principles are, at this writing, the sole generally recognized basis of what exists of a world order,” he notes.

Kissinger’s book takes us on a dazzling and instructive global tour of the quest for order, from Cardinal Richelieu to Metternich and Bismarck, the Indian minister Kautilya of the 4th century B.C. and the Ottoman Sultan Suleiman, and a succession of American Presidents beginning with Teddy Roosevelt and Woodrow Wilson, all culminating in a world order based on sovereign nation-states at the end of World War II. “By the mid-20th century,” Kissinger writes, “this international system was in place on every continent.”

When he was the co-pilot of American statecraft as Richard Nixon’s National Security Adviser and Secretary of State in the early 1970s, Kissinger was able to manipulate the levers of this system with a mastery that would have mesmerized Metternich. Eschewing our differences in ideologies and values, he forged a détente with the Soviet Union and an opening to China, then played off both to create a triangular balance of power that preserved the U.S.’s influence after its retreat from Vietnam.

But sustaining such a values-neutral pursuit of strategic interests is difficult in a democracy that celebrates its moral exceptionalism. “The United States has alternated between defending the Westphalian system and castigating its premises of balance of power and noninterference in domestic affairs as immoral and outmoded,” he writes. Because he and Nixon failed to weave in the idealism that is ingrained in the American DNA, popular support for their realist edifice was precarious, as if built of bricks without straw. Kissinger was attacked by moral idealists of the left and, more notably, by the nascent neoconservatives and ardent anticommunists on the right. Reaction against his realism contributed to the elections of both Jimmy Carter and then Ronald Reagan.

Although Kissinger routinely notes the importance of America’s idealism, he almost invariably follows with the word but. “America would not be true to itself if it abandoned this essential idealism,” he writes. “But to be effective, these aspirational aspects of policy must be paired with an unsentimental analysis of underlying factors.” This “yes, but” balance, with the emphasis always tilting to the but sentence, pervades Kissinger’s analysis and peppers every chapter of his book.

The need for a renewed realism, Kissinger convincingly argues, is especially true in the Middle East, where jihadists have shattered the nation-state system in their quests for global revolution based on extremist religious values. This dangerous upheaval was facilitated in part by the U.S.’s morally motivated but strategically dubious decisions to support regime change and Western-style democracy in Iraq, Libya, Egypt, Afghanistan and Syria.

On Afghanistan, Kissinger supported the initial attack on al-Qaeda and its Taliban protectors, but he looks back skeptically on the broader mission that had evolved by 2003. “The central premise of the American and allied effort became ‘rebuilding Afghanistan’ by means of a democratic, pluralistic, transparent Afghan government whose writ ran across the entire country,” he writes. But this “radical reinvention of Afghan history” was not achievable. “No institutions in the history of Afghanistan or of any part of it provided a precedent for such a broad-based effort.”

Likewise on Iraq, Kissinger initially supported the mission to topple Saddam Hussein, but he says, “I had doubts, expressed in public and governmental forums, about expanding it to nation building and giving it such universal scope.” He blames George W. Bush and his Administration for pursuing idealistic crusades that ignored earthly realities. As Bush put it in a 2003 address, “Iraqi democracy will succeed—and that success will send forth the news, from Damascus to Tehran, that freedom can be the future of every nation.” This ideal was, Kissinger notes, unmoored from realities. “To seek to achieve [American values] by military occupation in a part of the world where they had no historical roots,” he writes, “imbued the American endeavor in Iraq with a Sisyphean quality.”

Despite heart surgery this year, Kissinger at 91 is a lion in a prolonged winter. Four decades after he last served in government, he is a fixture on the New York–Washington Acela, and he makes regular trips to Russia and China, where he is still accorded meetings with top leaders. His analyses remain prescient. Just as the showdown over chemical weapons in Syria was building last year, Kissinger was at a New York City dinner where various military and intelligence experts were discussing what might happen. Kissinger predicted that Russia would suddenly step in and offer a way to resolve the chemical-weapons issue, since it and the U.S. shared a strategic interest in not having such weapons fall into terrorist hands. Two weeks later, that is precisely what happened. He also argued that it was a mistake to make the ouster of President Assad’s regime a policy objective without knowing what would replace it, because that was likely to lead to a chaotic civil war dominated by the most radical of the jihadist forces.

For his undergraduate thesis in 1950, Kissinger tackled “The Meaning of History.” At 383 pages, it attempted to tie together the philosophies of Immanuel Kant, Oswald Spengler and Arnold Toynbee, while roping in ideas from Descartes, Dostoyevsky, Hegel, Hume, Socrates and Spinoza. It was topped off with a section called “A Clue From Poetry,” featuring Dante, Homer, Milton and Virgil. At one point he declared, “Descartes’ cogito ergo sum was not really necessary.”

Kissinger ends his latest book on a different note, one of humility—a trait that for most of his career he was better at humorously feigning than at actually possessing. “Long ago, in youth, I was brash enough to think myself able to pronounce on ‘The Meaning of History,’” he writes. “I now know that history’s meaning is a matter to be discovered, not declared.”

The key to Kissinger’s foreign policy realism, and the theme at the heart of his magisterial new book, is that such humility is important not just for people but also for nations, even the U.S. Making progress toward a world order based on “individual dignity and participatory governance” is a lofty ideal, he notes. “But progress toward it will need to be sustained through a series of intermediate stages.”

Isaacson is the CEO of the Aspen Institute and a former managing editor of Time. He is the author of biographies of Henry Kissinger, Benjamin Franklin, Albert Einstein and Steve Jobs, and of The Innovators: How a Group of Hackers, Geniuses, and Geeks Created the Digital Revolution, to be published in October.


How Your Uber Rating Can Make You a Better Person


If we all thought we were subject to being reviewed by the people around us, we might work harder to be on our best behavior

One of the many cool things about Uber is that it allows passengers to rate the driver. Not surprisingly, the drivers are very friendly. A recent hack of Uber’s site highlighted that the sauce-for-the-gander reverse is also true: the drivers rate us passengers. Even better, for a while it was possible to crack Uber’s database and see the drivers’ ratings of you. With enough cleverness, you could even pry loose how others were rated.

At first this might seem a creepy invasion of privacy, especially if you’re less than a five-star passenger. But there’s a virtue to losing your anonymity. Once you know you’re being rated, just like the driver, you’re likely to be a bit nicer, sit in the front, and make conversation. The world becomes slightly more civil.

In The Republic, Plato writes about the Ring of Gyges, a mythical piece of jewelry that allows its wearer to become invisible. His question is whether such a person would always be moral when assured that no one could see him. Plato’s brother Glaucon says it’s obvious that we’re more likely to behave like jerks, or worse, when we know that we’ll never be caught or called out.

Civil liberties junkies sometimes confuse the worthy ideal of privacy with anonymity, a less exalted concept. The idea that we should be able to interact with other people anonymously is rather new in human history. It arose when people started moving from tight-knit communities to urban areas where they were generally unknown. When I was a kid in New Orleans, if I went to the local store and bought a pack of Marlboros, it would get back to my parents in about five minutes. Now we can buy anything anonymously and, worse yet, say anything.

This has not elevated the civic discourse. If I could conjure up a magic Plato ring, it would allow me to know and publicly reveal the names and addresses of all people who anonymously post vulgar rants and racist tweets. I would use it only sparingly, but I suspect that just a few such revelations would make the Twittersphere and Blogosphere suddenly a bit more civil, or at least subdued.

In the early days of the online world, people who posted in virtual communities such as The WELL knew that, even if they were using a pseudonym, they were creating an online identity and reputation that was worth tending. A year ago, the Huffington Post began requiring people to register in order to comment, and it has elevated the discourse on the site.

When the Internet Protocol was created in the early 1970s, it did not grant total anonymity to users. To this day, unless you take special precautions, a good hacker – or the NSA – can track down your IP address, location, and even your true identity. If a few white-hat hackers or NSA leakers published the names and addresses of, say, the trolls who hounded Robin Williams’s daughter, there would understandably be an outcry among privacy (read: anonymity) advocates, but it would have the silver lining of muting some of the haters.

Likewise, if we all thought we were subject to being rated, we might work harder to be on our best behavior. In the world of Yelp and TripAdvisor and HealthGrades, we get to rate our restaurants and hotels and doctors. I hope that expands. College students rate their professors in many places; it would be nice to allow kids and parents to rate their high school teachers. In the sauce-for-the-gander category, if teachers and waiters and hotels were all rating us, it might feel a bit Orwellian. Nevertheless, it’s a cool Ring of Gyges thought experiment to imagine how much better we would behave.



Walter Isaacson, a former managing editor of TIME, is the president and CEO of the Aspen Institute, a nonpartisan educational and policy studies institute based in Washington, DC. A former chairman and CEO of CNN, he is the author of Steve Jobs (2011), Einstein: His Life and Universe (2007), Benjamin Franklin: An American Life (2003), Kissinger: A Biography (1992), and the forthcoming The Innovators (October 2014), as well as the coauthor of The Wise Men: Six Friends and the World They Made (1986). This article also appears in the Aspen Journal of Ideas.


Obama Can Still Secure His Legacy

If he plays his last two years like the final quarter and not the back nine


This article also appears in the Aspen Journal of Ideas.

A question that faces President Obama, however the midterm elections turn out, is whether he’s going to play his final two years as the back nine of a casual afternoon of golf, coasting toward the clubhouse of former presidents, or as the final quarter of a tight basketball game.

When I was working with Steve Jobs on a biography in 2009, he had an inkling that he might only have a couple of active years left. As his cancer kept recurring, instead of slowing him, it spurred him on. In those two years, he refined the iPhone and launched the iPad, thus ushering in the era of mobile computing.

President Obama has scored two monumental achievements: helping to restore the financial system after the 2008 collapse and making it possible for every American to get health care coverage, even if they leave their jobs or have preexisting conditions. Obamacare may be undermined if the Supreme Court guts subsidies for the federal exchanges. If so, the sweeping nature of the reform will survive only if Obama mounts a rousing, state-by-state campaign to rally passion for protecting the new health benefits.

As for rescuing the economy, this could be remembered as a hollow victory unless the recovery restores economic opportunity for all Americans. Growing inequality—of income, wealth, and opportunity—is the economic, political, and moral issue of our time. The fundamental creed of America is that if you work hard and play by the rules, you can support your family with dignity and believe that your children will have an even better future. But that is being lost as the middle class continues to be hollowed out and the poor get left further behind.

From the Pope to Thomas Piketty, and from Paul Ryan to Rand Paul, there has been a renewed focus on the moral imperative of economic opportunity. Obama seems ready to make that the defining passion of his final two years. Fighting for a fair deal for every American goes to the core of what he believes, rounds out the narrative of his presidency, secures his historic legacy, and leads naturally into what is likely to be the mission of his post-presidency.

The foundation for such a crusade could be a simple goal, one with moral clarity and patriotic resonance: that every kid in this country deserves a decent shot. He’s got a fresh team in place, and he’s already proposed many elements of an opportunity agenda in his My Brother’s Keeper initiative and other speeches. Among them: universal preschool, so that no child starts off behind. Quality after-school activities and summer internships. Apprenticeship programs like those in the bill proposed by Senators Cory Booker and Tim Scott. What also could be included is a public-private effort to create a service-year program so that every kid after high school or college has the opportunity to spend a year serving their country in a military or domestic corps.

I’ve been reading Doris Kearns Goodwin’s magisterial narrative of the Teddy Roosevelt era, The Bully Pulpit. In 1903, Roosevelt felt a fierce urge to energize the American people around what he dubbed his “Square Deal for every man, great or small, rich or poor.” He spent nine weeks crossing the country by train, delivering 265 speeches. Most were carefully crafted explanations of why corporate trusts needed to be reined in and workers needed to be respected. But when he arrived at the Grand Canyon, he began adding passionate calls to protect the environment and preserve nature. The trip not only refreshed his presidency, it refreshed him personally. The old boxer relished not only the “bully pulpit” but also being “in the arena.”

It’s probably not feasible for President Obama to embark on a weeks-long whistle-stop tour barnstorming for a new Fair Deal and a dedication to preserving the planet, though it would sure be fun to watch. It’s hard to break through all of the static, but after the midterms, it may be possible for him to propound a narrative that ties together his proposals for economic opportunity, poverty reduction, and immigration. A vision of a land of opportunity would appeal to most Republicans as well as Democrats.

For the final two years of his term, President Obama could stay above the fray and recognize that it would be pointless, given the dysfunctional nature of Congress, to try to accomplish anything significant. A rational calculus of risks and rewards, and a sober assessment of the possibilities for accomplishing anything in Washington, would argue for that approach. But I can’t help but hope that he decides to race against the clock rather than run it out.


A Modest Proposal for Exploiting Corrupt Politicians

Why not have a more creative way of dealing with convicted pols?

Ray Nagin, the former mayor of my hometown of New Orleans, has just been sentenced to ten years in federal prison. He began his tenure in office by cracking down on corruption, but by the end of his two terms – after his feckless performance in the wake of Hurricane Katrina – he was taking kickbacks and payoffs from city contractors.

What he did was bad, so he deserves to be punished. Yet I cannot help wondering what good it will do to put a 58-year-old, happily married father of three in prison for a decade. He’s certainly not a danger to the community. Nor is there much likelihood that he will commit such a crime again, if only because he’s not mayor anymore.

Perhaps the justification for having America’s taxpayers pay a fortune to keep him locked up is that it will serve as a deterrent to other officials who might be contemplating corruption. However, there is scant evidence of a deterrent effect, at least in my home state. The Oakdale federal prison where the judge recommended that Nagin be sent has recently served as a home for a whole motley troupe of sticky-fingered Louisiana former politicians: Congressman William Jefferson, Governor Edwin Edwards, Insurance Commissioner Jim Brown, and State Representative Girod Jackson III.

So it seems to me that, as with nonviolent drug offenders, we need some alternatives to prison for corrupt politicians. The whole field of alternatives to incarceration seems a bit lame these days, in need of an infusion of new ideas. Perhaps Louisiana could lead the way.

One idea would be to exile Louisiana’s steady stream of colorful corrupt politicians to an island in the marshlands, such as Grand Isle. This little Elba could become a tourist attraction, like the bird sanctuary and Tabasco factory on Avery Island. Visitors from around the world could pay to poke, feed, and photograph an authentic corrupt Louisiana politician.

Another idea would be to create a Corrupt Convicts Corps and put these fallen politicians to work. They could be confined by ankle bracelets to house arrest in the evenings, but during the day they could investigate the dealings of current politicians and sleuth out corruption. They’d likely be pretty good at it, since they know the tricks of the trade.

Convicted politicians would have to serve in the corps until they ferreted out and helped convict another corrupt officeholder, who would then take over that slot in the corps. This would assure that the corps would be continually replenished with younger and wilier corrupt politicians. Such a talented posse of enforcers might serve as a deterrent. It would also cut down on the costs of prisons and anti-corruption units.


The New York Times’ New Boss


Former TIME and CNN chief Walter Isaacson on the new leader of the country's most influential newspaper

Dean Baquet — newly crowned executive editor of the New York Times — manifests a rare combination in journalism: he can be a tough reporter and also a nice person.

We worked together, in the 1970s, as fresh-faced junior reporters for a feisty New Orleans afternoon paper, the States-Item, soon to be folded into the Times-Picayune. Dean and I shared a workspace and often a byline. He was a dogged investigative reporter, and I tagged along. Once we wrote a blockbuster story together alleging that a sketchy businessman had been involved in arson, and he promptly sued us for libel. I panicked, but Dean was sanguine; he knew that the U.S. Attorney in New Orleans was about to indict the man a few days later.

Dean will be a great editor because good journalism is essentially a collaborative endeavor. Dean, with his friendly smile and deeply sympathetic soul, knows how to enlist people to work together. He’s a team builder.

This will be especially important at the New York Times, which is stocked with the industry’s most talented journalists but does not always win awards for newsroom morale.

Dean also knows that you can be an honest, hardnosed reporter without disliking the people you cover. He is essentially an optimist, which is why his smile is so natural.

When he left the Los Angeles Times, he showed that it was possible to stand on principle, not be forced to compromise your values, do it in a quiet and graceful way, and live to tell the tale.

There are some good lessons in the rise of Dean Baquet:

1. It’s possible to be both smart and kind.
2. Good leadership in the digital age requires fostering teamwork and collaboration.
3. It’s good to lead by example.
4. Nice people sometimes finish first.

Walter Isaacson, a former managing editor of TIME, is the president and CEO of the Aspen Institute, a nonpartisan educational and policy studies institute based in Washington, DC. A former chairman and CEO of CNN, he is the author of Steve Jobs (2011), Einstein: His Life and Universe (2007), Benjamin Franklin: An American Life (2003), and Kissinger: A Biography (1992), and coauthor of The Wise Men: Six Friends and the World They Made (1986).


How to Save Your Newspaper


It's now or never for America's dailies. A former TIME managing editor offers a way to return journalism to prosperity: charge for it, a nickel at a time


This story has been modified from its original version.

During the past few months, the crisis in journalism has reached meltdown proportions. It is now possible to contemplate a time when some major cities will no longer have a newspaper and when magazines and network-news operations will employ no more than a handful of reporters.

There is, however, a striking and somewhat odd fact about this crisis. Newspapers have more readers than ever. Their content, as well as that of newsmagazines and other producers of traditional journalism, is more popular than ever–even (in fact, especially) among young people.

The problem is that fewer of these consumers are paying. Instead, news organizations are merrily giving away their news. According to a Pew Research Center study, a tipping point occurred last year: more people in the U.S. got their news online for free than paid for it by buying newspapers and magazines. Who can blame them? Even an old print junkie like me has quit subscribing to the New York Times, because if it doesn’t see fit to charge for its content, I’d feel like a fool paying for it.

This is not a business model that makes sense. Perhaps it appeared to when Web advertising was booming and every half-sentient publisher could pretend to be among the clan who “got it” by chanting the mantra that the ad-supported Web was “the future.” But when Web advertising declined in the fourth quarter of 2008, free felt like the future of journalism only in the sense that a steep cliff is the future for a herd of lemmings.

Newspapers and magazines traditionally have had three revenue sources: newsstand sales, subscriptions and advertising. The new business model relies only on the last of these. That makes for a wobbly stool even when that one leg is strong. When it weakens–as countless publishers have seen happen as a result of the recession–the stool can’t possibly stand.

Henry Luce, a co-founder of TIME, disdained the notion of giveaway publications that relied solely on ad revenue. He called that formula “morally abhorrent” and also “economically self-defeating.” That was because he believed that good journalism required that a publication’s primary duty be to its readers, not to its advertisers. In an advertising-only revenue model, the incentive is perverse. It is also self-defeating, because eventually you will weaken your bond with your readers if you do not feel directly dependent on them for your revenue.

When a man knows he is to be hanged in a fortnight, Dr. Johnson said, it concentrates his mind wonderfully. Journalism’s fortnight is upon us, and I suspect that 2009 will be remembered as the year news organizations realized that further rounds of cost-cutting would not stave off the hangman.

One option for survival being tried by some publications, such as the Christian Science Monitor and the Detroit Free Press, is to eliminate or drastically cut their print editions and focus on their free websites. Others may try to ride out the long winter, hope that their competitors die and pray that they will grab a large enough share of advertising to make a profitable go of it as free sites. That’s fine. We need a variety of competing strategies.

These approaches, however, still make a publication completely beholden to its advertisers. So I am hoping that this year will see the dawn of a bold, old idea that will provide yet another option that some news organizations might choose: getting paid by users for the services they provide and the journalism they produce.

This notion of charging for content is an old idea not simply because newspapers and magazines have been doing it for more than four centuries. It’s also something they used to do at the dawn of the online era, in the early 1990s. Back then there was a passel of online service companies, such as Prodigy, CompuServe, Delphi and AOL. They charged users for the minutes they spent online, so it was naturally in their interest to keep users connected for as long as possible. As a result, good content was valued. When I was in charge of TIME’s nascent online-media department back then, every year or so we would play off AOL and CompuServe; one year the bidding for our magazine and bulletin boards reached $1 million.

Then along came tools that made it easier for publications and users to venture onto the open Internet rather than remain in the walled gardens created by the online services. I remember talking to Louis Rossetto, then the editor of Wired, about ways to put our magazines directly online, and we decided that the best strategy was to use the hypertext markup language and transfer protocols that defined the World Wide Web. Wired and TIME made the plunge the same week in 1994, and within a year most other publications had done so as well. We invented things like banner ads that brought in a rising tide of revenue, but the upshot was that we abandoned getting paid for content.

One of history’s ironies is that hypertext–an embedded Web link that refers you to another page or site–had been invented by Ted Nelson in the early 1960s with the goal of enabling micropayments for content. He wanted to make sure that the people who created good stuff got rewarded for it. In his vision, all links on a page would facilitate the accrual of small, automatic payments for whatever content was accessed.

Instead, the Web got caught up in the ethos that information wants to be free. Others smarter than we were had avoided that trap. For example, when Bill Gates noticed in 1976 that hobbyists were freely sharing Altair BASIC, software he and his colleagues had written, he sent an open letter to members of the Homebrew Computer Club telling them to stop. “One thing you do is prevent good software from being written,” he railed. “Who can afford to do professional work for nothing?”

The easy Internet ad dollars of the late 1990s enticed newspapers and magazines to put all of their content, plus a whole lot of blogs and whistles, onto their websites for free. But the bulk of the ad dollars has ended up flowing to groups that did not actually create much content but instead piggybacked on it: search engines, portals and some aggregators.

Another group that benefits from free journalism is Internet service providers. They get to charge customers $20 to $30 a month for access to the Web’s trove of free content and services. As a result, it is not in their interest to facilitate easy ways for media creators to charge for their content. Thus we have a world in which phone companies have accustomed kids to paying up to 20¢ when they send a text message but it seems technologically and psychologically impossible to get people to pay 10¢ for a magazine, newspaper or newscast.

Currently a few newspapers, most notably the Wall Street Journal, charge for their online editions by requiring a monthly subscription. When Rupert Murdoch acquired the Journal, he ruminated publicly about dropping the fee. But Murdoch is, above all, a smart businessman. He took a look at the economics and decided it was lunacy to forgo the revenue–and that was even before the online ad market began contracting. Now his move looks really smart. Paid subscriptions for the Journal’s website were up more than 7% in a very gloomy 2008. Plus, he spooked the New York Times into dropping its own halfhearted attempts to get subscription revenue, which were based on the (I think flawed) premise that it should charge for the paper’s punditry rather than for its great reporting.

But I don’t think that subscriptions will solve everything–nor should they be the only way to charge for content. A person who wants one day’s edition of a newspaper or is enticed by a link to an interesting article is rarely going to go through the cost and hassle of signing up for a subscription under today’s clunky payment systems. The key to attracting online revenue, I think, is to come up with an iTunes-easy method of micropayment. We need something like digital coins or an E-ZPass digital wallet–a one-click system with a really simple interface that will permit impulse purchases of a newspaper, magazine, article, blog or video for a penny, nickel, dime or whatever the creator chooses to charge.
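The E-ZPass-style digital wallet described above amounts to a simple prepaid-balance model: load funds once, then let one-click purchases deduct tiny amounts with no per-purchase checkout. A minimal sketch, assuming a hypothetical wallet class (none of these names come from any real payment service):

```python
# Hypothetical sketch of the prepaid "digital wallet" model: a user loads a
# balance once, then impulse purchases deduct small amounts instantly,
# with no sign-up or checkout friction per purchase.

class MicropaymentWallet:
    def __init__(self, balance_cents: int = 0):
        self.balance_cents = balance_cents
        self.purchases = []  # record of (item, price) for the user's receipt

    def top_up(self, cents: int) -> None:
        """Load prepaid funds, like filling an E-ZPass account."""
        self.balance_cents += cents

    def buy(self, item: str, price_cents: int) -> bool:
        """One-click impulse purchase: succeeds instantly if funds cover it."""
        if price_cents > self.balance_cents:
            return False  # prompt a top-up instead of a clunky checkout
        self.balance_cents -= price_cents
        self.purchases.append((item, price_cents))
        return True

wallet = MicropaymentWallet()
wallet.top_up(500)                    # load $5.00 once
wallet.buy("newspaper article", 5)    # a nickel for an article
wallet.buy("full daily edition", 10)  # a dime for the day's edition
print(wallet.balance_cents)           # 485 cents remaining
```

The design point is that the cost and friction live in the one-time top-up, not in each purchase, which is what makes penny and nickel transactions psychologically and economically feasible.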

Admittedly, the Internet is littered with failed micropayment companies. If you remember Flooz, Beenz, CyberCash, Bitpass, Peppercoin and DigiCash, it’s probably because you lost money investing in them. Many tracts and blog entries have been written about how the concept can’t work because of bad tech or mental transaction costs.

But things have changed. “With newspapers entering bankruptcy even as their audience grows, the threat is not just to the companies that own them, but also to the news itself,” wrote the savvy New York Times columnist David Carr last month in a column endorsing the idea of paid content. This creates a necessity that ought to be the mother of invention. In addition, our two most creative digital innovators have shown that a pay-per-drink model can work when it’s made easy enough: Steve Jobs got music consumers (of all people) comfortable with the concept of paying 99¢ for a tune instead of Napsterizing an entire industry, and Jeff Bezos with his Kindle showed that consumers would buy electronic versions of books, magazines and newspapers if purchases could be done simply.

What Internet payment options are there today? PayPal is the most famous, but it has transaction costs too high for impulse buys of less than a dollar. The denizens of Facebook are embracing systems like Spare Change, which allows them to charge their PayPal accounts or credit cards to get digital currency they can spend in small amounts. Similar services include Bee-Tokens and Tipjoy. Twitter users have Twitpay, which is a micropayment service for the micromessaging set. Gamers have their own digital currencies that can be used for impulse buys during online role-playing games. And real-world commuters are used to gizmos like E-ZPass, which deducts automatically from their prepaid account as they glide through a highway tollbooth.

Under a micropayment system, a newspaper might decide to charge a nickel for an article or a dime for that day’s full edition or $2 for a month’s worth of Web access. Some surfers would balk, but I suspect most would merrily click through if it were cheap and easy enough.

The system could be used for all forms of media: magazines and blogs, games and apps, TV newscasts and amateur videos, porn pictures and policy monographs, the reports of citizen journalists, recipes of great cooks and songs of garage bands.

This would not only offer a lifeline to traditional media outlets but also nourish citizen journalists and bloggers. They have vastly enriched our realms of information and ideas, but most can’t make much money at it. As a result, they tend to do it for the ego kick or as a civic contribution. A micropayment system would allow regular folks, the types who have to worry about feeding their families, to supplement their income by doing citizen journalism that is of value to their community.

When I used to go fishing in the bayous of Louisiana as a boy, my friend Thomas would sometimes steal ice from those machines outside gas stations. He had the theory that ice should be free. We didn’t reflect much on who would make the ice if it were free, but fortunately we grew out of that phase. Likewise, those who believe that all content should be free should reflect on who will open bureaus in Baghdad or be able to fly off as freelancers to report in Rwanda under such a system.

I say this not because I am “evil,” which is the description my daughter slings at those who want to charge for their Web content, music or apps. Instead, I say this because my daughter is very creative, and when she gets older, I want her to get paid for producing really neat stuff rather than come to me for money or decide that it makes more sense to be an investment banker.

I say this, too, because I love journalism. I think it is valuable and should be valued by its consumers. Charging for content forces discipline on journalists: they must produce things that people actually value. I suspect we will find that this necessity is actually liberating. The need to be valued by readers–serving them first and foremost rather than relying solely on advertising revenue–will allow the media once again to set their compass true to what journalism should always be about.

Isaacson, a former managing editor of TIME, is president and CEO of the Aspen Institute and author, most recently, of Einstein: His Life and Universe.
