
The Optimist’s Case for How New Technology Can Transform the U.S.


Pethokoukis is an economic policy analyst at the American Enterprise Institute, a CNBC contributor, and author of “The Conservative Futurist: How to Create the Sci-Fi World We Were Promised.” You can also read his newsletter, “Faster, Please!” on Substack.

We live in an emerging age of technological signs and wonders. Among the tantalizing possibilities: Genetic medicine to cure Alzheimer’s and cancer. Reusable rockets to build an orbital economy and Moon colony. New kinds of nuclear reactors that are easier to build and could supply nearly limitless clean energy. And the next U.S. president just might step to the podium in the West Wing and announce that an American technology company has created an artificial intelligence as smart as the best human mind. Such a seemingly sci-fi advance could radically change the job market, government finances, scientific research, and, really, the entire American way of life—yet almost certainly for the better overall, as I explain in my new book, The Conservative Futurist: How to Create the Sci-Fi World We Were Promised.

No guarantees, however, that any of these marvels will happen anytime soon. History suggests skepticism is warranted. Last year marked the 50th anniversary of what I call the Great Downshift, a slow-motion economic catastrophe right up there with the Great Depression as the biggest disaster in American economic history. In 1973, U.S. labor productivity growth—worker output per hour, the critical factor driving long-term improvement in living standards—collapsed during a nasty two-year recession and never really recovered for long, apart from the few years of the internet boom around 2000. If productivity growth had merely remained at the pace seen in the years before 1973, the American economy would be roughly 50 percent larger than it is today, $40 trillion rather than $26 trillion. And had it accelerated as some were predicting back then, Americans might be two or three times as rich as they are today.
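To get a feel for the arithmetic, here is a minimal back-of-the-envelope sketch. The $26 trillion and $40 trillion figures are the article’s round numbers; the implied growth gap the code derives is illustrative, not an official statistic:

```python
# Back-of-the-envelope: what annual growth gap, compounded over the
# 50 years since 1973, separates a ~$26T economy from a ~$40T one?
# Dollar figures are the article's round numbers, not official data.
YEARS = 50
actual_gdp = 26e12          # ~$26 trillion today
counterfactual_gdp = 40e12  # ~$40 trillion, had pre-1973 trends held

ratio = counterfactual_gdp / actual_gdp   # ~1.54x
implied_gap = ratio ** (1 / YEARS) - 1    # annualized growth shortfall
print(f"Implied annual growth gap: {implied_gap:.2%}")  # ~0.87% per year
```

The point of the exercise: a gap of under one percentage point a year, compounded over five decades, is enough to account for the entire $14 trillion difference.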

What exactly happened a half-century ago remains an economic mystery. Nothing so significant has a single cause. But one of the most probable suspects is simply that we failed to dream up new technological marvels anywhere near as economically important as those of the late 19th and early 20th centuries, such as the internal combustion engine and widespread electrification. The combination of the personal computer and internet was important, of course, but not enough on its own to return the economy to the productivity growth rates of the past. Unlike those great inventions of old, economist Robert Gordon has explained, the Information Technology Revolution generated benefits that were more narrowly focused—predominantly concentrated on entertainment and communications, such as advancements from VCRs to smartphones—and reached a point of diminishing returns more quickly. Productive progress in the world of atoms rather than bits, such as energy, housing, and transportation, has been more modest.

Yet when the future optimists of the immediate postwar decades—technologists, CEOs, think tankers, and “hard” science-fiction writers such as Arthur C. Clarke and Isaac Asimov—imagined the world of the early 21st century, they assumed advances of the sort we are only now seeing in AI, biology, energy, and space would be old news. By their timetable, we should also be well on our way to controlling the weather, mining asteroids, and, of course, zipping around in flying cars. Perhaps their biggest mistake wasn’t just underestimating the technical difficulties of achieving such advances. They also assumed that the key factors enabling such leaps were a permanent feature of the American political economy: massive federal spending on R&D, an unfettered regulatory environment, and a broadly techno-optimistic culture supportive of pro-progress policies no matter the downsides. In other words, a never-ending ’50s and ’60s.

“How fast technological progress goes depends on how much we, as a society, want it,” Arthur Turrell, a plasma physicist whose book The Star Builders: Nuclear Fusion and the Race to Power the Planet documents the effort to achieve nuclear fusion, told me soon after the National Ignition Facility’s fusion ignition breakthrough in December 2022. “Societal will and investment can speed up technology—just look at the development and deployment of vaccines during the [coronavirus pandemic]. Given that achieving fusion is perhaps the greatest technological challenge we’ve ever taken on as a species, funding for fusion to date has not been equal to the task. Progress could have been much faster.”


Well, better late than never. Generative AI models by themselves could return productivity growth to what it was before the Great Downshift, according to the bank Goldman Sachs, by automating some of what workers do and making them more efficient at other tasks. More gains could come from accelerating scientific research. Tamay Besiroglu, a visiting research scientist at MIT’s Computer Science and Artificial Intelligence Laboratory, thinks AI automating crucial scientific and engineering tasks, like drug discovery and chip design, could amplify productivity growth rates by five times or more by mid-century. If so, the American economy could be multiples bigger than Washington is currently expecting.

But it would be a potentially massive mistake to assume a giant leap forward is guaranteed, as the optimists of the 1960s and late 1990s did. For this broad technological revolution to reach its full potential as fast as possible, we need to avoid the mistakes of the past. We need to reform or repeal the 1970s-era environmental regulations that make it maddeningly hard to build big projects in America today, either quickly or cheaply. Likewise, America needs to again spend at Project Apollo levels, roughly a tripling as a share of GDP, on the science research that drives technological progress and long-term economic growth. Doubling total national R&D spending, both public and private, could raise U.S. productivity and real per-capita income growth rates by 0.5 percentage points per year, according to Northwestern University economist Benjamin Jones.
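As a rough illustration of how Jones’s estimate compounds over time (the 0.5-percentage-point figure is from the paragraph above; the horizon years are arbitrary choices for the sketch):

```python
# Rough illustration: an extra 0.5 percentage points of annual growth,
# per Jones's estimate, compounded over a few arbitrary horizons.
BOOST = 0.005  # +0.5 percentage points per year
for years in (10, 30, 50):
    gain = (1 + BOOST) ** years - 1
    print(f"After {years} years: real incomes ~{gain:.0%} higher")
# Prints roughly 5%, 16%, and 28%.
```

Small annual differences, in other words, become generational ones.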

But new technologies and better policies aren’t enough. With rapid progress comes disruption to the status quo. Behavioral economists find that humans have an innate bias against change, such as fearing a big change to a current job even if the result might be higher pay down the line. We need a culture that can help us imagine how tech progress and growth can create a future we would want to live in. Without it, we’ll be reluctant to accept the inherent disruption—to our jobs, our companies, and our communities—that comes with change. Yet coincident with the Great Downshift in productivity growth was a long-term change in national attitudes about the future, one for the worse. Yale University economist Ray Fair sees the early-1970s decline in U.S. infrastructure spending as a share of GDP and the start of persistent government budget deficits as suggesting that “the United States became less future oriented beginning around 1970. This change has persisted.” The logic: repairing your roof while the sun is shining and curbing spending before the bill collector calls both require foresight and the ability to place the current you in the shoes of the future you.

But government stats only tell part of the story here. The other part is the actual stories we started telling ourselves. Influenced by an emerging environmental movement that was certain our techno-capitalist civilization risked collapse from overpopulation (see 1968's The Population Bomb) and overconsumption (see 1972's The Limits to Growth), Hollywood began delivering a steady diet of dystopia and never really stopped. Is it any wonder that three-fourths of Americans want to slow the development of AI when our cultural touchstone is The Terminator, a film franchise frequently mentioned in media stories about the technology? Is it any surprise that many young people suffer from “climate anxiety” when films and TV shows suggest we’re helpless to do much about it? Extrapolations, a recent big-budget, star-studded Apple TV+ series, depicts a wrecked future where national leaders could never get their act together. (No mention of nuclear power, by the way.)

Even the American president isn’t immune. Before issuing his new executive order on AI last October, Joe Biden watched the Tom Cruise film Mission: Impossible—Dead Reckoning Part One while spending the weekend at Camp David. The film’s villain is a sentient AI: “If [Biden] hadn’t already been concerned about what could go wrong with AI before that movie, he saw plenty more to worry about,” said White House aide Bruce Reed, who watched the film with the president.

Better pro-growth policy would help tremendously. A new study from Harvard University finds that people who grow up in times of economic abundance and growth tend to have a positive-sum mindset, believing the pie can expand for all, while those raised in tougher conditions lean zero-sum and are skeptical that hard work brings success. But we also need to tell ourselves a more optimistic story of the future. Credit to Jeff Bezos for having Amazon Studios continue The Expanse after the series was canceled by the Syfy network. The show depicts humanity surviving climate change, colonizing the solar system, and living longer. More billionaires could also directly finance optimistic media, like Elon Musk’s Mars colonization videos. New AI tools will increasingly allow future optimists to create their own sophisticated images of the future if Hollywood won’t.

Our schools also have a role to play in creating a more risk-taking generation. They should assign classic adventures from Jules Verne, Jack London, and Laura Ingalls Wilder, as well as modern books like Andy Weir’s The Martian, about an innovative astronaut stranded on Mars. We could also devise pro-progress thought experiments, like a “Genesis Clock” that tracks indicators of scientific and economic advancement as a conceptual counter to the famous Doomsday Clock. I, for one, am more interested in how many minutes remain until a Dawn of civilizational superabundance than until a supposed Midnight of human extinction.

The conservative futurist Herman Kahn said humanity would be fine barring disastrous luck or incredibly bad decisions. Bad luck can create opportunity, however. Russia’s invasion of Ukraine and the difficulty of reducing carbon emissions with wind and solar alone have given new life to nuclear energy around the world. Tensions with China provide an excuse to treat and fund AI as the new Space Race.

After a half century of disappointment, I’ll take whatever help I can get if it means a future of radically reduced poverty, healthier lives, and a society resilient to existential problems, from climate change to pandemics to a stray chunk of space ice slamming into our planet. Flying cars would be nice, too.


