There Is Only One Question That Matters with AI

McNamee began his Silicon Valley investing career in 1982. He was an early investor in Facebook and an advisor to Mark Zuckerberg. He is the author of the New York Times bestseller Zucked: Waking Up to the Facebook Catastrophe.

A group called the Future of Life Institute has circulated a petition, signed by nearly 3,000 people in and around the technology industry, calling for a six-month moratorium on large-scale experiments with artificial intelligence (AI). The petition has triggered a huge debate.

Those who have signed the petition note that the developers of GPT-4 and other large language model AIs promise that their technology will change the course of civilization, but claim those developers have not taken appropriate steps to protect civilization from harm. The signatories frame the future of AI in apocalyptic terms. Those who oppose the petition fall into two large buckets: those who are comfortable with the status quo of rapidly developing AI models, and those who believe the petition's sponsors are so focused on the future that they ignore widespread harms from existing applications of AI. The latter argument is particularly interesting, as the group includes leading technologists and scholars in the AI field, including Timnit Gebru, Emily Bender, and Margaret Mitchell.

We really do need a different approach to AI, the first step of which is acknowledging that AI is just the latest manifestation of the Silicon Valley hype machine: code whose benefits fall far short of what its promoters promise, and whose harms run far deeper. We have seen this movie repeatedly (Facebook, TikTok, etc.) over the last dozen years, always with unhappy endings. It is past time we did something about large-scale technological changes being rapidly imposed on society.

The question we should be asking about artificial intelligence, and about every other new technology, is whether private corporations should be allowed to run uncontrolled experiments on the entire population without any guardrails or safety nets. Should it be legal for corporations to release products to the masses before demonstrating that those products are safe?


The tech industry has been running uncontrolled experiments for more than a decade, across a wide range of product categories, frequently with catastrophic results. In 2012, for example, Facebook ran an experiment in which it made 155,000 people sad, without their knowledge, much less their permission. Instagram, Snapchat, and TikTok were largely designed to create envy in teens, without consideration of the psychological harms. Relatively limited applications of AI have already enabled civil rights violations in mortgage lending, resume review, and policing.

With large language model AIs, the scale of the experiments has changed, increasing by two orders of magnitude or more relative to early AIs. ChatGPT, a large language model that reached one billion total users (100 million of them active) only two months after its introduction, has been called a “bullshit generator.” When Microsoft incorporated ChatGPT into its Bing search engine, the large number of factual errors triggered a tsunami of criticism. Despite the flaws, the integration of ChatGPT caused Bing’s daily user count to pass 100 million for the first time. Thanks to tacit and explicit endorsements from media and policymakers, millions of people have bought into the hype, embracing another dangerous tech product.

Even Sam Altman, the CEO of OpenAI, has expressed concern about the risk posed by the technology he is creating. But rather than taking action to protect consumers, Altman is developing larger models as fast as he can.

Those who support the current approach to AI argue that we cannot afford to slow down because we are in a race with China. Really? How does flooding the information ecosystem with bad answers, disinformation, and civil rights violations help us compete with China? Historically, America has been most successful when it focuses on its core values: democracy, the right of self-determination, creativity, and entrepreneurship. We win with jet aircraft, carbonated beverages (e.g., Coke), and entertainment. China wins when it can leverage its scale and its authoritarian government. AIs based on high-quality content, operated in a manner consistent with American values, would improve our competitiveness, but that is not the approach Silicon Valley is taking. It wants to compete with China on China’s terms. That is crazy.

The harms of poorly developed AI have been discussed in public policy circles since at least 2017, but Congress and two presidents have done nothing. I know because I have been one of the people raising the alarm.

The problems with AI cannot be fixed with a six-month moratorium. What is required is a different approach to the development and deployment of new technologies, one that prioritizes consumer safety, democracy, and other core values over returns to shareholders. If we could wave a magic wand and change the culture of tech, there would be no need for a moratorium. And a moratorium without a clear path to better development practices accomplishes nothing. The industry has long treated self-regulation as a license to do what it pleases.

AI has huge promise, and the technology is advancing rapidly, but it can only be as good as the content used to train it. Engineers have the option of training AIs on content created by experts, but few choose that path, due to cost. Instead, they train their systems on data scraped for free from the web, sometimes in violation of copyright law. AI developers scrape content from high-quality sites like Wikipedia, but much more comes from sites that do not distinguish between information and disinformation. If you train an AI on low-quality content, the results will also be low quality. Given the scale of products like ChatGPT and GPT-4, there is a great risk of flooding the internet with disinformation. A Google engineer resigned after claiming that his company had trained its own large language model AI, Bard, on data from ChatGPT.

So long as we build AIs on lousy content, the results are going to be lousy. AI will be right some of the time, but you won’t be able to tell if the answer is right or wrong without doing further research, which defeats the purpose.

For most of the past forty years, government and the people have given tech companies nearly total freedom with respect to product development. Consumers have adopted new technologies more or less blindly, despite escalating harms over the past fourteen years. In today’s regulatory vacuum, the incentives in tech are to maximize shareholder value, even if doing so undermines core values like public safety and democracy. Laissez-faire policies have led to the creation of great wealth for a relative handful of entrepreneurs and investors, but at enormous cost to society as a whole.

The window is closing to protect democracy and the public from increasingly harmful technology products and the culture that creates them. We cannot afford further inaction.
