AI Poses Doomsday Risks—But That Doesn’t Mean We Shouldn’t Talk About Present Harms Too

Jan Brauner is a Ph.D. researcher in Artificial Intelligence at the University of Oxford, focusing on the safety of AI systems and their application in public health.
Alan Chan is a Ph.D. student at Mila, a research institute in Montreal focusing on AI, where he works on technical and socio-technical problems in AI safety. He's particularly interested in evaluation and AI regulation.

In May, more than 100 artificial intelligence (AI) professors, including some of the field's leading experts, along with scholars from other areas and AI executives, signed a one-sentence statement: "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war." Many of the signing researchers have explained the reason for their worry: the risk that, within this decade or the next few, we might build powerful AI systems we cannot control.

Some have suggested the statement is an industry ploy to hype up AI companies' products or influence regulation, ignoring that most of the signatories—ourselves included—are academics who do not work for industry. But the most common concern we have heard is that, by focusing on the risk of extinction, the statement distracts from present-day AI harms. As volunteers involved in drafting the statement and collecting signatures for it, we disagree: both problems share a common core, and the false dichotomy between ongoing harms and emergent risks makes it needlessly difficult to address their shared root causes.

AI poses many ongoing problems. AI systems can sustain human rights abuses, perpetuate systemic discrimination, and entrench power imbalances. Because AI systems are often widely deployed, harms can reach massive scales. For example, tens of thousands of families were pushed into poverty when the Dutch Tax and Customs Administration wrongly accused them of fraud and requested they pay back large sums of money, based on risk profiles created with AI systems.

The root causes of AI harm

Several root causes contribute to many instances of the harm caused by AI. One is that we do not know how to design AI systems that reliably behave exactly as we intend. For example, a widely used health-care algorithm designed to allocate care based on need discriminated against Black hospital patients. It turned out the algorithm used health-care costs as a proxy for need, but institutionalized racism means that less money has historically been spent on Black patients. In many similar cases, companies do not explicitly set out to build harmful AI systems. Instead, creating state-of-the-art AI systems requires vast amounts of data, and the systems unintentionally inherit the biases encoded in that data.

Another root cause is competitive pressure combined with limited accountability. There is often a strong economic incentive to deploy AI systems without sufficient safety checks. For example, Microsoft rushed the release of its Bing chatbot, which went on to threaten and insult users. The Dutch Tax Administration, too, replaced human judgment with AI systems in the name of efficiency.

Without decisive action to address these root causes, increasingly powerful and widely deployed AI systems will cause even more pervasive and significant harm. Many experts believe future AI systems may outmatch humans in crucial cognitive skills, such as social persuasion, planning, and the development of novel technologies. Just as these cognitive abilities enabled humanity to control the fate of other species, the same skills could enable smarter AI systems to outperform us.

Given competition and limited accountability, organizations would face pressure to replace expensive human labor with AI systems. Economic forces encourage deploying new AI systems rapidly (at the cost of safety testing), making them ever more powerful, building general (and potentially more powerful) rather than narrow (and potentially more controllable) systems, and granting AI systems ever more autonomy to take action in the world. AI systems could increasingly control our society's key productive, managerial, and military capacities. By then, we must have figured out how to design systems that reliably behave as we intend. Otherwise, powerful AI systems pursuing goals at odds with humanity could lead to human marginalization or even extinction.

Moreover, many researchers expect us to develop such advanced AI systems very soon. Companies are pouring ever more money and talent into AI development, while the cost of computing power keeps decreasing. Simply using more computing power and data has reliably produced more capable AI systems, and AI algorithms themselves have also improved. After seeing the rapid progress in AI over the past five years, Geoffrey Hinton, a founding father of modern AI, now expects that AI could broadly surpass human cognitive abilities—such as persuasion, planning, and scientific reasoning—within five to 20 years. Many in the AI research community share similar time horizons.

Twenty years is short; in discussions about the climate crisis, we regularly consider effects on such timescales. And just as climate change requires us to confront air pollution and ongoing natural disasters while also preventing future, even more extreme weather events, we must tackle the various AI risks and harms simultaneously. Even if we are a couple of decades away from AI systems that are smarter than humans, that is little time to create the complex global governance structures this technology will require.

Work to address ongoing harms sometimes differs from work to manage extinction risk. We do not wish to erase nuance or potential differences between the perspectives. Yet, given common root causes, there is much room for collaboration and cooperation.

Four possible interventions

Many interventions could address these root causes. First, the public currently has little to no say over which models are built and how they are deployed. For example, those most affected by welfare systems have had little say in their automation. Private companies decide which risks are acceptable, because they have free rein over computing power and data, the key levers of AI development. Stronger democratic oversight of the supply of computing power and data would enhance public control over AI development and its associated risks. For example, governments could monitor when large amounts of computing power are being used, and for what purpose.

Second, a strong auditing regime, in which independent third parties scrutinize the practices and development processes of AI labs, would help reduce risks overall. Auditing has already surfaced previously overlooked harms, such as bias in facial recognition, and could catch future failure modes such as AI systems subverting human control or replicating autonomously. A standards body could ensure that AI systems, both before and after deployment, meet specified safety guidelines. Domestic agencies could enforce compliance, while an international organization could set universal standards.

Third, we should require meaningful human oversight of critical AI decisions, and avoid very-high-risk use cases such as lethal autonomous weapons. Uncertainty about AI systems' current and near-future capabilities means we should proceed cautiously around their deployment in critical domains.

Fourth, we should rebalance the funding going into AI: we should urge companies to spend less on making AI systems smarter and more on making them safer and more transparent, and on studying their social impacts. While it is good that industry labs fund such research, this funding is liable to change with market conditions. Much more public funding for non-industry groups is required.

The development of increasingly powerful AI systems poses increasingly vast societal-scale risks. The complexity of these risks requires cooperation between people with potentially different motivations but with a shared interest in ensuring that AI development proceeds in a just and beneficial way.
