There’s no shortage of concern about the speed with which some of the world’s top artificial intelligence research labs are rolling out new AI tools that could change the way we live and work. The release of generative AI tools like ChatGPT to the public has prompted consternation about privacy and the spread of misinformation and bias. Not long after Microsoft released an AI-powered version of its search tool, Bing, to a select group of users in February, Bing threatened a philosophy professor with blackmail.
A month later, some of the biggest names in tech signed an open letter urging the world’s leading artificial intelligence labs to pause training their super-powerful computer systems for six months, arguing that recent advances in AI present “profound risks to society and humanity.” In an op-ed in TIME the same day, Eliezer Yudkowsky, a decision theorist who leads the nonprofit Machine Intelligence Research Institute, urged the labs to shut down their research entirely. “Many researchers steeped in these issues, including myself, expect that the most likely result of building a superhumanly smart AI, under anything remotely like the current circumstances, is that literally everyone on Earth will die,” he wrote.
But Microsoft’s CEO Satya Nadella isn’t worried. Microsoft has put at least $13 billion into OpenAI, the lab that launched ChatGPT in November. Both Microsoft and Google rolled out products in March that they say will use AI to transform work—Google with AI-powered features in Workspace that auto-generate images, write and rewrite documents, and sort email; Microsoft with Copilot, an AI-based tool that works across its 365 suite of software (Word, Excel, Outlook) to summarize meetings, draft emails, and analyze data.
In a May 8 interview, Nadella said Microsoft has done its best to mitigate unintended consequences of AI. Copilot, for example, does not create content or send emails without a human first entering data that guides the AI’s response. Such AI solutions, he says, will do our drudge work, freeing us to do the creative tasks where humans excel.
While skeptics cite big concerns about job displacement with this new wave of AI, Nadella argues that AI can democratize access to new skills, helping displaced workers find new jobs. Anyone can use generative AI—essentially algorithms that can create new content, including emails, white papers, and even code—as a partner at work, even if they don’t have a traditional education. That means that more people can become software engineers or security analysts, because they’ll be working alongside skilled AI. Microsoft thinks the world can sustain a billion professional developers and that AI can help quicken that growth.
“That’s why AI is exciting,” he says. “We need new technology to help us with the economic growth that we enjoyed in the early part of the 20th century. What if we can have that type of economic growth this time around, but it’s much more even?”
It’s true that productivity growth has been stunted in the U.S. After rising at an average rate of 2.8% between 1948 and 1970, productivity growth slowed to just 1.4% over the last 15 years. One explanation for this slowdown, put forward by the economist Robert Gordon, is that we lack technological advancements on the scale of those that came before 1970, like indoor plumbing, commercial air travel, and the widespread adoption of communications technology. (The U.S. also saw a boost in productivity in the 1990s after the adoption of the PC.)
Nadella thinks AI might bring productivity growth back to the halcyon days of the mid-20th century. Yes, there are risks to deploying AI, he says—jobs may be displaced, people may have to go through transitions, and AI may learn biases because the information it learns from comes from real people, who themselves have biases. But the benefits of AI could outweigh the risks.
Microsoft’s own data suggests that the way we work today is broken. Microsoft’s Work Trends Index, released on May 9, shows that nearly two in three people polled say they struggle to find the time and energy to do their jobs. The index surveyed 31,000 workers across 31 markets in February and March of 2023, and found that across Microsoft’s 365 software, the average employee spends a whopping 57% of their time communicating, and the rest creating. As a result, creativity is lagging: nearly two in three managers say they are concerned about a lack of innovation or breakthroughs on their teams.
Of course, even if AI does alleviate some of the negative parts of people’s jobs, it might create new issues for much of the workforce in the process. AI can and probably will displace jobs. It can and probably will inject more misinformation into a world already full of it. It can and probably will exacerbate existing racial biases, since it learns from information already available in the world. And perhaps most pertinent for people already overwhelmed with information at work: it can and probably will create even more information for us to sift through.
“AI is both going to be responsible for helping us manage a lot more knowledge but also creating a lot more knowledge that we will have to handle,” says Rowan Curran, an analyst at Forrester who researches AI and machine learning.
Nadella contends that the benefits of AI—Microsoft’s AI, at least—will outweigh the potential negatives. After all, he says, Microsoft put humans at the center of AI design by choice, which in itself mitigates unintended consequences. And, he notes, there is already healthy dialogue between governments and companies about what kind of regulation there should be on AI. “At the end of the day, I think we will all be judged by one thing alone, which is, ‘Do the benefits far outweigh anything that are the societal consequences?’”
That said, Microsoft has actively resisted European Union efforts to regulate the use of AI. The EU’s Artificial Intelligence Act could ban certain uses of AI like facial recognition technologies, but Microsoft and Google have lobbied to ensure that “general purpose AI” of the kind that ChatGPT is based on is not covered in the regulations. Nadella said he was unfamiliar with what Microsoft may or may not be doing in Europe. He also refused to commit to any limits on the use of AI in military applications—relevant because competitor Google had to forgo a contract with the military that used AI to interpret video images and could have been used for drone strikes after thousands of workers signed a letter in protest.
Of course, we still don’t know the societal consequences of relatively unchecked AI usage. Critics say they could be dire, and that even AI developers do not understand the possibilities. Nadella says that this type of complexity or uncertainty isn’t unusual—we study science because we don’t fully understand it, after all.
Today’s AI, he argues, fulfills the promise of the modern computer industry, which was founded in part to create tools to help the human mind understand more of the world around it. In a seminal 1945 essay in The Atlantic, for example, Vannevar Bush, an engineer who led the U.S. Office of Scientific Research and Development, suggested that further human advancement could only come if man mechanized his records more completely.
AI may be the next step in this development, Nadella says, arguing that calls on companies to stop working with AI are, in some ways, calls to stop embracing economic growth. And this growth, because it’s so accessible, doesn’t have to be limited to Silicon Valley or even the U.S., but can be more widespread, he says.
“Creating technologies that allow us as humans to be able to increase our knowledge, do science, and help the human condition is what has been core to enlightenment,” he says. “And so therefore trying to say, ‘Well, now is the time to stop,’ that doesn’t seem the right approach.”
It may seem surprising that Microsoft, which not long ago seemed like it was falling behind competitors to its hoary Word and Outlook offerings, is talking about providing products that are the “core to enlightenment.” In the 2010s, as Apple and Google introduced new operating systems and web-based apps that stole Microsoft customers, some analysts were writing off Microsoft as a doomed business because of its focus on software that could easily be replaced, says Jason Wong, an analyst at Gartner. But Nadella, who took the helm in 2014, recognized the value of the cloud, and of artificial intelligence, Wong says. Microsoft’s share price has risen nearly tenfold since then; the S&P 500 has merely doubled over the same period.
The AI arms race is just beginning. For now, though, AI is showing up in something not very intimidating and perhaps even a little disappointing: applications like Word, Outlook, and Excel, the productivity software that many people have used for years and that would certainly be hard to imagine taking over the world and killing us all.
That said, researchers have admitted that they don’t always understand why generative AI produces the results it does. As Microsoft lifts the lid off its AI, releasing Copilot to more customers (its initial launch was limited to 20 enterprise customers) and integrating more AI, like the image generator DALL-E, into PowerPoint, what is so scary is that no one—not even Satya Nadella—knows what is going to happen next.