Illustration by Kathleen Fu for TIME

Just before ChatGPT was placed before the public in November 2022, OpenAI’s head of sales was informed that the company would be quietly releasing a “low-key research preview,” which wouldn’t affect sales. Over 180 million users later, it’s fair to say that forecasting the world of AI is difficult.

But it wasn’t only ChatGPT’s success that was hard to foresee. An escalating AI race between companies and between countries; a U.S. Senate forum on the topic of “Doomsday Scenarios”; a dramatic boardroom ouster at the world’s most prominent AI company—these events would have been extremely difficult to anticipate a year ago.

AI’s rapid technological advancement—and the wild and varied reactions to it—make predicting the future of the field not for the faint of heart. But TIME spoke with five experts, who, undaunted by the task, bravely shared their ideas about the year ahead in AI.

Electricity-Hungry Data Centers

In 2023, the semiconductor-chip shortage became the first physical manifestation of the AI boom. In 2024, electricity demands will become the second, predicts Dan Hendrycks, executive director of the Center for AI Safety, a San Francisco–based nonprofit.

Data centers account for roughly 1% of the world’s electricity usage. In Ireland—which large tech companies favor partly for its low tax rates—data centers use almost a fifth of all electricity. Around 20% of global data-center capacity is currently used for AI. This proportion is likely to increase sharply in 2024, as AI systems are trained and run on ever larger amounts of computational power.

Companies will try, and indeed are already trying, to make deals with governments to secure a power supply, Hendrycks suggests. “You need the support of the government in some capacity to be getting that level of electricity. I won’t say who, but some of these AI companies will speak with leaders of these states and try and make agreements about energy, because their energy needs will just keep growing so substantially.”

Because oil is one of the most straightforward ways to power data centers, oil-rich countries in the Middle East, with their willingness to “pour money in weirder investments,” are likely to become more significant in the global competition for AI superiority, says Hendrycks. “In the way that we think of the U.S. and China as being relevant, probably the third most relevant [region] would be the Middle East,” Hendrycks says of the AI landscape next year.

The Coming “Trough of Disillusionment”

AI hype may have reached its high-water mark in 2023, says Rumman Chowdhury, CEO and co-founder of Humane Intelligence, an AI-testing nonprofit. “We will hit the trough of disillusionment in 2024,” she predicts. “We’re going to realize that this actually isn’t this earth-shattering technology that we’ve been made to believe it is.”

The most capable AI models are “immense computational achievements,” and the next generation is likely to be more capable still, she says. But the hype will collapse because no one has “figured out what large language models are useful for.”

Still, the AI hype has been beneficial in that it’s spurred policymakers into action, with immediate consequences for technologists, says Chowdhury. “Everybody’s going to pay attention to what’s happening in politics in a way that nobody has before.” For example, although the E.U.’s comprehensive AI law—the E.U. AI Act—won’t come into force until 2026, companies will need to start preparing now, she says. “Two years is going to pass very, very quickly in policy land.”

Building Bigger Models

In December 2023, Google DeepMind announced its latest AI model, Gemini Ultra. Google DeepMind did not reveal the amount of computational power, or “compute,” used to train the model, but Epoch, an AI forecasting organization, estimates it was trained using 90 septillion floating-point operations—likely more than any other model yet built.

Gemini Ultra slightly outperformed OpenAI’s GPT-4, which Epoch estimates was trained using roughly one-fourth as much compute. Researchers have mapped the relationships between the amount of compute used to train a model and the model’s ability to complete a given task, like predicting the next word, and have found that exponential increases in training compute will result in linear increases in performance. In other words, Gemini Ultra is about as good as researchers would have predicted, says Epoch’s director, Jaime Sevilla.
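The relationship Sevilla describes can be sketched with a toy calculation. The FLOP figures below are the Epoch estimates quoted above; the power-law form of the loss curve is the standard scaling-law assumption, not a claim made in this article.

```python
import math

# Epoch's training-compute estimates cited above:
# Gemini Ultra ~ 90 septillion (9e25) FLOP; GPT-4 roughly one-fourth of that.
gemini_ultra_flop = 9e25
gpt4_flop = gemini_ultra_flop / 4

# Scaling laws find that loss falls roughly as a power law in compute,
# so benchmark performance improves ~linearly in log(compute).
# A 4x jump in compute is therefore a fixed additive step on that
# log scale, not a 4x leap in capability.
def log10_compute(flop: float) -> float:
    return math.log10(flop)

step = log10_compute(gemini_ultra_flop) - log10_compute(gpt4_flop)
print(f"Compute gap: {step:.2f} orders of magnitude")  # log10(4) ~ 0.60
```

On this view, quadrupling compute buys only about 0.6 orders of magnitude on the log scale, which is why a model like Gemini Ultra lands close to where the trend line predicts rather than far beyond it.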

Companies will release larger models in 2024, which will be incrementally better again, he says. Chatbots will make fewer mistakes, for example. But “from the user’s perspective, it’s not gonna be like ‘Wow, this is like a totally new game-changing capability.’”

OpenAI CEO Sam Altman expects something similar—speaking on the Joe Rogan podcast in October, he predicted that the public reaction to future AI models would be akin to that for new iPhone releases.

Getting Down to Details

In the past decade, governments and companies churned out countless lists of ethical principles and strategies. But in 2023, spurred by the release of ChatGPT, things have become more definite, says Inioluwa Deborah Raji, a technologist at the Mozilla Foundation, a global internet freedom nonprofit. “It seems as if there’s finally this shift toward more concreteness … I think that that’s long overdue.”

Raji hopes that will continue. “I think things will become even more concrete. It would be disappointing if we rolled back on some of the progress we made in 2023.”

However, the policy response, because it was catalyzed by the release of OpenAI’s ChatGPT, has been overly focused on generative AI, says Raji. “There has been very little attention paid toward facial recognition, risk assessment, even some of the online-recommendation AI technologies underlying various platforms.”

Fortunately, President Biden’s Executive Order instructed government agencies to develop plans for dealing with AI. For example, the order requires the Department of Health and Human Services to publish a plan that addresses the use of AI in public services and benefits. This is the kind of detailed, unglamorous work that is required, argues Raji. “Hopefully next year, the agencies and the domain-specific regulators will have built a little bit more awareness.”

A Growing Divide

The International Telecommunication Union estimates that around 2.6 billion people—roughly a third of the world’s population—can’t access the internet. This digital divide may define who can benefit from AI, worries Bolor-Erdene Battsengel, a researcher at the University of Oxford and the former Mongolian Vice Minister of Digital Development and Communications. “We have a lot of existing inequalities—education inequality, income inequality, gender inequality. If we add the digital divide, the inequality gap will be impossible to narrow down.”

Even when users in developing countries are able to access AI, it is rarely developed with their needs in mind, says Battsengel. “The technology is being developed, or algorithms are being written, by engineers who are mostly from the U.S. or from the developed countries.” The response on the part of the wealthier countries developing AI has so far been inadequate, says Battsengel. “I really haven’t seen any initiative, in terms of ensuring inclusivity, equality from AI’s main stakeholders yet. I’m really hoping that there will be.”

Perhaps most concerning is the threat that AI-generated misinformation could pose to democracy: 2024 is shaping up as the most significant election year in modern history, and elections in Bangladesh have reportedly been disrupted already. “Deepfakes will be used enormously, adding to existing misinformation and disinformation,” predicts Battsengel. “One of the things I really hope to see from the main tech stakeholders is: What is the technical way to prevent that, or at least recognize that it’s a deepfake?”
