On Wednesday, luminaries in the field of AI gathered at Serpentine North, a former gunpowder store turned exhibition space, for the inaugural TIME100 Impact Dinner London. Following a similar event held in San Francisco last month, the dinner convened influential leaders, experts, and honorees of TIME’s 2023 and 2024 lists of the 100 Most Influential People in AI—all of whom are playing a role in shaping the future of the technology.
Following a discussion between TIME’s CEO Jessica Sibley and executives from the event’s sponsors—Rosanne Kincaid-Smith, group chief operating officer at Northern Data Group, and Jaap Zuiderveld, Nvidia’s VP of Europe, the Middle East, and Africa—and after the main course had been served, attention turned to a panel discussion.
The panel featured TIME100 AI honorees Jade Leung, CTO at the U.K. AI Safety Institute, an institution established last year to evaluate the capabilities of cutting-edge AI models; Victor Riparbelli, CEO and co-founder of the U.K.-based AI video communications company Synthesia; and Abeba Birhane, a cognitive scientist and adjunct assistant professor at the School of Computer Science and Statistics at Trinity College Dublin, whose research focuses on auditing AI models to uncover empirical harms. Moderated by TIME senior editor Ayesha Javed, the discussion focused on the current state of AI and its associated challenges, the question of who bears responsibility for AI’s impacts, and the potential of AI-generated videos to transform how we communicate.
The panelists’ views on the risks posed by AI reflected their various focus areas. Leung, whose work involves assessing whether cutting-edge AI models could be used to facilitate cyber, biological, or chemical attacks, and evaluating models more broadly for other harmful capabilities, focused on the need to “get our heads around the empirical data that will tell us much more about what's coming down the pike and what kind of risks are associated with it.”
Birhane, meanwhile, emphasized what she sees as the “massive hype” around AI’s capabilities and potential to pose existential risk. “These models don’t actually live up to their claims,” she said, arguing that “AI is not just computational calculations. It's the entire pipeline that makes it possible to build and to sustain systems.” As examples, she cited the importance of paying attention to where data comes from, the environmental impacts of AI systems (particularly their energy and water use), and the underpaid labor of data labelers. “There has to be an incentive for both big companies and for startups to do thorough evaluations on not just the models themselves, but the entire AI pipeline,” she said. Riparbelli suggested that both “fixing the problems already in society today” and thinking about “Terminator-style scenarios” are worth paying attention to.
Panelists agreed on the vital importance of evaluations for AI systems, both to understand their capabilities and to discern their shortfalls on issues such as the perpetuation of prejudice. Because of the complexity of the technology and the speed at which the field is moving, “best practices for how you deal with different safety challenges change very quickly,” Leung said, pointing to a “big asymmetry between what is known publicly to academics and to civil society, and what is known within these companies themselves.”
The panelists further agreed that both companies and governments have a role to play in minimizing the risks posed by AI. “There’s a huge onus on companies to continue to innovate on safety practices,” said Leung. Riparbelli agreed, suggesting companies may have a “moral imperative” to ensure their systems are safe. At the same time, “governments have to play a role here. That's completely non-negotiable,” said Leung.
Equally, Birhane was clear that “effective regulation” based on “empirical evidence” is necessary. “A lot of governments and policymakers see AI as an opportunity, a way to develop the economy for financial gain,” she said, pointing to tensions between economic incentives and the interests of disadvantaged groups. “Governments need to see evaluations and regulation as a mechanism to create better AI systems, to benefit the general public and people at the bottom of society.”
When it comes to global governance, Leung emphasized the need for clarity on what kinds of guardrails would be most desirable, from both a technical and policy perspective. “What are the best practices, standards, and protocols that we want to harmonize across jurisdictions?” she asked. “It’s not a sufficiently resourced question.” Still, Leung pointed to the fact that China was party to last year’s AI Safety Summit hosted by the U.K. as cause for optimism. “It’s very important to make sure that they’re around the table,” she said.
One concrete area where the advance of AI capabilities can be observed in real time is AI-generated video. In a synthetic video created by his company’s technology, Riparbelli’s AI double declared that “text as a technology is ultimately transitory and will become a relic of the past.” Expanding on the thought, the real Riparbelli said: “We've always strived towards more intuitive, direct ways of communication. Text was the original way we could store and encode information and share time and space. Now we live in a world where for most consumers, at least, they prefer to watch and listen to their content.”
He envisions a world where AI bridges the gap between text, which is quick to create, and video, which is more labor-intensive but also more engaging. AI will “enable anyone to create a Hollywood film from their bedroom without needing more than their imagination,” he said. The technology carries obvious potential for abuse, for example through deepfakes or the spread of misinformation, but Riparbelli emphasized that his company takes steps to prevent misuse, noting that “every video, before it gets generated, goes through a content moderation process where we make sure it fits within our content policies.”
Riparbelli suggested that rather than taking a “technology-centric” approach to AI regulation, policymakers should focus on designing rules that reduce harmful outcomes. “Let's focus on the things we don't want to happen and regulate around those,” he said.
The TIME100 Impact Dinner London: Leaders Shaping the Future of AI was presented by Northern Data Group and Nvidia Europe.