
Kay Firth-Butterfield has worked at the intersection of accountability and AI for over a decade and is excited about the future. “I’m not an AI pessimist. I believe that if we get it right, it can open so many beneficial doors,” she says. But she’s still cautious. After doctors diagnosed her with breast cancer last year, she was grateful they did not rely too heavily on AI, though it’s increasingly used to evaluate mammograms and MRIs, and even in planning treatment. While Firth-Butterfield, who is now cured, worried less about whether a machine was reading her mammogram, she noted that over-reliance on current AI models can be problematic, as they sometimes present incorrect information. Her surgeons agreed, she says.

A former judge and professor, Firth-Butterfield has emerged as one of the world’s leading experts on responsible AI, shaping efforts to ensure these systems remain accountable and transparent. Last April, she ended a five-and-a-half-year stint as the head of AI and Machine Learning at the World Economic Forum, where she crafted frameworks and playbooks for companies, countries and other organizations to steer responsible development and use of AI. Her work advising the U.K. and Brazil on creating such AI systems made its way into law. “If you’re a government and you’re using artificial intelligence with your citizens, then you have to be able to explain to your citizens how it is being used,” she says. In 2016, Firth-Butterfield co-founded the Responsible AI Institute, which provides tools for organizations to build safe and reliable AI systems. She also serves on a council advising the U.S. Government Accountability Office on AI matters related to science and technology, and on an advisory board for UNESCO’s International Research Centre on AI.

Nowadays, she also runs Good Tech Advisory—working with corporations, governments, NGOs and media to implement AI responsibly. That means helping set up guidelines for the use of AI-reliant technology to minimize potential harm, while maximizing benefits and ensuring legal compliance.

As CEO of Good Tech Advisory, Firth-Butterfield has been helping hospitals in the U.S. navigate AI’s potential uses, including for reading medical images and determining diagnoses. Many don’t have clear guidelines about how staff can use programs like ChatGPT, even as Firth-Butterfield points out these tools can often provide inaccurate information. “Those companies are wrestling with some really serious responsible AI choices,” she says. Doctors using AI to efficiently type notes and handle administrative tasks can free up more time for patient care. But relying on AI to come up with a diagnosis in high-pressure situations could be dangerous. And if a patient becomes sicker or dies, the question of who is liable becomes an issue.

When AI is not used responsibly, people can get hurt, and it’s disproportionately women and people of color, Firth-Butterfield notes. Biased algorithms could prevent a worker from getting hired, unfairly reject mortgage applications, or make incorrect decisions about security threats based on facial recognition, for example.

At the core of Firth-Butterfield’s advocacy is understanding how AI impacts the most vulnerable members of society. At the WEF, she worked with UNICEF to research the use of AI with children, and organized a Smart Toy Award that urged thoughtful implementation. “We are allowing our children to play with toys that are enabled by artificial intelligence but we have no understanding of what our children are learning…or where their data is going,” she says.

Forbidding AI from being used in toys or classrooms as a way to protect children from its potential risks isn't the answer, says Firth-Butterfield. “We do need children to be using AI in education because they're going to be using it in their work. So we have to find a responsible way of allowing that interaction between machine and human,” she says. But teachers need to stay in charge. “We can’t just give education to AI; we need to keep humans in the loop,” she says. Teachers might rely on AI for back-end administration, freeing up time to focus more on helping their students. 

It’s crucial to pay close attention to how the systems are constructed, but Firth-Butterfield is also concerned about who gets to participate. While more than 100 million people use ChatGPT, almost 3 billion people still lack access to the internet. “We are increasing the digital divide at a huge rate—not just between the Global North and the Global South but also within countries,” she says. Although AI has the potential to revolutionize teaching in schools and the treatment of medical patients, much of the world may not feel its effects. “We tend to sit in our ivory towers talking about how AI is going to do everything brilliantly and we don’t remember that much of the world hasn’t been part of the internet revolution,” she says.

Our future is at stake in these decisions about how people use and rely on AI, she says: “It’s about whether we as humans build the society that we want.”

This profile is published as a part of TIME’s TIME100 Impact Awards initiative, which recognizes leaders from across the world who are driving change in their communities and industries. The next TIME100 Impact Awards ceremony will be held on Feb. 11 in Dubai.

Write to Sanya Mansoor at sanya.mansoor@time.com.