AI is not some abstract, futurist thought experiment for the scholar Kate Crawford: it’s a series of cold, physical truths. For the past 20 years, Crawford has studied the environmental impact of large-scale data systems and how they shape our social and political systems. Crawford, who is 50 and based in New York, has written books, co-founded the AI Now Institute to produce research on the concentration of power in the tech industry, and advised policymakers around the world. Her collaborative art piece Anatomy of an AI System, which explores the life cycle of an Amazon Echo smart speaker, is even on view at the Museum of Modern Art. (This interview has been condensed and edited for clarity.)
TIME: You wrote a book about the environmental impact of AI, called Atlas of AI. What were your main takeaways from writing it?
Kate Crawford: The big takeaway is that this idea of AI as being ethereal, mathematical algorithms in the cloud is absolutely not the case. In actual fact, the only way they work is by extracting vast amounts of data, human labor, and natural resources, which includes energy, water, and minerals. The book makes the case that AI is the extractive industry of the 21st century.
And with generative AI, it’s even more magnified. The amount of data has gone up. The amount of hidden human labor, particularly in terms of reinforcement learning with human feedback, has gone up. And the amount of energy and water used for generative AI is somewhere between 1,000 and 5,000 times more than for traditional AI.
Why have you focused on training data in Knowing Machines, a research project that explores how datasets perceive the world?
The training-data layer is the foundational building block of how the world is being represented, and in some cases not represented. It is the alphabet from which the stories are built.
In so many cases, we’ve come to this place in AI history where people just scrape the entire internet and say, “Well, that’s the world.” But here be dragons if you claim the internet is what human culture is. Human culture is much more diverse than the internet would have you believe. So we’ve really reached this point where I think we need much greater care, much greater custodianship, and much greater critical awareness of how we train AI to represent the world.
Why is it important for you to incorporate art into your research and your output?
These issues are too important to keep within academic journals and within academic conferences, because fundamentally, these systems are remaking democracy. They’re remaking the planet, and they’re remaking how we understand each other and ourselves. Given the enormous social and political impact, this has to be a set of issues and questions that are as public as possible.
So when I collaborate with artists and create works about these sorts of questions, it’s in order to reach different audiences and inspire bigger public debates. For example, the collaboration I did with Vladan Joler, called Anatomy of an AI System, gives people a snapshot of how AI systems really function across their entire life cycle. It begins with the sort of birth of an AI system in the ground, at the mineralogical layer: What’s happening with mining, smelting, container shipping, and supply chains? And then into the data pipelines. Where is data coming from? How is it being analyzed and understood?
And then all the way to the end of the life cycle, when these devices are discarded, generally in fewer than three years, and end up in e-waste dumps in Ghana and Pakistan, where they poison water supplies and create all kinds of environmental havoc.
What part of the AI ecosystem are you studying now?
I have a book coming out next year. And a series of big research findings will be coming out of the Knowing Machines project, in which we’ve been studying generative-AI systems from a sociological perspective, a historical perspective, and a legal perspective, as well as a technical perspective. So we’re bringing these four big communities together to study how generative AI is going to have this enormous impact.
My belief is that generative AI is being underestimated in terms of the social and political impact it’s going to have. I say that in full knowledge of the fact that it’s in every newspaper and everyone’s talking about it. But I still think we’re thinking too small in terms of the changes that it’s going to produce.
At its core, people fail to understand the way it is literally going to change how we see and understand the world. And that is a very foundational thing that goes beyond a question of deepfakes or its profound labor impacts.