The media theorist and author on how AI is modeling itself after human behavior.
Few people understand the internet better than Douglas Rushkoff. Lots of people have figured out how to make money online, or how to build influence or create political momentum. But as a media theorist, author, and professor, Rushkoff has built his career helping us all understand not just what we do on the internet, but what the internet is doing to us.
He’s a Professor of Media Theory and Digital Economics at City University of New York in Queens. And after writing more than 20 books, and hosting over 250 episodes of his Team Human podcast, Rushkoff has emerged as a biographer of the internet, chronicling the evolution of the digital landscape where most of us spend most of our time. In fact, many of the phrases we use to talk about the internet, from “viral media” to “social currency,” were coined by Rushkoff himself.
Which is why Rushkoff is the perfect person to help us understand this unique moment in time, when the proliferation of AI technology is forcing us to interrogate everything we know about how humans and computers interact.
And this happens to be an especially exciting week for me to share this conversation with you all. Earlier today, TIME released its first-ever TIME100 AI list, which highlights the leaders, policymakers, artists and entrepreneurs advancing major conversations about how AI is reshaping the world.
Tune in every Thursday, and join us as we continue to explore the minds that shape our world. You can listen to the full episode in the player above, but here are a handful of excerpts from our conversation, which have been condensed and edited for clarity.
On how technology defines our worldviews from a young age:
The way that you make sense of the world is largely defined by the media environment that you grew up in—when you’re 5, 6, 7, 8 years old. If you’re raised in a scribal world, and that’s the way you make sense of the world, it’s going to be different than if you’re raised in a printing press world or a radio world or a television world.
So we’ve moved recently from a television media environment to a digital media environment, and they’re really different. Television is the whole world together watching the moon landing. Digital is very different. Digital is discrete. Everything’s broken up into its own thing.
On why “tech bros” devalue lived experience:
Saying, “Books are for suckers because you could get it in 600 words,” is like saying, “Life is for suckers because you could get it in 6,000 words.” Are you reading a book for the data?
Because ultimately they believe that digital representation is more important than matter. That’s an “ends justifies the means” logic to discount the experience of everything that’s actually happening now, for a digital fantasy of a tech-bro future.
On how humans can raise good AI:
These are not thinking things. They’re probability engines. They are creating the most probable responses, word by word, based on what’s come before. They are language models. And the punchline of this is if they really are using our behaviors to model what they do, and how they interact with us, and what they’re going for, then the only way to raise good AIs is for us to start acting good ourselves.
It’s really what it comes down to. When you’re raising a kid, it’s not what you tell your kid. It’s what your kid witnesses.
Doesn’t matter if you tell them, “Be nice to other people.” They do what you do. Same with AIs. They don’t care what we tell them. They’re using the entire database of every response that we’ve had, and in online situations, mind you, they don’t see the real world. They see the internet. So how we behave on the internet is how we’re teaching them to be. And that’s a little scary, right?
Especially if we give them more and more power. If we let them decide who gets to live in this neighborhood? Who gets this loan? Who gets this job? How long is my prison sentence? Those are decisions that we’re already giving to these algorithms. That’s of concern to me if they’re using the data set of who we are.