Ethan Mollick

Author, Co-Intelligence

While many discussions about artificial intelligence speculate on its future, Ethan Mollick is focused on how people can wield AI tools right now. Mollick is a Wharton professor and an author whose book Co-Intelligence is a New York Times best-seller. He also writes a newsletter, One Useful Thing, which has 175,000 subscribers and aims to help the average person get better at using AI in their daily lives. Mollick argues that people should adopt AI into every facet of their lives to learn where it can be used best.

TIME spoke with Mollick about finding primary use cases, bias, and getting Rickrolled by Claude. Here are excerpts from the conversation.

One of the key principles you talk about is that people should “always invite AI to the table.” What do you mean by that? 

There are a couple of things about AI that make it super weird to work with. First, it's jagged: it's really good at some stuff you wouldn't expect, and really bad at other stuff. Second, nobody can tell you in advance what it's good or bad for. Everyone's waiting for an instruction manual that's not forthcoming.

So the key is experimentation. People always ask, “Where do I start?” The answer is that you start with what you do in your life. You bring it to all your tasks, and you'll learn very quickly what it's good or bad at. You'll find out, “Oh, it's really great for transcription, and I like asking it questions, but I wouldn't trust it to write my introduction.”

That'll be different for everybody. My rule of thumb is that you need about 10 hours with a frontier model to get your hands around it. 

What have you used AI models for in the last 24 hours?

All sorts of stuff, big and small. Let's take the smallest possible use: I copied some text over from a PDF, and the capitalization and spacing and line breaks got all messed up. Claude fixed it. It saved me five or six minutes that would otherwise have absolutely been a waste of my time.

On the play side, every time I have an idea, I throw it in. One I tried yesterday was, “Can the AI Rickroll me?” It's actually an interesting test, because Rickrolling is about deception. It turns out, Claude does a very good job of Rickrolling.

And then there's the everyday stuff: “Hey, I'm making this meal. What sauce could I make quickly that would go well with it, based on what's in my fridge?”

We're also working with it on a request for proposal (RFP) that's going out for an open source software project we’re doing research on. And it fixed a lot of the RFP. So it’s a lot of stuff all at once. 

Given all of these tasks it can complete, do you feel like AI is very underutilized in the general public? 

Yeah. A lot of the debate ends up being cosmic: “Will humanity perish or achieve transcendence?” That's an important question. But to me, it's the medium- and short-term effects, in areas like education and work, that are really interesting. And there are a lot of capabilities there to help us.

The first question doesn't have to be, “Can we replace all the teachers?” It's a bad question to ask. What we should be asking is: as a teacher, how would you use this to help things out, and how do we know it's helping? I think it is vastly underused in these cases, because there's a stigma against using these systems.

What types of people is AI helping the most right now? 

These systems work like people, even though they're not people. So those of us who are good at understanding others are often really good at working with AI. It's much more like teaching or managing or instructing than it is like coding. Software developers, on the other hand, struggle: they're used to deterministic systems.

One of the major short-term concerns around AI is about bias. Have you seen biases manifest in your experimentation? 

I haven't seen it as much, but that's not necessarily a good indicator. Bias is often subtle, and the AI is not going to be overtly racist or overtly sexist, because there are training systems built around it. However, we know from research that, for example, if you ask the AI to write a letter of recommendation for a woman, it's more likely to mention that she's warm, and if it's for a man, it's more likely to mention that he's competent. And by the way, if you tell it to be unbiased, the bias level drops.

We don't really know the full set of biases, and the ways of solving them are quite crude right now. It's something you have to be really cautious about, and you have to learn when to use the system and when not to. I would never use it in a decision-making capacity for hiring, because we know it's biased in those cases.

But for helping with my writing, or having a dialogue with a student, it's very unlikely to trigger these kinds of bias issues. And so you have to make decisions about where the risk is. This is not a risk-free system. But humans are risky to work with, too.

You’ve also researched how some AIs can actually sniff out misinformation better than humans. Do you think the fears around AI spreading misinformation are overblown? 

Everything is going to happen all at once. Let's take one paper as an example of both the problem and the solution: it found that one of the few things that robustly lowers belief in conspiracy theories is getting into a debate with an AI. That's amazing.

So on one hand, we can lower conspiracy beliefs. But on the other hand, we have an AI that can persuade people that their deeply held beliefs are wrong. It’s the same tool, the same effect being used for evil and good.

A recent study found that 77% of employees using AI say it has added to their workload and created challenges, while also raising expectations for their productivity. What do you make of that?

I don’t know the study, so I can’t speak to it directly. But it’s reasonable to say that systems change is hard. People still need to learn how to use these things, and chatbots are not a natural way to get work done. So if people are told, “Now you have AI, you can get more work done,” but aren’t told how to do it or aren’t given clear policies or rules, it makes complete sense they are feeling stressed. 

I think there is both a vast undervaluing of what AI can do right now, and an overvaluing of how quickly those changes can happen in real-world situations without giving people proper training and tools.

What else do you hope people understand about this moment in AI? 

The main thing I want to impart is that we have agency right now, especially over how we're using AI in our jobs. I'm worried that if the response is to ban it or not talk about it, we're only going to see bad use cases appear and none of the good ones.

Everything is going to happen all at once. We need to be modeling good examples.
