ChatGPT is making its way into workplaces, and organizations are grappling with what policies to put in place and how to prepare their workers.

We recently spoke with George Westerman, a senior lecturer at MIT’s Sloan School of Management, founder of the university’s skills-focused Global Opportunity Initiative, and the co-author of Leading Digital: Turning Technology Into Business Transformation, about the AI-related skills employers should be prioritizing right now. Here are excerpts from our conversation, edited for length and clarity:

To what extent should employers be treating AI literacy as an employee skill?

We want to think about levels of knowledge. You want to train people with enough knowledge to understand, without making them go overboard on some of the technical concepts. Some of your technical people may need to know the depths of how AI works, while the people who are non-technical are going to need a much better understanding of the AI-enabled tools: what the limits are, where the biases are. So not necessarily training them on AI, but training them on the tools instead. For example, if I write customer-service emails all day, I don’t need to know how AI works, but I probably need to know how the new tools coming out of AI are going to help me write those emails more easily. It’s like how some of us may need to know Excel, and others need to be real experts at pivot tables, but not everybody does.

The other thing they’re going to need more of is to brush up on the things that are uniquely human. The circle of what is uniquely human gets smaller and smaller over time, but certainly there’s empathy and there’s interaction and there’s creativity. If we’re still training skills for routine jobs, that’s not good for the employer or employee. What you want to do is more critical thinking, higher-order thinking, because that’s something where the humans can continue to contribute to the story as advanced automation takes over their routine stuff.


Presumably critical thinking will also become more important in learning to evaluate AI-generated information…

For everything that organizations can do to make their products and their marketing better, the bad people out there will find ways to make their products and their processes better, too. So you can expect that phishing emails will become more personalized and more professional-looking. You can expect that the people calling and trying to trick you into giving them passwords will start sounding an awful lot like people who are trusted in the organization. You can expect, moving beyond the text world, that the deepfake videos and voices will become pretty compelling. And so what that means is we need to help people be more aware of these threats that are coming down the line. It’s going to become more and more difficult.

You’ve done research around skills and technological change that led you to develop your human skills matrix. Can you explain what that is?

We kept hearing that you hire for hard skills, you fire for soft skills. And so of course, typical professor, I’m like, ‘Well, how do we define soft skills?’ We went out and we found these 41 different frameworks. There was commonality among them, but there were also some things that just felt like they weren’t quite there yet. So we synthesized those 41 frameworks, we did a lot of interviews with experts, and we came up with our human skills matrix. The top is doing, the bottom is leading, the left is me, the right is other. Just thinking in terms of those four quadrants—how I think, how I interact, how I manage myself, how I lead others—can be a really powerful way of thinking about what these human skills are. We say these are the skills that can help you thrive in a time of rapidly advancing technological change.

Do you see AI as a tool that could help with the development of these skills—for example, by giving employees a space to practice difficult conversations?

Absolutely. We haven’t known how to code it, but we’re starting to look into it now. A lot of these human skills, the best way to train or assess them is through role play, and we’ve developed role plays for some of these things, but that requires having a human actor involved. Can the computer be the other actor? Now you don’t need to get two people together. You can just practice these things whenever you want.

Some organizations are currently in the middle of creating policies around workers’ use of ChatGPT and similar technologies. What would be your recommendation as they figure out what their policy should look like?

The first response to anything unknown is always to say, ‘No, we don’t want to do that.’ So the first and wrong answer is just to block it. If you block it in-house, it’ll just be used out of the house, and then you won’t have any idea what’s happening there. Number two, you might want to set up some policy just to say, ‘Here are the dangers that can happen. Here’s what’s acceptable, and here’s what’s not.’ Get that out pretty quickly so that people have some idea what the right way is. And at that point, once you get better at it, then you can start thinking about other policies, other ways you might use this.

This stuff is really pretty early. I think it’s too early to create a real GPT policy yet, other than to say no, and I think no is the wrong answer. You just want to help people get comfortable with what’s possible as you get clearer on what specifically you want to do. You want to put out guidelines, help people understand where the swim lanes are, so they can at least understand if they’re going too far one way or the other. And then put in the more specific policy guidance once we all know what we’re doing. That could be a while. Certainly it’s not in the next couple of weeks or months.

Read a full transcript of our conversation.
