It’s becoming clearer that the next big wave of changes to how and where many of us work will be sparked by new applications of artificial intelligence.
Until now, the impact of AI on our jobs has been abstract for most workers, something that seemed to be out on the distant horizon. But the recent acceleration of applications around so-called generative AI is showing us how quickly and broadly our work will change. Some real-world examples:
- A doctor is using AI to write letters to insurance companies requesting approval for treatments.
- Engineers are using AI to generate computer code, cutting development time by 20% in one experiment Deloitte conducted.
- A San Francisco man used AI tools to write and illustrate a children’s book in a weekend, then published it on Amazon.
The latest wave of AI tools generally doesn’t require technical skills and includes image generators, video editors, podcast audio editors, email-reply generators, word processors, computer coding assistants, research-paper summarizers, marketing copywriters, and much more. OpenAI’s DALL-E and ChatGPT services give you a quick sense of the power behind these tools. What makes these generative AI applications both concerning and exciting is that they can generate seemingly original content based on simple prompts from users.
In the coming months and years, these capabilities will undoubtedly improve and make their way further into tools knowledge workers are already using, such as word processors and email services. As an example, at Charter, we’re already using AI tools every day to transcribe audio recordings and create illustrations to accompany articles. Other immediate applications include drafting memos and emails, doing research, and generating ideas, as AI presents users with novel ideas and information we might not have come up with on our own.
There remain many vital gaps and concerns. Generative AI is trained on large corpora of information, which encode the biases of that material. It usually doesn’t acknowledge or compensate creators for the work that it is built on—and changes to copyright law are urgently needed to recognize this. It’s prone to errors, some very obvious and some less so, and can be used to make so-called “deepfakes” that deceive people. I worry deeply that misinformation and spam will make the web unusable once AI can generate infinite quantities of content in this way. Teachers are rightly anxious that essay assignments—which can be completed instantly by free AI tools with impressive results—are no longer a reliable way to test for mastery of a subject.
I’m writing this on an AI-powered word processing service called Lex, which can suggest the next paragraph for you. To give you a sense of its power, it proposed this:
But the upside of generative AI is undeniable. It’s making us more efficient and creative, and opening access to new tools and new forms of expression. As with any technology, it will be up to all of us to slow down and think critically about how we use them and the effects they may have into the future.
That’s one point of view, for sure.
Techno-optimists predict that the jobs of most knowledge workers will change, though few will be entirely eliminated. As with most things, the pace of the change has critical implications. Will it be gradual, and akin to how Google search and the explosion of information on the web have altered how we do our jobs? Or will whole categories of workers, such as commercial illustrators and copywriters, effectively lose their livelihoods from one year to the next?
My expectation is that many of us will begin—and are already beginning—to encounter content on the web and in our workplaces that has been created by AI tools, without necessarily realizing it. Users of these tools risk creating more unnecessary content in our already information-overloaded workplaces, lowering the bar for the effort needed to create that extra email, memo, sales pitch, or presentation.
But these AI tools will surely streamline our work and take some of the annoyance and time out of drafting memos, replying to emails, and writing computer code. They’ll help us more quickly digest information such as research reports and make powerful multimedia editing tools usable through simple text commands. They’ll likely find rapid use in performance evaluation and project management, activities that rely heavily on generating and collating written prose.
These AI tools require different skills and aptitudes of workers. One obvious one is being a good editor, deciding what to keep and what needs to change. With AI text generators, you need to fact-check the output and make sure it all makes sense. The skill required is no longer generating the text but—presented with different plausible AI-generated strings of words about the topic—discerning which options are actually correct, or modifying them to be so.
With image generators, you need to decide which image works best, tweak queries to improve them, and spot visual flaws (like misshapen hands). Code generators require that you review the code to make sure it does what it’s supposed to and is easy to understand in the future. The skills are all—at least subtly—different from what’s required of humans to be good at generating content from scratch on their own. (The “prompt engineer” is a new role in this vein.)
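To make that review step concrete, here is a hypothetical sketch (not the output of any particular AI tool) of the kind of subtle flaw a human reviewer is now responsible for catching: code that looks right and usually works, but fails on an edge case the prompt never mentioned.

```python
# Hypothetical example: an assistant asked to "average the monthly sales
# figures" might plausibly generate something like this.
def average_sales(monthly_totals):
    # Looks correct, but divides by zero if no months have been recorded yet.
    return sum(monthly_totals) / len(monthly_totals)

# The reviewer's job is to spot that edge case and make the intent explicit.
def average_sales_reviewed(monthly_totals):
    """Return the mean of monthly_totals, or 0.0 if there are no months yet."""
    if not monthly_totals:
        return 0.0
    return sum(monthly_totals) / len(monthly_totals)
```

The point is not this particular bug but the shift in skill: the human contribution moves from writing the function to anticipating the inputs the generator was never told about.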
Theoretically, generative AI tools could streamline our work so that we can work fewer hours and reduce our burnout. (That was the sort of future economist John Maynard Keynes imagined in 1930 when he predicted a 15-hour workweek.)
As Lex’s AI says, it’s up to us to decide how this plays out. But I have little doubt all knowledge workplaces will need to begin wrestling with the impact of generative AI in 2023.