The coming upheaval from artificial intelligence is quite real. A study released this week finds that 80% of the US workforce could have at least 10% of their work affected by GPTs, with about one in five seeing at least half their tasks impacted.
GPT, short for generative pre-trained transformer, refers to models that use machine learning techniques to generate humanlike text, from answering questions to writing entire essays.
“The influence spans all wage levels, with higher-income jobs potentially facing greater exposure,” concludes the study by researchers Tyna Eloundou, Sam Manning, Pamela Mishkin, and Daniel Rock, whose affiliations span OpenAI, OpenResearch, and Wharton. “…these models could have notable economic, social, and policy implications.”
Indeed.
What’s a human to do? The hysteria in the press, partly fueled by journalists connecting artificial intelligence to their own job security, often fails to give average workers a sense of place or agency in this transition. The considerations right now must combine ethics and economics, and recent history does not showcase the best of government or industry in either regard.
“What we should focus on is advancing human talent to use AI more effectively,” says Vilas Dhar, president of the Patrick J. McGovern Foundation, a philanthropy focused on advancing social impact through the application of emerging technologies. “Let’s acknowledge exactly what AI is good at, but remember there are things that humans do that AI cannot. You have to actively design what infrastructure to put in place to fill those gaps.”
It’s not coming; it’s already here.
It’s worth reminding ourselves that this technological advancement has been decades in the making, and artificial intelligence already runs through many parts of our working lives. Think tagging photos, facial ID at the airport, or your email suggesting what you want to say next. AI is listening to us on the phone or Zoom. A product called Gong records, transcribes, and analyzes sales calls; suggests where you can close deals quicker; and helps businesses forecast and make smarter decisions. Another company, HireLogic, applies a similar approach to recruitment by “listening” in on interviews to better capture a candidate’s strengths and other relevant information, such as notable skills or concerns over an employer’s health plan.
“Even if we’re not recording the interview, you have to assume the candidate’s recording,” says HireLogic CEO Anirban Chakrabarti. “We’re living in a society, in a world, where everything that we do is in some way, shape, or form being tracked and covered.”
He warns that while the use of AI is still in a nascent stage, “we’re going to move to a world where AI will be a part of all our conversations.”
What changed over the last few months to create this explosion, this sense of inevitability about the robots joining us for, well, everything? For one, access. On Nov. 30, 2022, ChatGPT launched as a prototype and quickly created a frenzy for being detailed, articulate, instant, and easy to use.
“This accessibility sparks creativity because we’re no longer limited by the imagination of the tech community,” says Rumman Chowdhury, a Responsible AI fellow at Harvard University and CEO of Parity Consulting. “Net, this is good until we start looking at… who benefits and how?”
Job displacement is real—and hits Black and brown communities harder.
Another study to be released this morning by the Career Institute at the University of Phoenix details how different groups respond to this moment. Whereas 36% of all Americans surveyed say advancements in technology (e.g., automation, AI, robots) will negatively impact their career opportunities, that sentiment is higher among Gen Z (41%) and Millennials (41%) than Gen X (34%) and Boomers (28%). And Black (42%) and Latinx Americans (40%) see more negative impact than White Americans (34%).
There’s irony there, notes Danny Rojas, executive director of All Star Code, a nonprofit creating economic opportunity for young men of color. “The promise of robots, AI, and advanced tech is to bring us (‘the humans’) closer to simplicity, intelligence, and abundance in our daily lives,” he says. “Yet much of the data used to train machine learning algorithms does not consider ethnicity or race, perpetuating racial bias and inequity, with a disproportionate impact on communities of color.”
On the bright side, the ubiquity of AI and its application to daily life might be helpful for understanding the technology in the first place. “Easier access to these models via more user-friendly interfaces can have the effect of improving inclusivity by allowing for people who may not be native programmers,” says Chowdhury. “Training and reskilling become easier when we have AI-enabled assistants to help conceptualize, code, and visualize our materials. If anything, this highlights where the real problem is.”
All of us need to learn to redo our jobs—and improve our BS radar.
Where are the jobs? Keith Peiris, co-founder and CEO of the AI-powered storytelling platform Tome, rattles off where he’s been hiring most: machine learning and engineering, product, and design teams. Tome describes itself as more similar to Instagram than PowerPoint, allowing “users to skip time-intensive slide design and get straight to the story.” Peiris also notes a shift in engineering talent away from apps and into AI and machine learning.
“On the level of individual knowledge workers, who are typically compensated for overall output vs time worked, AI presents an opportunity to either work more efficiently and work less, or to work more efficiently and be even more productive overall,” he says. “Either can be a net good for everyone.”
Some of the promise of AI feels akin to social media, which began as a force for good (think #MeToo, #BlackLivesMatter, and the general democratization of information) yet devolved into a symbol of depravity (think the push to scale and profit, polarization, cyberbullying, and harassment).
Thus, the cries to proceed with caution are increasing. “Every entity that benefits from ChatGPT, reputationally, financially… should at the very least teach people about ChatGPT, to empower them back, to redistribute. We should have legal mandates around this,” says Julia Stoyanovich, professor and director of the Center for Responsible AI at NYU.
She cautions that the goal of many programs “is not to generate new text that is accurate. Or morally justifiable. It’s to sound like a human.” And so, Stoyanovich suggests, it’s entirely appropriate to treat them that way and call them out for what they sometimes are: “Bullshitters.”
Humans, their judgment and their humor, for the win.