I’m on the phone with a startup executive roleplaying as a frustrated customer, and a note along with a speedometer icon appears on my computer screen: Speaking slowly — You are speaking slower than usual. Try increasing your speaking pace. I speed up, and the message disappears, only to be replaced with another: Continuous Speaking — Finish your thought. Ask an open-ended question. Later, as the exec struggles to find the date of a made-up transaction, the software I’m using detects the strained note in his voice, and again decides I should intervene. A new message appears, this one accompanied by a pink heart: Empathy Cue — Think about how the customer is feeling. Try to relate.
This is Cogito, an artificial intelligence program designed to help customer service workers communicate more clearly, empathize with frustrated callers, and improve their overall performance. Cogito listens to the tone, pitch, word frequency and hundreds of other factors in customer service conversations. When it detects something is wrong — an irritated customer, or a call center agent who takes too long to respond or who sounds bored, tired, rushed or otherwise unpleasant — it displays a notification on the agent’s computer telling them to slow down, speed up, stop talking, start talking or try to sound more sympathetic.
Cogito has been rolled out to more than three dozen call centers across the U.S. over the past year, with clients like healthcare giants Humana and MetLife. The company behind the software, Cogito Inc., has raised more than $70 million from investors including Goldman Sachs, OpenView Venture Partners, and Salesforce Ventures (TIME is owned by Salesforce CEO Marc Benioff and Lynne Benioff). “We were 45 people a year ago,” says Cogito CEO Josh Feast, whose company now has 150 employees. “We’re growing really fast.”
For anyone who’s spent hours of their lives stuck on the phone with an unhelpful or downright rude customer service agent, Cogito may seem like a godsend. And while it sounds like the digital version of a pushy call center manager, customer service agents using the software who spoke with TIME generally said they found it helpful. But Cogito is indicative of a far bigger and more unsettling trend that extends well beyond call centers: artificial intelligence, once seen as a tool largely under our control, is beginning to tell humans what to do.
The research behind Cogito dates back to at least the early 2000s, when Alex “Sandy” Pentland, a computational social scientist at the Massachusetts Institute of Technology, was experimenting with ways to track and quantify human interactions. His experiments in what he called “reality mining” included using cell phones to plot the group dynamics of 100 MIT students and faculty, and attaching sensors to office workers to capture non-linguistic “speech features,” which the researchers said correlated to conversational “social signals” like enthusiasm or persuasiveness. Pentland distilled that work in his 2008 book Honest Signals, which argues that humans, as primates, display certain universal vocal quality “signals” removed from our actual words. When understood, those signals, he claimed, can reveal the underlying dynamics of human interactions — and even predict their outcomes.
In 2007, Pentland co-founded Cogito Inc. with Feast, a former consultant and software developer who attended Pentland’s courses while at MIT’s business school. The startup received government funding to experiment with voice analysis software designed to diagnose mental illness in veterans returning from combat. But it soon became apparent that there was broader commercial potential in AI software that could determine a speaker’s state of mind by analyzing their voice. The mental health unit was spun off into another venture, while Cogito focused on developing software for businesses.
Feast describes the company’s tech like this: “As primates, we signal our engagement, our level of distress, our attitude to interactions through the way that we interact.” The system examines those signals to determine how well a given interaction is going. When it senses something indicating a less-than-ideal conversation — an extended silence, say — the software gives a customer service agent what Feast calls a “behavioral nudge” telling them to do something differently. Between calls, agents can look at stats on their recent conversations, while managers receive regular assessments of their employees, like an athlete’s stat sheet.
At MetLife, where Cogito has been rolled out to 10 U.S. call centers over the past year, managers say that the program improved first call resolution metrics by 3.5% and customer satisfaction by 13%, and helped agents (who take an average of 700 calls a week) to have more “human” conversations. One employee says that Cogito helped her cut her average call time nearly in half, while another says that it helped her slow down when she was speaking.
“There are always people that are hesitant to change,” says Carrie-Ann Liquore, a former preschool teacher who’s now using Cogito in her role as a MetLife short-term disability case manager. “But the way that it was rolled out to us from our management, we were constantly reminded that it was a tool … to help us improve.”
Of course, in addition to helping workers have more successful calls, Cogito also gives managers vastly expanded insight into — and control over — their employees. Working at a call center has long been about meeting performance goals, and the agents who spoke with TIME seemed unbothered by that enhanced oversight. But the idea may prove more controversial in fields where workers aren’t used to being so closely tracked. For their part, executives using Cogito try to frame it as a helpful tool rather than a surveillance technology. “People don’t see it as big brother,” says Chris Smith, MetLife’s head of global operations. “They see it as another way we can help them do their jobs better.”
MetLife managers say Cogito’s performance data doesn’t factor into employee reviews, while Feast says the program’s tracking features aren’t meant to punish call center workers, but rather to help organizations spot problems and improve quickly. But whether that will remain the case, or hold true at other companies using the software, is less clear. Meredith Whittaker, a distinguished research scientist at New York University and co-director of the AI Now Institute, argues that Cogito could become another example of AI software that’s difficult for people to understand, but ends up having a massive impact on their lives regardless. Firms like HireVue sell AI systems designed to help job recruiters screen applicants through video submissions. Pymetrics makes software to judge applicants based on their performance in video games. Amazon is reported to have a shipping center system that tracks productivity and automatically fires workers if they fall below certain targets.
“What [Cogito] appears to do is to allow intensive surveillance of workers and a kind of a priori justification for potentially firing or disciplining workers based on the assessment of a smart machine that neither the worker nor the manager is able to contest or validate,” says Whittaker.
And as AI software becomes increasingly advanced, it could make it harder for many to get a job in the first place. By some estimates, AI is on track to displace hundreds of millions of workers over the next decade or two. That figure is often debated, but many observers agree that some level of mass employment disruption is inevitable, and it’s unclear what the people affected are supposed to do. Given the recent pace of advances in AI, it’s not far-fetched to imagine a sort of mega-Cogito handling at least basic requests in lieu of human workers. Similar systems, like Google’s Duplex, can already call up restaurants to make reservations on behalf of a user (though they still reportedly need a bit of human help sometimes).
Cogito scientists say that potential future is still at least a decade away. And today’s call center workers don’t seem concerned. Asked if she ever worries about being replaced by AI, Ashley Moore, a 27-year-old MetLife total absence management case specialist, laughs. “No, never,” she says. “People want to speak to that real person.” If Cogito actually helps those real people have more empathic conversations with customers, that just might hold true.
According to Cogito’s executives, the company makes “coaching” software. That language perfectly drives home what’s new (and potentially unsettling) about the technology. We’re used to thinking of technology as something we direct. But this AI directs you. And it’s becoming increasingly clear that other types of AI, whether it’s Facebook’s News Feed algorithm or YouTube’s recommendation engine, are similarly driving us in directions we might not otherwise travel. If AI can “coach” the way we speak, how much more of our lives may soon be shaped by AI input? And what do we really understand about the way that influence works?
In the case of Cogito, spending eight hours a day, five days a week under AI direction may have effects beyond how someone speaks at work. Moore, who’s a fan of Cogito’s software, says that after she had been using the program for about a year, her boyfriend began noticing a difference in the way she spoke at home. She was speaking more directly in all her conversations, without all the “extra fluff.” She didn’t seem to mind the change. But some might argue the “fluff” is essential — separating human conversations from the binary, efficient communication of machines.
Speaking a little differently may seem relatively benign. Yet if AI monitoring and direction becomes more sophisticated and widespread, we may find ourselves less certain of who we really are. Perhaps no one has explored that concept more deeply than Chris Dancy, a health tech executive turned professional speaker. In 2010, Dancy set out on a profound behavioral experiment, using hundreds of sensors to quantify and track his life and send him streams of notifications coaching him to be healthier, happier, and more productive. He had notifications for sleeping, exercising and avoiding caffeine. When he went to buy groceries, a notification would remind him to strike up a conversation with the cashier. When he spoke too aggressively on conference calls, the lights in his house would automatically dim to remind him to lower his voice.
Five years later, Dancy had lost over a hundred pounds, quit smoking, drinking, and doing drugs, and was making half a million dollars a year. But something was wrong. After years of intense, technologically enforced change, he began to lose track of his identity. He no longer recognized the way he looked or how he spoke, he says. He felt like he was “disappearing.”
“Everything was perfect,” Dancy says. “But I wanted to die.”
That mysterious artificially intelligent forces are modifying our everyday behavior is far from a new idea. Digital advertisers have long used AI to track prospective customers across the web, cajoling them into clicks and purchases. YouTube’s algorithms, optimized to keep viewers glued to their screens, are prone to lead some people into rabbit holes of makeup tutorials or car crash compilations, and others into webs of increasingly extreme political content — a problem that’s becoming an existential crisis for the site. In the 2016 election, data gleaned from Facebook helped Cambridge Analytica lock onto voters’ individual psychographic profiles in an attempt to turn out the vote for Trump.
The workplace is an obvious testing ground for “coaching” software like Cogito. But like other enterprise software, it could soon spread well beyond call centers and other offices and into our daily lives. At least, that’s the not-too-distant future predicted by techno-philosophers like Andy Clark, who argues we will soon become part of what he calls an “AIome,” or an extended web of external AI processes that will feed us information and come to define our identities. Such a prediction generates some tough questions. Will companies like Google and Facebook, who already have massive power over what billions see on the internet, be able to control even more of our everyday behavior, even offline? And if AI can change how we act down to the very way we speak, how much real agency will any of us have left?
At Cogito, some version of Clark’s future may be just around the corner. Skylar Place, Cogito’s chief behavioral scientist, thinks that software to direct all sorts of business calls will be standard within three years. That AI would have insight into both speaking style and conversational content, and be able to notify a worker if they’re speaking off the meeting agenda or automatically mute a serial interrupter. Such software is unlikely to stop at the meeting room. “There’s a future where it becomes part of our normal day-to-day,” says Place. “In conversations with parents, with spouses, and in preparing for job interviews.” Internally, Cogito already uses an application to coach everyday employee interactions, but it hasn’t rolled it out as a product for other companies. “It’s not that it’s not a technical possibility,” says Feast, the CEO. “It’s just we’re not quite yet sure if the general population is ready for this.”
Write to Alejandro de la Garza at alejandro.delagarza@time.com