Illustration by TIME; reference image courtesy of Richard Mathenge

Interacting with ChatGPT can feel magical. The chatbot has a friendly, helpful persona. And if you have ever tried to elicit violent, hateful, or sexually explicit content from it, you’ll know that it’s very good at not rising to the bait.


That’s no accident. A year before ChatGPT’s blockbuster launch, a team of Kenyan workers was drafted to help iron out its flaws. The problem: OpenAI’s large language models—the AI technology ChatGPT was built upon—would often spew racism and other types of inappropriate content. (It was a side effect of being trained on masses of text from the internet, which is often a nasty place.) To help OpenAI address the issue, Richard Mathenge and dozens of his colleagues at the outsourcing company Sama spent months reading and classifying examples of toxic material—hate speech, violence, descriptions of sexual abuse—to train ChatGPT to avoid parroting it. The work, which paid less than $2 per hour, took its toll. “We were dealing with serious trauma,” Mathenge, 38, says.

In May, Mathenge went public as one of 150 African AI workers who voted to establish the first African Content Moderators Union, a cross-company effort to win better working conditions for Big Tech workers in Kenya, a hub for outsourced work of this type. And in July, Mathenge, along with three former ChatGPT colleagues, filed a petition to the Kenyan Parliament, calling on lawmakers to investigate Big Tech’s practice of outsourcing in the country, and to legislate relief. “It was our obligation to reach out to Parliament,” Mathenge says, because the human workers behind AI “are being treated as trash.”

Write to Billy Perrigo at billy.perrigo@time.com.
