Interacting with ChatGPT can feel magical. The chatbot has a friendly, helpful persona. And if you have ever tried prompting it to produce violent, hateful, or sexually explicit content, you’ll know that it’s very good at not rising to the bait.
That’s no accident. A year before ChatGPT’s blockbuster launch, a team of Kenyan workers was drafted to help iron out its flaws. The problem: OpenAI’s large language models—the AI technology ChatGPT was built upon—would often spew racism and other types of inappropriate content. (It was a side effect of being trained on masses of text from the internet, which is often a nasty place.) To help OpenAI address the issue, Richard Mathenge and dozens of his colleagues at the outsourcing company Sama spent months reading and classifying examples of toxic material—hate speech, violence, descriptions of sexual abuse—to train ChatGPT to avoid parroting it. The work, which paid less than $2 per hour, took its toll. “We were dealing with serious trauma,” Mathenge, 38, says.
In May, Mathenge went public as one of 150 African AI workers who voted to establish the first African Content Moderators Union, a cross-company effort to win better working conditions for Big Tech workers in Kenya, a hub for outsourced work of this type. And in July, Mathenge, along with three former ChatGPT colleagues, filed a petition to the Kenyan Parliament, calling on lawmakers to investigate Big Tech’s practice of outsourcing in the country, and to legislate relief. “It was our obligation to reach out to Parliament,” Mathenge says, because the human workers behind AI “are being treated as trash.”