Gig Workers Behind AI Face ‘Unfair Working Conditions,’ Oxford Report Finds

Artificial intelligence is booming. And with it, so are the digital labor platforms used by many AI companies to employ human gig workers. Those people perform the vital but often unseen labor of generating or labeling the masses of data that AI systems heavily rely on—often as part of efforts to make AIs more reliable and less biased.

Even as these workers take on the vital task of making modern AI safer, the companies that employ them are uniformly failing to meet even a basic threshold of labor rights standards, according to a new report from the Oxford Internet Institute, shared exclusively with TIME.

Researchers assessed 15 digital work platforms—among them Amazon Mechanical Turk, Scale AI and Appen—and found that all of them were “still far from safeguarding basic standards of fair work,” according to the report. “While the run for AI deployments gets public hype and momentum, workers behind the design, building and testing of these technological solutions, unfortunately, still face enormous challenges and experience unfair working conditions,” the report says.

Read More: Exclusive: OpenAI Used Kenyan Workers on Less Than $2 Per Hour to Make ChatGPT Less Toxic


The basis for the report was a survey of 752 workers in 94 countries carried out by researchers from Oxford’s Fairwork team, as well as formal interviews with platform managers. The report’s authors then gave each platform a score out of 10 based on five principles: fair pay, fair conditions, fair contracts, fair management, and fair representation, with a maximum of two points per principle. Four platforms (Amazon Mechanical Turk, Workana, Microworkers, and Freelancer) scored 0 points. None of the platforms reviewed scored more than 5. A score of 10 out of 10, researchers note, “simply means that [a] company is complying with the bare minimum.”

“We have very world-renowned platforms that are still among the worst scores,” says Jonas Valente, the lead researcher on the so-called “cloudwork” project. “We still have a very big problem with platforms that are not acknowledging the need to improve their workers’ conditions.”

One finding of the report is that workers spend much of their time carrying out what the authors call “unpaid labor.” The report found that 250 workers on the five platforms most commonly used by AI companies to find gig workers spent 26.8% of their time on “unpaid tasks,” including looking for work, taking unpaid tests, and applying for work. Taking that unpaid time into account, those workers earned an average wage of $2.15 per hour—though the report notes that this pool of workers hailed from 51 countries where the cost of living varies significantly. “Still, the findings shed light on the persistent problem of low pay on the platforms,” the report says.

“Online outsourcing is a model that is spreading through the economy,” Valente tells TIME. “The workers performing those tasks need to have their rights ensured.”

Read More: 150 African Workers for ChatGPT, TikTok and Facebook Vote to Unionize at Landmark Nairobi Meeting

The platforms surveyed as part of the project were: Amazon Mechanical Turk, Appen, Clickworker, Comeup, Elharefa, Fiverr, Freelancer, Microworkers, PeoplePerHour, Prolific, SoyFreelancer, Scale AI (including its subsidiary Remotasks), Terawork, Upwork, and Workana.

Through their spokespeople, Amazon, Upwork, and Fiverr disputed the suggestion that minimum wage standards should apply to work they described as flexible and voluntary. In a statement, an Amazon spokesperson said: “MTurk is a marketplace where people asking for help with a task can say how much they’re willing to pay for that help. Anyone can then decide whether or not they think that’s a fair price and decide whether to accept. If they accept, their compensation depends on the price set for each task, the number of tasks they complete, and the quality of their work. Most participants see MTurk as part-time work, and they enjoy the flexibility to choose the tasks they want to work on and work as much or as little as they like. MTurk continues to help a wide range of people earn money when and how they choose to, and contribute to the growth of their communities.”

An Upwork spokesperson said: “Freelancers who use Upwork decide how they want to engage with their clients, including negotiating their own contract terms, setting their own rates, and submitting proposals for the projects they are most interested in.” A Fiverr spokesperson said: “Fiverr is not an employment platform, but a two-sided marketplace… Consequently, parameters such as wage requirements and managing allocation of work are not implemented in order to give freelancers full control to determine their rates, projects, and workload.”

In a statement, Phelim Bradley, the CEO of Prolific, welcomed Fairwork’s report but contested some of its findings. “Prolific is a research platform, and not employment, or gig work. Unlike other research platforms, Prolific is specifically designed to make sure that participants do not participate in research without being paid,” Bradley wrote. “While we respect their independence, some of Fairwork’s scoring criteria do not seem to fit our use case.”

Scale AI declined to comment. Clickworker, Elharefa, Freelancer, Microworkers, PeoplePerHour and Workana did not respond to requests for comment.

Read More: Big Tech Layoffs Are Hurting Workers Far Beyond Silicon Valley

Several platforms acknowledged the report had found deficiencies and pledged to change. A spokesperson for Appen said it had recently overhauled its leadership team, adding “we deeply value our human contributors and hear them loud and clear that our contributor experience needs improvement.” A spokesperson for SoyFreelancer said the report was a “good initiative” and said it had “taken note of the aspects to improve, in order to provide a higher quality service for our users.”

Two of the highest-scoring platforms touted their scores, despite the report noting that even a score of 10 out of 10 would only mean basic standards had been met. Terawork said its rating of 5 out of 10 “further validates our silent work over the years.” Comeup, which also scored 5 out of 10, said it was “thrilled to have achieved a score that places us among the platforms with the best working conditions,” adding that it has already implemented changes based on the feedback it had received. “We still have work to do to achieve the ideal working conditions for our community,” the spokesperson said.

Write to Billy Perrigo at billy.perrigo@time.com