Meta Has Been Ordered to Stop Mining Brazilian Personal Data to Train Its AI

Brazil’s national data protection authority has ordered Meta to halt the use of data originating from the country to train its AI models.

Meta’s current privacy policy enables the company to use data from its platforms, including Facebook, Instagram, and WhatsApp, to train its artificial intelligence models. However, that practice will no longer be permitted in Brazil after the country’s national data protection authority on Tuesday gave the company five days to change its policy.

Brazil said the company will need to confirm it has stopped using the data or face a daily non-compliance fine of 50,000 Brazilian reais (almost $9,000), citing “the imminent risk of serious and irreparable or difficult-to-repair damage to the fundamental rights of the affected data subjects.”

Meta said it was “disappointed” with the Brazilian authority’s decision, saying it was a “step backward for innovation.”

“AI training is not unique to our services, and we’re more transparent than many of our industry counterparts who have been using public content to train their models and products,” the company told TIME on Wednesday, following the Brazilian authority’s decision.

The decision follows a report published in June by Human Rights Watch, which found that a popular dataset of images scraped from online sources and used to train image models, compiled by the German nonprofit LAION, contained identifiable images of Brazilian children, which the report says puts them at risk of deepfakes and other forms of exploitation. Human Rights Watch said it found 170 photos of children from at least 10 Brazilian states after reviewing less than 0.0001 percent of the images in the dataset.

Brazil is one of Meta’s biggest markets, with over 112 million Facebook users alone. In June at a conference in the South American country, Meta unveiled new AI tools for businesses on its WhatsApp platform.

The Brazilian authority said users were not sufficiently warned about the changes, and that the process for opting out was “not very intuitive.” Meta says its approach complies with local privacy laws, and that it will continue to address the Brazilian authority’s questions.

Brazil’s decision to stop Meta from feeding users’ data into its AI models follows similar pushback in Europe. Last month, Meta delayed the launch of its AI services and paused plans to train its models on E.U. and U.K. data after a request from the Irish privacy regulator. Meta is expected to push ahead with training in the U.S., which lacks federal online privacy protections.

Read more: Meta Faces Norwegian Complaint Over Plans to Train AI on User Images and Posts

This is not the first time Meta has found itself at odds with Brazilian authorities. In February, the company was barred from using its name in Brazil due to confusion with another company. Meta successfully overturned the decision in March.
