Brazil’s data protection authority, Autoridade Nacional de Proteção de Dados (ANPD), has temporarily banned Meta from processing users’ personal data to train the company’s artificial intelligence (AI) algorithms.
The ANPD said it found “evidence of processing of personal data based on inadequate legal assumptions, lack of transparency, limitation of data subjects’ rights and risks to children and adolescents.”
The move follows the social media giant’s updated terms of service that allow it to use public content from Facebook, Messenger and Instagram for AI training purposes.
A recent report by Human Rights Watch found that LAION-5B, one of the largest image-text datasets used to train AI models, contained links to identifiable photos of Brazilian children, leaving them vulnerable to malicious deepfakes that could facilitate further exploitation and harm.
Brazil is one of Meta's largest markets, with about 102 million active Facebook users. The ANPD noted that the Meta update violates the General Law on the Protection of Personal Data (LGPD) and poses "an imminent risk of serious and irreparable or difficult-to-repair harm to the fundamental rights of the data subjects."
Meta has five business days to comply with the order, or face a daily fine of 50,000 reais (about $8,808).
In a statement shared with The Associated Press, the company said the policy “is consistent with privacy laws and regulations in Brazil” and that the decision constitutes “a step backward for innovation, competition in AI development and further delays in bringing the benefits of AI to Brazilians.”
The social media company received a similar backlash in the European Union (EU), forcing it to suspend plans to train its AI models on data from users in the region without their explicit consent.
Last week, Meta’s global affairs chairman Nick Clegg said the EU was “no longer a breeding ground for innovation and world-class businesses”, adding that “the era of generative AI offers a game-changing opportunity”.
The development comes as Cloudflare released a new “one-click” tool that prevents AI bots from scraping content from its customers’ websites to train large language models (LLMs).
“This feature will be automatically updated over time as we see new fingerprints of incriminated bots that we identify as widely crawling the web for model training,” the web infrastructure company said.
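Cloudflare has not published the fingerprints its tool relies on, but user-agent matching is one simple signal such filters commonly use alongside behavioral heuristics. As a hypothetical sketch (the signature list and function name below are illustrative, not Cloudflare's actual implementation), a basic filter might look like this:

```python
# Illustrative sketch: flag requests whose User-Agent matches known
# AI-training crawler names. Real bot detection (as in Cloudflare's tool)
# combines many signals, since user agents are trivially spoofed.
AI_CRAWLER_SIGNATURES = ("GPTBot", "CCBot", "ClaudeBot", "Bytespider")

def is_ai_training_bot(user_agent: str) -> bool:
    """Return True if the User-Agent string matches a known AI crawler."""
    ua = user_agent.lower()
    return any(sig.lower() in ua for sig in AI_CRAWLER_SIGNATURES)
```

A site could use such a check to return an HTTP 403 for matching requests, though fingerprint lists like this need continuous updating, which is the maintenance burden the "automatically updated" managed rule is meant to remove.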