
Brazil Prohibits Meta from Leveraging Instagram and Facebook Data for AI Training

The article was last updated by verifiedtasks on July 4, 2024.

Brazil’s national data protection authority has barred Meta from leveraging Brazilian user data on Instagram and Facebook for AI training, citing concerns over potential harm and the protection of users’ fundamental rights.

Short Summary:

  • Brazil bans Meta from utilizing public user data for AI training.
  • Meta must amend its privacy policy within five days or face fines.
  • Human Rights Watch highlights concerns over unauthorized use of children’s images.

Brazil’s Autoridade Nacional de Proteção de Dados (ANPD) has decisively blocked Meta, the parent company of Facebook and Instagram, from using data shared by Brazilian users to train its artificial intelligence systems. This decision follows the company’s attempt to update its privacy policy to encompass the training of AI with user data posted publicly on its social media platforms.

The ANPD cited "the imminent risk of serious and irreparable or difficult-to-repair damage to the fundamental rights of the affected data subjects" as the key factor in its determination, as published in Brazil's official gazette.

The ANPD's ruling has significant implications given that Brazil represents one of Meta's largest markets, with over 102 million active Facebook users and 113 million users on Instagram. Despite Meta's assurance of compliance with local privacy laws, the regulatory body has highlighted concerns over the "excessive and unjustified obstacles" in Meta's policy that hinder users' ability to opt out of having their data used.

“This is a step backwards for innovation, competition in AI development and further delays bringing the benefits of AI to people in Brazil,” said a Meta spokesperson in a statement, expressing the company’s disappointment in ANPD’s decision.

Meta has been at the forefront of using publicly available user data to enhance its generative AI capabilities. The company indicated in a May blog post that it intended to use public posts and photos, along with their captions, from its Brazilian users for AI training. Although Meta claimed that users could opt out, the ANPD found the opt-out mechanisms to be overly complex and insufficiently transparent.

Human Rights Watch has raised additional concerns about the misuse of data, particularly involving children. An extensive study highlighted that databases like LAION-5B contain identifiable images of Brazilian children, sourced from various online platforms, which were used without the families’ knowledge to train AI models. This unauthorized use raises fears of potential exploitation through deepfake technologies and other means.

“This action helps to protect children from worrying that their personal data, shared with friends and family on Meta’s platforms, might be used to inflict harm in ways that are impossible to anticipate or guard against,” said Hye Jung Han, a researcher from Human Rights Watch based in Brazil.

The pressure on Meta doesn’t stop at Brazil’s borders. The company faced similar resistance in the European Union, leading to a temporary halt of its policy update in response to concerns from multiple European regulators. The delayed policy would have included public posts, images, and image captions from users over 18 for AI training, but excluded private messages.

"Meta was severely punished for being one of the few major tech companies to transparently update its policy on AI training," commented Ronaldo Lemos of the Institute of Technology and Society of Rio de Janeiro.

To address these concerns, Meta is now under orders from the ANPD to revise its privacy policy. The company has five working days to comply and demonstrate adherence to the directives, failing which it will incur a daily fine of 50,000 reais (approximately $8,800). The regulator's swift action sends a clear message about Brazil's stringent stance on protecting digital privacy and user rights.

Pedro Martins of Data Privacy Brasil underscored the discrepancy in the data protection measures Meta applies across regions. While European users under the age of 18 are excluded from having their data used for AI training, similar protections were not extended to Brazilian minors. Additionally, the opt-out process in Brazil is notably cumbersome, requiring numerous steps compared to the more straightforward process available to European users.

“In Europe, the steps users can take to prevent Meta from using personal information are more straightforward than in Brazil,” stated Martins, emphasizing the need for equal and transparent measures to protect users globally.

The controversy surrounding Meta's data practices has drawn significant attention from experts and advocates, who are urging more robust regulatory frameworks. With ongoing advances in generative AI and the increasing capabilities of AI models, protecting personal data from unauthorized use remains paramount.

Meta's confrontation with regulators in Brazil is part of a broader global trend in which governments are working to hold Big Tech accountable and to ensure that AI innovations do not compromise user privacy. As the debate continues, Meta and other technology giants must navigate these complex regulatory landscapes to balance innovation with the ethical use of personal data.

The outcome of this regulatory dispute in Brazil might set a precedent for other countries in Latin America and beyond, underscoring the global importance of data protection and privacy in the age of AI.