Meta Rejects EU AI Act Code, Citing Legal Concerns

Meta’s Stance on the EU AI Act Code of Practice

In a significant move that highlights the ongoing tension between major tech companies and regulatory bodies, Meta has announced that it will not sign the European Union’s AI Act code of practice. This development comes after the EU passed its AI Act last year, legislation aimed at establishing a framework for the responsible use of artificial intelligence across member states.

Background of the EU AI Act

The AI Act was designed to create a set of rules governing the deployment and operation of AI technologies within the EU. As part of this initiative, a code of practice was introduced for ‘general-purpose AI model providers’ in an effort to enhance transparency and accountability in the rapidly evolving landscape of AI.

Key Provisions of the Code

Companies that choose to sign the code are expected to adhere to several key provisions, including:

  • Only reproducing and extracting lawfully accessible copyright-protected content when crawling the web.
  • Complying with rights reservations.
  • Mitigating the risk of producing copyright-infringing output.
  • Designating a point of contact for compliance issues.
  • Allowing for the submission of complaints regarding non-compliance.

Meta’s Concerns

Meta’s chief global affairs officer, Joel Kaplan, articulated the company’s rationale for opting out of the code, stating that the regulations introduce a number of legal uncertainties for model developers. Kaplan expressed concerns that the measures outlined in the code extend beyond the original scope of the AI Act, potentially stifling innovation and development in the AI sector.

He emphasized that “this over-reach will throttle the development and deployment of frontier AI models in Europe, and stunt European companies looking to build businesses on top of them.” The statement underscores the ongoing debate over the balance between regulation and innovation in the AI space.

Implications for the AI Landscape

Meta’s decision not to sign the code could have far-reaching implications for the AI industry in Europe. As one of the leading companies in the sector, Meta’s stance may embolden other tech firms to voice their concerns about regulatory frameworks that they perceive as overly restrictive.

Additionally, the company’s refusal to sign the code raises questions about the effectiveness of voluntary regulatory efforts to ensure responsible AI usage, and about the potential impact on the development of AI technologies in the region.

Broader Context

Meta’s announcement comes in a year where the company has faced various accusations, including allegations of using a substantial library of pirated ebooks for training its AI models. These controversies add another layer of complexity to the dialogue surrounding ethical AI development and the responsibilities of tech giants in the digital age.

As the regulatory landscape evolves, the interaction between companies like Meta and governing bodies will be crucial in shaping the future of AI technology in Europe and beyond.

More Insights

Revolutionizing Drone Regulations: The EU AI Act Explained

The EU AI Act represents a significant regulatory framework that aims to address the challenges posed by artificial intelligence technologies in various sectors, including the burgeoning field of...

Embracing Responsible AI to Mitigate Legal Risks

Businesses must prioritize responsible AI as a frontline defense against legal, financial, and reputational risks, particularly in understanding data lineage. Ignoring these responsibilities could...

AI Governance: Addressing the Shadow IT Challenge

AI tools are rapidly transforming workplace operations, but much of their adoption is happening without proper oversight, leading to the rise of shadow AI as a security concern. Organizations need to...

EU Delays AI Act Implementation to 2027 Amid Industry Pressure

The EU plans to delay the enforcement of high-risk duties in the AI Act until late 2027, allowing companies more time to comply with the regulations. However, this move has drawn criticism from rights...

White House Challenges GAIN AI Act Amid Nvidia Export Controversy

The White House is pushing back against the bipartisan GAIN AI Act, which aims to prioritize U.S. companies in acquiring advanced AI chips. This resistance reflects a strategic decision to maintain...

Experts Warn of EU AI Act’s Impact on Medtech Innovation

Experts at the 2025 European Digital Technology and Software conference expressed concerns that the EU AI Act could hinder the launch of new medtech products in the European market. They emphasized...

Ethical AI: Transforming Compliance into Innovation

Enterprises are racing to innovate with artificial intelligence, often without the proper compliance measures in place. By embedding privacy and ethics into the development lifecycle, organizations...

AI Hiring Compliance Risks Uncovered

Artificial intelligence is reshaping recruitment, with the percentage of HR leaders using generative AI increasing from 19% to 61% between 2023 and 2025. However, this efficiency comes with legal...