Meta Rejects EU AI Act Code, Citing Legal Concerns

Meta’s Stance on the EU AI Act Code of Practice

In a significant move that highlights the ongoing tension between major tech companies and regulatory bodies, Meta has announced that it will not be signing the European Union’s AI Act code of practice. This development comes after the EU passed its AI Act last year, aimed at establishing a framework for the responsible use of artificial intelligence across member states.

Background of the EU AI Act

The AI Act was designed to create a set of rules governing the deployment and operation of AI technologies within the EU. As part of this initiative, a code of practice was introduced for ‘general-purpose AI model providers’ in an effort to enhance transparency and accountability in the rapidly evolving landscape of AI.

Key Provisions of the Code

Companies that choose to sign the code are expected to adhere to several key provisions, including:

  • Only reproducing and extracting lawfully accessible copyright-protected content when crawling the web.
  • Complying with rights reservations.
  • Mitigating the risk of producing copyright-infringing output.
  • Designating a point of contact for compliance issues.
  • Allowing for the submission of complaints regarding non-compliance.

Meta’s Concerns

Meta’s chief global affairs officer, Joel Kaplan, explained the company’s decision to opt out, stating that the code introduces a number of legal uncertainties for model developers. Kaplan argued that the measures it outlines go beyond the original scope of the AI Act and could stifle innovation and development in the AI sector.

He emphasized, “this over-reach will throttle the development and deployment of frontier AI models in Europe, and stunt European companies looking to build businesses on top of them.” This statement underscores the ongoing debate surrounding the balance between regulation and innovation in the AI space.

Implications for the AI Landscape

Meta’s decision not to sign the code could have far-reaching implications for the AI industry in Europe. As one of the leading companies in the sector, Meta’s stance may embolden other tech firms to voice their concerns about regulatory frameworks that they perceive as overly restrictive.

Additionally, the company’s decision not to sign the voluntary code raises questions about the effectiveness of regulatory efforts to ensure responsible AI usage and about the potential impact on the development of AI technologies in the region.

Broader Context

Meta’s announcement comes in a year in which the company has faced various accusations, including allegations that it used a substantial library of pirated ebooks to train its AI models. These controversies add another layer of complexity to the debate over ethical AI development and the responsibilities of tech giants in the digital age.

As the regulatory landscape evolves, the interaction between companies like Meta and governing bodies will be crucial in shaping the future of AI technology in Europe and beyond.
