US Tech Giants Undermine EU’s AI Governance Efforts

How US Firms Are Weakening the EU AI Code of Practice

The EU AI Act establishes the first comprehensive framework for governing artificial intelligence (AI). For the most powerful models, known as general-purpose AI (GPAI), a Code of Practice is being drafted to streamline compliance for a small number of leading AI companies. The Code is being developed through an iterative process involving nearly 1,000 stakeholders from industry, civil society, and academia, structured around 13 expert-chaired working groups.

The final text of the Code is expected by August 2025. As the process nears completion, however, the European Commission has granted privileged access to a select few leading US companies, which are advocating for a diluted version of the Code. This calls the legitimacy of the process into question and undermines the AI Act's intent to serve the interests of European citizens.

An Inclusive Process, But for Whom?

The drafting of the Code of Practice has been unprecedented in its inclusivity. GPAI providers were given a special role from the start. As the process concludes, the critical question is whether influential US companies will accept that the rules for GPAI are a matter of public interest and cannot be dictated by them alone. By lobbying the European Commission to prioritize their interests, these companies jeopardize the entire process and compromise their credibility as responsible corporate citizens.

Ironically, the industry conflates weak regulation with innovation, hoping to benefit from the Commission's recent push to position the EU as a global leader in AI. This perspective is fundamentally flawed: critics argue that Europe's real problems are not stringent regulations but market fragmentation and slow AI adoption.

The Code Has Become Overly Politicized

In a bid to maintain an innovation-friendly image and alleviate transatlantic tensions, certain EU officials have come to view endorsements from Big Tech as crucial for the success of the Code. This mentality undermines the Code's true objectives, allowing providers to use the threat of not signing as leverage to dilute its substance. Furthermore, US companies have framed their refusal to sign as a message of solidarity with the US government, which is increasingly antagonistic towards European digital regulations.

The Code is intended to serve as a technical tool for compliance. Should providers fail to adhere to it, they must resort to alternative compliance methods, which require significant effort to demonstrate that they meet the objectives of the AI Act. While the Code offers a clear path to compliance, these alternative methods can be cumbersome and costly.

The Complain-Then-Comply Strategy

A fundamental purpose of regulation is to align profit-driven companies with the public interest. Companies often respond to regulatory pressure with resistance, claiming that new rules are unworkable. Yet history shows that firms like Google have eventually complied with regulations they initially deemed unfeasible.

The European Commission must not succumb to corporate lobbying tactics. Although companies may express discontent with new regulations, the Commission must ensure that the Code reflects the intent of the AI Act, prioritizing the interests and rights of European citizens. A special committee within the European Parliament has been established to monitor the implementation of the AI Act, indicating a commitment to enforcement.

Resist the Pressure

The European Commission has a duty to uphold the integrity of the Code of Practice, ensuring it aligns with the spirit of the AI Act as agreed upon by the co-legislators. It is essential to protect the rights of European citizens and the public interest. If the work of over 1,000 stakeholders were to yield to the demands of a few leading AI companies, it would significantly damage civic engagement and democracy in the EU.

Ultimately, the Commission has the authority to adopt the Code, even in a more stringent form, without the signatures of the concerned companies. This would establish the Code as the official framework for assessing GPAI compliance with the AI Act, compelling non-signatories to comply if they wish to access the European market and adhere to the global standard of care established by the Code.
