US Tech Giants Undermine EU’s AI Governance Efforts

How US Firms Are Weakening the EU AI Code of Practice

The EU AI Act establishes the first comprehensive framework for governing artificial intelligence (AI). For general-purpose AI (GPAI) models, which include the most powerful systems on the market, a Code of Practice is currently being drafted to streamline compliance for the small number of leading AI companies that provide them. The Code is being developed through an iterative process involving nearly 1,000 stakeholders from industry, civil society, and academia, organized in working groups led by 13 expert chairs and vice-chairs.

The final text of the Code is expected to be published by August 2025. However, as the process nears completion, the European Commission has granted privileged access to a select few leading US companies, who advocate for a diluted version of the Code. This raises concerns regarding the legitimacy of the process and undermines the EU AI Act’s intent to serve the interests of European citizens.

An Inclusive Process, But for Whom?

The drafting of the Code of Practice has been unprecedented in its inclusivity. GPAI providers were given a special role from the beginning, but as the process concludes, the critical question is whether the influential US companies will accept that the rules for GPAI are a matter of public interest and cannot be dictated by them alone. By lobbying the European Commission to prioritize their interests, these companies jeopardize the entire process and compromise their credibility as responsible corporate citizens.

Ironically, the industry conflates weak regulation with innovation, hoping to benefit from the Commission’s recent push to position the EU as a global leader in AI. However, this perspective is fundamentally flawed. Critics argue that the real issues in Europe are not stringent regulations but rather market fragmentation and a lack of AI adoption.

The Code Has Become Overly Politicized

In a bid to maintain an innovation-friendly image and ease transatlantic tensions, certain EU officials have come to view endorsements from Big Tech as crucial to the Code's success. This mindset undermines the Code's true objectives, as it allows providers to use the prospect of not signing as leverage to dilute its substance. Furthermore, US companies have used their refusal to sign as a message of solidarity with the US government, which is increasingly antagonistic towards European digital regulations.

The Code is intended to serve as a technical tool for compliance. Providers that choose not to sign it must demonstrate compliance with the AI Act's obligations through alternative means, which demands significant effort. While the Code offers a clear path to compliance, these alternative routes can be cumbersome and costly.

The "Complain, Then Comply" Strategy

A fundamental purpose of regulation is to align profit-driven companies with the public interest. Faced with new regulatory pressure, companies often push back, claiming that the rules are unworkable. Yet history shows that firms like Google have eventually complied with regulations they initially deemed unfeasible.

The European Commission must not succumb to corporate lobbying tactics. Although companies may express discontent with new regulations, the Commission must ensure that the Code reflects the intent of the AI Act, prioritizing the interests and rights of European citizens. A special committee within the European Parliament has been established to monitor the implementation of the AI Act, indicating a commitment to enforcement.

Resist the Pressure

The European Commission has a duty to uphold the integrity of the Code of Practice, ensuring it aligns with the spirit of the AI Act as agreed upon by the co-legislators. It is essential to protect the rights of European citizens and the public interest. If the efforts of over 1,000 stakeholders were to give way to the demands of a few leading AI companies, it would significantly damage civic engagement and democracy in the EU.

Ultimately, the Commission has the authority to adopt the Code, even in a more stringent form, without the signatures of the companies concerned. This would establish the Code as the official benchmark for assessing GPAI compliance with the AI Act, compelling non-signatories to meet its requirements if they wish to access the European market and to adhere to the global standard of care that the Code establishes.

More Insights

Revolutionizing Drone Regulations: The EU AI Act Explained

The EU AI Act represents a significant regulatory framework that aims to address the challenges posed by artificial intelligence technologies in various sectors, including the burgeoning field of...

Embracing Responsible AI to Mitigate Legal Risks

Businesses must prioritize responsible AI as a frontline defense against legal, financial, and reputational risks, particularly in understanding data lineage. Ignoring these responsibilities could...

AI Governance: Addressing the Shadow IT Challenge

AI tools are rapidly transforming workplace operations, but much of their adoption is happening without proper oversight, leading to the rise of shadow AI as a security concern. Organizations need to...

EU Delays AI Act Implementation to 2027 Amid Industry Pressure

The EU plans to delay the enforcement of high-risk duties in the AI Act until late 2027, allowing companies more time to comply with the regulations. However, this move has drawn criticism from rights...

White House Challenges GAIN AI Act Amid Nvidia Export Controversy

The White House is pushing back against the bipartisan GAIN AI Act, which aims to prioritize U.S. companies in acquiring advanced AI chips. This resistance reflects a strategic decision to maintain...

Experts Warn of EU AI Act’s Impact on Medtech Innovation

Experts at the 2025 European Digital Technology and Software conference expressed concerns that the EU AI Act could hinder the launch of new medtech products in the European market. They emphasized...

Ethical AI: Transforming Compliance into Innovation

Enterprises are racing to innovate with artificial intelligence, often without the proper compliance measures in place. By embedding privacy and ethics into the development lifecycle, organizations...

AI Hiring Compliance Risks Uncovered

Artificial intelligence is reshaping recruitment, with the percentage of HR leaders using generative AI increasing from 19% to 61% between 2023 and 2025. However, this efficiency comes with legal...