US Tech Giants Undermine EU’s AI Governance Efforts

How US Firms Are Weakening the EU AI Code of Practice

The EU AI Act establishes the first comprehensive framework for governing artificial intelligence (AI). For the most powerful models, known as general-purpose AI (GPAI), a Code of Practice is being drafted to streamline compliance for the small number of leading companies that provide them. The Code is being developed through an iterative process involving nearly 1,000 stakeholders from industry, civil society, and academia, organized into working groups chaired by 13 experts.

The final text of the Code is expected by August 2025. As the process nears completion, however, the European Commission has granted privileged access to a select few leading US companies, which are advocating for a diluted version of the Code. This raises concerns about the legitimacy of the process and undermines the AI Act's intent to serve the interests of European citizens.

An Inclusive Process, But for Whom?

The drafting of the Code of Practice has been unprecedented in its inclusivity, and GPAI providers were given a special role from the beginning. As the process concludes, however, the critical question is whether the influential US companies will accept that the rules for GPAI are a matter of public interest and cannot be dictated by providers alone. By lobbying the European Commission to prioritize their interests, these companies jeopardize the entire process and compromise their credibility as responsible corporate citizens.

Ironically, the industry conflates weak regulation with innovation, hoping to benefit from the Commission's recent push to position the EU as a global leader in AI. This perspective is fundamentally flawed: critics argue that Europe's real obstacles are not stringent regulations but market fragmentation and slow AI adoption.

The Code Has Become Overly Politicized

In a bid to maintain an innovation-friendly image and ease transatlantic tensions, certain EU officials have come to view endorsements from Big Tech as crucial to the success of the Code. This mentality undermines the Code's true objectives, as it allows providers to use the threat of not signing as leverage to dilute its substance. Some US companies have also framed their refusal to sign as a show of solidarity with the US government, which is increasingly antagonistic toward European digital regulation.

The Code is intended to serve as a technical tool for compliance. Providers that decline to adhere to it must resort to alternative compliance methods and expend significant effort demonstrating that they meet the objectives of the AI Act. While the Code offers a clear path to compliance, these alternatives can be cumbersome and costly.

The "Complain, Then Comply" Strategy

A fundamental purpose of regulation is to align profit-driven companies with the public interest. Companies often meet regulatory pressure with resistance, claiming that new rules are unworkable. Yet history shows that firms like Google have eventually complied with regulations they initially deemed unfeasible.

The European Commission must not succumb to corporate lobbying tactics. Although companies may express discontent with new regulations, the Commission must ensure that the Code reflects the intent of the AI Act, prioritizing the interests and rights of European citizens. A special committee within the European Parliament has been established to monitor the implementation of the AI Act, indicating a commitment to enforcement.

Resist the Pressure

The European Commission has a duty to uphold the integrity of the Code of Practice, ensuring it aligns with the spirit of the AI Act as agreed by the co-legislators, and to protect the rights of European citizens and the public interest. If the efforts of nearly 1,000 stakeholders were to give way to the demands of a few leading AI companies, it would significantly damage civic engagement and democracy in the EU.

Ultimately, the Commission has the authority to adopt the Code, even in a more stringent form, without the signatures of the companies concerned. Doing so would establish the Code as the official benchmark for assessing GPAI compliance with the AI Act, compelling non-signatories to meet its requirements if they wish to access the European market and to adhere to the global standard of care the Code establishes.
