US Tech Giants Undermine EU’s AI Governance Efforts

How US Firms Are Weakening the EU AI Code of Practice

The EU AI Act establishes the first comprehensive framework for governing artificial intelligence (AI). For the most powerful models, known as general-purpose AI (GPAI), a Code of Practice is being drafted to streamline compliance for a small number of leading AI companies. The Code is being developed in an iterative process involving nearly 1,000 stakeholders from industry, civil society, and academia, organized into working groups chaired by 13 independent experts.

The final text of the Code is expected by August 2025. Yet as the process nears completion, the European Commission has granted a select few leading US companies privileged access to the drafting process, which they are using to push for a diluted version of the Code. This casts doubt on the legitimacy of the process and undermines the AI Act's purpose of serving the interests of European citizens.

An Inclusive Process, But for Whom?

The drafting of the Code of Practice has been unprecedented in its inclusivity. GPAI providers were given a special role from the outset, but as the process concludes, the critical question is whether influential US companies will accept that rules for GPAI are a matter of public interest and cannot be dictated by them alone. By lobbying the European Commission to prioritize their interests, these companies jeopardize the entire process and compromise their credibility as responsible corporate citizens.

Ironically, industry conflates weak regulation with innovation, hoping to capitalize on the Commission's recent push to position the EU as a global leader in AI. This perspective is fundamentally flawed: critics argue that Europe's real problems are not stringent rules but market fragmentation and slow AI adoption.

The Code Has Become Overly Politicized

In a bid to project an innovation-friendly image and ease transatlantic tensions, some EU officials have come to treat endorsements from Big Tech as essential to the Code's success. This mindset undermines the Code's true objectives, as it allows providers to use the threat of not signing as leverage to dilute its substance. US companies have also framed their refusal to sign as a gesture of solidarity with the US government, which is increasingly hostile to European digital regulation.

The Code is intended to serve as a technical compliance tool. Providers that decline to adhere to it must demonstrate through alternative means that they meet the obligations of the AI Act. While the Code offers a clear path to compliance, these alternatives are considerably more cumbersome and costly.

The "Complain, Then Comply" Strategy

A fundamental purpose of regulation is to align profit-driven companies with the public interest. Companies typically respond to new rules with resistance, claiming they are unworkable. Yet history shows that firms like Google eventually complied with regulations they had initially deemed unfeasible.

The European Commission must not give in to these lobbying tactics. Companies may voice discontent with new regulations, but the Commission must ensure that the Code reflects the intent of the AI Act and prioritizes the interests and rights of European citizens. A special committee within the European Parliament has been established to monitor the implementation of the AI Act, signaling a commitment to enforcement.

Resist the Pressure

The European Commission has a duty to uphold the integrity of the Code of Practice and to keep it aligned with the spirit of the AI Act as agreed by the co-legislators. Protecting the rights of European citizens and the public interest is essential. If the work of nearly 1,000 stakeholders were to give way to the demands of a few leading AI companies, it would deal a serious blow to civic engagement and democracy in the EU.

Ultimately, the Commission has the authority to adopt the Code, even in a more stringent form, without the signatures of the companies concerned. Doing so would establish the Code as the official benchmark for assessing GPAI compliance with the AI Act. Non-signatories wishing to access the European market would then be held to the standard of care the Code sets, turning it into a de facto global norm.
