Big Tech Influences EU’s AI Code of Practice

Big Tech’s Influence on AI Regulations

Recent reports indicate that Big Tech companies have exerted considerable pressure on the European Commission to dilute the Code of Practice on General Purpose AI. This Code is intended to assist AI model providers in adhering to the EU’s AI Act, but findings from a joint investigation by Corporate Europe Observatory (CEO) and LobbyControl suggest that the rules surrounding advanced AI have been significantly weakened.

Access Disparities in Drafting Process

According to the report, tech companies enjoyed structural advantages during the drafting phase of the Code, which began in September. Thirteen experts appointed by the European Commission facilitated the process through plenary sessions and workshops, allowing around 1,000 participants to provide feedback.

However, the research highlights that model providers—including prominent tech giants such as Google, Microsoft, Meta, Amazon, and OpenAI—had exclusive access to dedicated workshops with the working group chairs. In contrast, other stakeholders such as civil society organizations, publishers, and small and medium-sized enterprises (SMEs) had only limited avenues for participation, primarily emoji-based upvoting of questions and comments on the online platform Slido.

Concerns Over Copyright and Innovation

The drafting process has faced criticism from various stakeholders, particularly rights holders and publishers, who are concerned that the Code may conflict with existing copyright law. The backlash raises questions about the balance between innovation and regulation, particularly in the fast-evolving landscape of AI technology.

Political Pressure and Delays

Adding to the complexity, a Commission spokesperson confirmed receipt of a letter from the US government's Mission to the EU expressing concerns about the Code. The administration of US President Donald Trump has criticized the EU's digital regulations as inhibiting innovation.

Researcher Bram Vranken from CEO stated, “The EU Commission’s obsession with ‘simplification’ and ‘competitiveness’ is opening the door to aggressive Big Tech lobbying. The Code of Practice is only among the first casualties of this single-minded focus on deregulation.”

Future of the Code and Anticipated Publication

While the final version of the Code was initially scheduled for release in early May, it now appears likely to be delayed: the Commission has yet to confirm whether the 2 May deadline will be met. Both the guidelines on general-purpose AI and the final General-Purpose AI Code of Practice are nonetheless expected to be published in May or June 2025.

According to a consultation document from the Commission, the final text is projected to be released ahead of August 2025, when the rules governing general-purpose AI tools are set to come into effect. The EU executive retains the ability to formalize the Code through an implementing act, which would render it fully applicable by 2027.

This evolving situation underscores the ongoing tension between regulatory frameworks and technological advancement, highlighting the critical role that lobbying and stakeholder access play in shaping public policy.
