Big Tech Influences EU’s AI Code of Practice

Big Tech’s Influence on AI Regulations

Recent reports indicate that Big Tech companies have exerted considerable pressure on the European Commission to dilute the Code of Practice on General Purpose AI. This Code is intended to assist AI model providers in adhering to the EU’s AI Act, but findings from a joint investigation by Corporate Europe Observatory (CEO) and LobbyControl suggest that the rules surrounding advanced AI have been significantly weakened.

Access Disparities in Drafting Process

According to the report, tech companies enjoyed structural advantages during the drafting phase of the Code, which began in September. Thirteen experts appointed by the European Commission facilitated the process through plenary sessions and workshops, with around 1,000 participants able to provide feedback.

However, the research highlights that model providers—which include prominent tech giants such as Google, Microsoft, Meta, Amazon, and OpenAI—had exclusive access to dedicated workshops with the working group chairs. In contrast, other stakeholders like civil society organizations, publishers, and small and medium-sized enterprises (SMEs) faced limited participation, primarily interacting through emoji-based upvoting of questions and comments on the online platform SLIDO.

Concerns Over Copyright and Innovation

The drafting process has faced criticism from various stakeholders, particularly rights holders and publishers, who are concerned that the Code may conflict with existing copyright law. The backlash raises questions about the balance between innovation and regulation, particularly in the fast-evolving landscape of AI technology.

Political Pressure and Delays

Adding to the complexity, a spokesperson for the Commission confirmed receipt of a letter from the US government’s Mission to the EU. The letter expressed concerns about the Code, with the administration led by Republican President Donald Trump criticizing the EU’s digital regulations as inhibiting innovation.

Researcher Bram Vranken from CEO stated, “The EU Commission’s obsession with ‘simplification’ and ‘competitiveness’ is opening the door to aggressive Big Tech lobbying. The Code of Practice is only among the first casualties of this single-minded focus on deregulation.”

Future of the Code and Anticipated Publication

While the final version of the Code was initially scheduled for release in early May, it now appears likely to be delayed: the Commission has yet to confirm whether the 2 May deadline will be met. Both the guidelines on general-purpose AI and the final General-Purpose AI Code of Practice are nonetheless expected to be published in May or June 2025.

According to a consultation document from the Commission, the final text is projected to be released ahead of August 2025, when the rules governing general-purpose AI tools are set to come into effect. The EU executive retains the ability to formalize the Code through an implementing act, which would render it fully applicable by 2027.

This evolving situation underscores the ongoing tension between regulatory frameworks and technological advancement, highlighting the critical role that lobbying and stakeholder access play in shaping public policy.
