Big Tech Influences EU’s AI Code of Practice

Big Tech’s Influence on AI Regulations

Recent reports indicate that Big Tech companies have exerted considerable pressure on the European Commission to dilute the Code of Practice on General Purpose AI. This Code is intended to assist AI model providers in adhering to the EU’s AI Act, but findings from a joint investigation by Corporate Europe Observatory (CEO) and LobbyControl suggest that the rules surrounding advanced AI have been significantly weakened.

Access Disparities in Drafting Process

According to the report, tech companies enjoyed structural advantages during the drafting of the Code, which began in September 2024. Thirteen experts appointed by the European Commission steered the process through plenary sessions and workshops, in which around 1,000 participants could provide feedback.

However, the research highlights that model providers—including prominent tech giants such as Google, Microsoft, Meta, Amazon, and OpenAI—had exclusive access to dedicated workshops with the working group chairs. In contrast, other stakeholders such as civil society organizations, publishers, and small and medium-sized enterprises (SMEs) had only limited avenues for participation, primarily emoji-based upvoting of questions and comments on the online platform Slido.

Concerns Over Copyright and Innovation

The drafting process has drawn criticism from several quarters, particularly rights holders and publishers, who are concerned that the Code may conflict with existing copyright law. The backlash raises questions about the balance between innovation and regulation in the fast-evolving landscape of AI technology.

Political Pressure and Delays

Adding to the complexity, a Commission spokesperson confirmed receipt of a letter from the US government’s Mission to the EU expressing concerns about the Code. The administration of Republican President Donald Trump has criticized the EU’s digital regulations as inhibiting innovation.

Researcher Bram Vranken of CEO stated, “The EU Commission’s obsession with ‘simplification’ and ‘competitiveness’ is opening the door to aggressive Big Tech lobbying. The Code of Practice is only among the first casualties of this single-minded focus on deregulation.”

Future of the Code and Anticipated Publication

The final version of the Code was initially scheduled for release in early May, but it now appears likely to be delayed: the Commission has yet to confirm whether the 2 May deadline will be met. Both the guidelines on general-purpose AI and the final General-Purpose AI Code of Practice are now expected to be published in May or June 2025.

According to a consultation document from the Commission, the final text is projected to be released before August 2025, when the rules governing general-purpose AI models are set to come into effect. The EU executive retains the option to formalize the Code through an implementing act, which would make it fully applicable by 2027.

This evolving situation underscores the ongoing tension between regulatory frameworks and technological advancement, highlighting the critical role that lobbying and stakeholder access play in shaping public policy.

More Insights

Transforming Corporate Governance: The Impact of the EU AI Act

This research project investigates how the EU Artificial Intelligence Act is transforming corporate governance and accountability frameworks, compelling companies to reconfigure responsibilities and...

AI-Driven Cybersecurity: Bridging the Accountability Gap

As organizations increasingly adopt AI to drive innovation, they face a dual challenge: while AI enhances cybersecurity measures, it simultaneously facilitates more sophisticated cyberattacks. The...

Thailand’s Comprehensive AI Governance Strategy

Thailand is drafting principles for artificial intelligence (AI) legislation aimed at establishing an AI ecosystem and enhancing user protection from potential risks. The legislation will remove legal...

Texas Implements Groundbreaking AI Regulations in Healthcare

Texas has enacted comprehensive AI governance laws, including the Texas Responsible Artificial Intelligence Governance Act (TRAIGA) and Senate Bill 1188, which establish a framework for responsible AI...

AI Governance: Balancing Innovation and Oversight

Riskonnect has launched its new AI Governance solution, enabling organizations to manage the risks and compliance obligations of AI technologies while fostering innovation. The solution integrates...

AI Alignment: Ensuring Technology Serves Human Values

Gillian K. Hadfield has been appointed as the Bloomberg Distinguished Professor of AI Alignment and Governance at Johns Hopkins University, where she will focus on ensuring that artificial...

The Ethical Dilemma of Face Swap Technology

As AI technology evolves, face swap tools are increasingly misused for creating non-consensual explicit content, leading to significant ethical, emotional, and legal consequences. This article...

The Illusion of Influence: The EU AI Act’s Global Reach

The EU AI Act, while aiming to set a regulatory framework for artificial intelligence, faces challenges in influencing other countries due to differing legal and cultural values. This has led to the...
