Lobbyists Intensify Efforts Against AI Code of Practice

Lobbyists are making a last-ditch attempt to delay rules for General Purpose AI (GPAI) ahead of the European Commission’s expected publication of the much-anticipated voluntary Code of Practice in the coming days.

The Code, which will apply to multipurpose AI models that underpin technologies like OpenAI’s ChatGPT, has been surrounded by significant tension. Nearly 1,000 lobbyists and experts have participated in the drafting process alongside independent chairs.

Calls to Delay Implementation

In parallel, industry representatives and the Council’s Polish Presidency have suggested that the Commission “stop the clock” on the implementation of the AI Act, given that multiple guidelines and standards are still pending. In early June, Tech Commissioner Henna Virkkunen indicated to the Council that postponing parts of the act should “not be ruled out” if the necessary implementation tools are not ready.

The Code of Practice is intended to assist AI developers in complying with the law’s rules for GPAIs, which are expected to take effect on August 2.

Concerns Over Innovation

While the Commission has not formally closed the door on extending some AI Act deadlines, it has indicated that the rules for GPAIs will indeed apply in August. This has not deterred Big Tech lobby group CCIA Europe from appealing to EU heads of government to delay the GPAI rules. With the Code still not finalized weeks before the rules are set to take effect, CCIA Europe’s Head of Office, Daniel Friedlaender, warned that the EU risks “stalling innovation altogether.”

Support for AI Act Implementation

In response, academics and civil society groups have voiced their support through an open letter defending the AI Act and urging the Commission to “resist pressure” to derail the rules. The letter highlights systemic risks associated with GPAI models, citing potential threats related to cybersecurity as well as biological, radiological, and nuclear capabilities.

Members of the European Parliament (MEPs) have also expressed their support for the timely implementation of the act. MEP Michael McNamara, co-chair of the Parliament working group on the AI Act, emphasized that a considerable effort is now required to finalize the Code of Practice and the necessary standards for conformity assessment without further delays.

“A failure to bring the Code of Practice and the governance rules for GPAI models into force as planned this year would result in a significant loss of credibility for the EU that would extend far beyond the AI Act,” McNamara stated.

MEP Sergey Lagodinsky, who was involved in negotiating the AI Act, echoed the sentiment, asserting that “robust mechanisms” are necessary to ensure the law is effectively implemented and enforced.
