EU Considers Delay in AI Act Enforcement Amid Industry Pushback

With enforcement of key parts of the European Union’s AI Act approaching, a growing number of companies and politicians are calling for a delay. The provisions in question are set to apply from August 2, 2025, and their implementation has become a focal point of debate among stakeholders.

Current Situation

With less than a month remaining before the AI Act’s provisions are scheduled to take effect, numerous companies, particularly those in the tech sector, are calling for a pause. Groups representing major U.S. tech firms, including Google and Meta, as well as European companies like Mistral and ASML, have urged the European Commission to postpone the AI Act’s enforcement by several years.

The AI Act is designed to regulate the use of artificial intelligence technologies, particularly focusing on general purpose AI (GPAI) models. These regulations aim to ensure compliance with various standards, including transparency and fairness in AI systems.

Implications of the AI Act

The enforcement of the AI Act is expected to impose additional compliance costs on AI companies, and the requirements for those developing AI models in particular are widely viewed as stringent. Key provisions include:

  • Transparency requirements for foundation models, necessitating detailed documentation and compliance with EU copyright laws.
  • Obligations to test AI systems for bias, toxicity, and robustness prior to their launch.
  • For high-risk GPAI models, mandatory model evaluations, risk assessments, and reporting of serious incidents to the European Commission.

Concerns Over Compliance

Many companies are expressing uncertainty regarding compliance with the new rules due to the absence of clear guidelines. The AI Code of Practice, intended to assist AI developers in navigating the regulations, has already missed its publication deadline, which was set for May 2, 2025.

A coalition of 45 European companies has formally requested a two-year ‘clock-stop’ on the AI Act, citing the need for clarity and simplification of the new rules. They argue that without proper guidelines, the current environment creates significant uncertainty for AI developers.

Political Reactions

Some political leaders, including Swedish Prime Minister Ulf Kristersson, have labeled the AI rules “confusing” and suggested pausing their implementation. The European AI Board is currently deliberating on the timing of the Code of Practice, with a potential extension to later in 2025 under consideration.

The Future of AI Regulation in Europe

While the European Commission is preparing to enforce the GPAI rules, the publication of crucial guidance documents is expected to slip six months past the original deadline. This has prompted tech lobbying groups to call for urgent intervention to give AI developers legal certainty.

As the landscape of AI regulation evolves, the balance between fostering innovation and ensuring compliance remains a critical concern. The forthcoming decisions regarding the AI Act will significantly shape the future of AI development and deployment within the European Union.
