EU Considers Delay in AI Act Enforcement Amid Industry Pushback

Will the EU Delay Enforcing its AI Act?

As the deadline approaches for the enforcement of parts of the European Union’s AI Act, a growing number of companies and politicians are advocating for a delay. The provisions in question are scheduled to take effect on August 2, 2025, and have become a focal point of debate as stakeholders raise concerns about how they will be implemented.

Current Situation

With less than a month remaining before the AI Act’s provisions are scheduled to take effect, numerous companies, particularly those in the tech sector, are calling for a pause. Groups representing major U.S. tech firms, including Google and Meta, as well as European companies like Mistral and ASML, have urged the European Commission to postpone the AI Act’s enforcement by several years.

The AI Act is designed to regulate the use of artificial intelligence technologies, with a particular focus on general-purpose AI (GPAI) models. Its provisions set standards for, among other things, transparency and fairness in AI systems.

Implications of the AI Act

The enforcement of the AI Act is expected to impose additional compliance costs on AI companies. The requirements, especially for those developing AI models, are widely seen as stringent. Key provisions include:

  • Transparency requirements for foundation models, necessitating detailed documentation and compliance with EU copyright laws.
  • Obligations to test AI systems for bias, toxicity, and robustness prior to their launch.
  • For high-risk GPAI models, mandatory model evaluations, risk assessments, and reporting of serious incidents to the European Commission.

Concerns Over Compliance

Many companies are expressing uncertainty about how to comply with the new rules, owing to the absence of clear guidelines. The AI Code of Practice, intended to help AI developers navigate the regulations, has already missed its May 2, 2025 publication deadline.

A coalition of 45 European companies has formally requested a two-year ‘clock-stop’ on the AI Act, citing the need for clarity and simplification of the new rules. They argue that without proper guidelines, the current environment creates significant uncertainty for AI developers.

Political Reactions

Some political leaders, including Swedish Prime Minister Ulf Kristersson, have called the AI rules “confusing” and suggested a pause in their implementation. The European AI Board is currently deliberating on the timing of the Code of Practice, with a potential delay until later in 2025 under consideration.

The Future of AI Regulation in Europe

While the European Commission is preparing to enforce the GPAI rules, publication of key guidance documents is expected to slip six months beyond the original deadline. This has prompted tech lobbying groups to call for urgent intervention to provide legal certainty for AI developers.

As the landscape of AI regulation evolves, the balance between fostering innovation and ensuring compliance remains a critical concern. The forthcoming decisions regarding the AI Act will significantly shape the future of AI development and deployment within the European Union.
