AI Act Implementation Faces Calls for Delay from Industry Leaders

European and American Companies Call for AI Act Postponement

On July 4, 2025, dozens of European and American companies jointly urged the European Union (EU) to delay implementation of the AI Act by at least two years. The appeal stems from concerns that the legislation could stifle the development of artificial intelligence (AI) technologies within the EU.

The Collective Appeal

The appeal was set out in a letter addressed to Ursula von der Leyen, President of the European Commission. A total of 45 organizations signed the document, including ASML Holding NV, Airbus SE, and Mistral AI, the French counterpart to OpenAI. Groups representing tech giants Google and Meta have voiced similar concerns about the AI Act.

Implications of the AI Act

The European Commission has previously stated that rules governing general-purpose AI (GPAI) models are slated to take effect on August 2, 2025, with enforcement expected to begin in 2026. The companies seeking a postponement are calling for a more innovation-friendly regulatory approach to the rules governing general-purpose AI models and high-risk AI systems.

The Urgency of the Situation

The letter emphasizes the growing uncertainty surrounding the AI Act and its implications for the tech industry: “To address the uncertainty this situation is creating, we urge the Commission to propose a two-year ‘clock-stop’ on the AI Act before key obligations enter into force.”

Testing and Compliance Requirements

Under the current provisions of the AI Act, companies must rigorously test their AI models for bias, toxicity, and robustness before public release. Developers must also supply the European Commission with comprehensive technical documentation, adhere to EU copyright law, and be transparent about the content used to train their algorithms.

Reporting Obligations

AI firms must also submit periodic reports to the European Commission detailing their energy efficiency and any serious incidents involving their AI systems. The letter concludes: “This postponement, coupled with a commitment to prioritize regulatory quality over speed, would send innovators and investors around the world a strong signal that Europe is serious about its simplification and competitiveness agenda.”
