AI Act Implementation Faces Calls for Delay from Industry Leaders

European and American Companies Call for AI Act Postponement

On July 4, 2025, dozens of European and American companies jointly urged the European Union (EU) to delay implementation of the AI Act by at least two years. The appeal stems from concerns that the legislation could stifle the development of artificial intelligence (AI) technologies within the EU.

The Collective Appeal

This call to action was encapsulated in a letter addressed to Ursula von der Leyen, the President of the European Commission. A total of 45 organizations, including notable players like ASML Holding NV, Airbus SE, and Mistral AI—the French counterpart to OpenAI—signed the document. Additionally, groups representing tech giants Google and Meta echoed similar sentiments, citing comparable concerns regarding the AI Act.

Implications of the AI Act

The European Commission has previously stated that regulations governing general-purpose AI (GPAI) models are slated to take effect on August 2, with enforcement expected to commence in 2026. The companies seeking a postponement are calling for a more innovation-friendly approach to the rules governing both general-purpose AI models and high-risk AI systems.

The Urgency of the Situation

The letter emphasizes the growing uncertainty surrounding the AI Act and its implications for the tech industry. “To address the uncertainty this situation is creating, we urge the Commission to propose a two-year ‘clock-stop’ on the AI Act before key obligations enter into force,” the letter states.

Testing and Compliance Requirements

Under the current provisions of the AI Act, companies must rigorously test their AI models for bias, toxicity, and robustness before public release. Developers are also expected to supply the European Commission with comprehensive technical documentation, adhere to EU copyright law, and maintain transparency about the content used to train their algorithms.

Reporting Obligations

Additionally, AI firms must submit periodic reports detailing their energy efficiency and any serious incidents related to their AI systems to the European Commission. The letter concludes by stating, “This postponement, coupled with a commitment to prioritize regulatory quality over speed, would send innovators and investors around the world a strong signal that Europe is serious about its simplification and competitiveness agenda.”
