Deregulation Threatens the Future of the AI Act


In August 2024, the EU’s Artificial Intelligence Act was adopted following years of intense negotiations, numerous trilogues, and countless amendments. However, just a month later, it became a target for criticism. Former European Central Bank President Mario Draghi identified it as a regulatory barrier detrimental to the tech sector in his report on European competitiveness.

The European Commission soon announced its intention to pursue five simplification initiatives, including an assessment of the EU digital rulebook to ensure it meets the needs of businesses, particularly small and medium-sized enterprises (SMEs). Among the significant targets for this assessment was the AI Act, alongside the General Data Protection Regulation (GDPR).

Concerns about a potential weakening of the AI Act intensified after statements at the Paris AI Summit, where Commission President Ursula von der Leyen promised to cut red tape. Shortly afterwards, the Commission withdrew the proposed AI Liability Directive from its 2025 work programme, prompting backlash from civil society and members of the European Parliament.

Despite some reassurance from the European Commission’s AI Continent Action Plan, which focused on providing guidance rather than revisiting the AI Act, the writing was on the wall. The consultation linked to the plan invited stakeholders to propose measures for simplifying the AI Act, signaling a potential shift towards deregulation.

Simplification Is a Dangerous Misnomer

The European Commission indicated that the primary focus of simplification would be on the reporting obligations under the AI Act. While the Commission claimed that changes would be targeted, there is a risk that even minor adjustments could have significant implications for AI safety. A key reporting obligation involves notifying authorities about serious incidents that may lead to harm or infringement of fundamental rights.

The AI Act’s reporting requirements are crucial for identifying and mitigating real harms caused by AI systems. It is therefore concerning that these obligations, already criticized as insufficient, are now proposed for simplification. For instance, providers can self-assess that their system falls outside the high-risk category and opt out of the corresponding obligations without notifying regulators, a loophole that undermines accountability.

The Intersection with Existing Laws

Proponents of simplification often argue that the obligations of the AI Act overlap with those in other EU regulations, particularly the GDPR. However, this line of reasoning has been challenged, as the AI Act specifically addresses these intersections and clarifies how to comply with both frameworks.

Even where genuine overlap could be better managed, the AI Act already provides a pathway: guidelines, still under development, that clarify its relationship with other EU laws. Any calls for changes that go beyond such necessary clarifications under the guise of simplification should be treated with caution.

Evidence-Based Rule-Making Must Remain Central

While the specifics of any proposed amendments to the AI Act remain undefined, the prospect of reopening a law that has yet to be fully applied and enforced raises alarms, particularly in light of industry calls for deregulation. A recent OpenAI document, the EU Economic Blueprint, criticized EU regulations as impediments to innovation and urged policymakers to identify which rules should be preserved and which discarded.

The EU AI Champions Initiative, supporting a significant investment in AI innovation, echoed similar sentiments regarding the AI Act’s market uncertainty due to unclear risk categorization. Such arguments exert pressure on the European Commission to adopt a broad-brush approach in their review, posing risks of substantial deregulation.

The simplification initiatives already introduced this year serve as a cautionary tale. The first omnibus package, initially presented as a way to reduce overlapping obligations, instead significantly diluted existing legal frameworks without adequate consultation. The process drew criticism from environmental NGOs and illustrates the risk of prioritizing industry interests over individual rights.

The European Commission must learn from these experiences and ensure that any review of the AI Act is rooted in evidence and broadly consulted. As the AI Act faces increasing threats, decision-makers should prioritize robust assessments of any proposed amendments to protect the core strengths of the EU digital rulebook and the fundamental rights secured within the AI Act.
