Deregulation Threatens the Future of the AI Act

Europe’s Deregulatory Turn Puts the AI Act at Risk

In August 2024, the EU’s Artificial Intelligence Act was adopted following years of intense negotiations, numerous trilogues, and countless amendments. Yet just a month later, it became a target for criticism: in his report on European competitiveness, former European Central Bank President Mario Draghi identified it as a regulatory barrier holding back the tech sector.

The European Commission soon announced its intention to pursue five simplification initiatives, including an assessment of the EU digital rulebook to ensure it meets the needs of businesses, particularly small and medium-sized enterprises (SMEs). Among the significant targets for this assessment was the AI Act, alongside the General Data Protection Regulation (GDPR).

Concerns that the AI Act could be weakened intensified after statements made at the French AI Summit. Commission President Ursula von der Leyen promised to cut red tape, and the proposed AI Liability Directive was subsequently withdrawn from the Commission’s 2025 work programme, prompting backlash from civil society and members of the European Parliament.

Despite some reassurance from the European Commission’s AI Continent Action Plan, which focused on providing guidance rather than revisiting the AI Act, the writing was on the wall. The consultation linked to the plan invited stakeholders to propose measures for simplifying the AI Act, signaling a potential shift towards deregulation.

Simplification Is a Dangerous Misnomer

The European Commission indicated that the primary focus of simplification would be on the reporting obligations under the AI Act. While the Commission claimed that changes would be targeted, there is a risk that even minor adjustments could have significant implications for AI safety. A key reporting obligation involves notifying authorities about serious incidents that may lead to harm or infringement of fundamental rights.

The AI Act’s reporting requirements are crucial for identifying and mitigating real harms associated with AI technologies. It is therefore concerning that these obligations, already perceived by many as insufficient, are now proposed for simplification. For instance, providers can opt out of the high-risk classification through self-assessment without notifying regulators, a loophole that undermines accountability.

The Intersection with Existing Laws

Proponents of simplification often argue that the obligations of the AI Act overlap with those in other EU regulations, particularly the GDPR. However, this line of reasoning has been challenged, as the AI Act specifically addresses these intersections and clarifies how to comply with both frameworks.

Even where overlaps could be better managed, the AI Act already provides a pathway: guidelines on its relationship with other EU laws, which are still under development. Any calls for changes that go beyond such necessary clarifications under the guise of simplification should be approached with caution.

Evidence-Based Rule-Making Must Remain Central

While the specifics of any proposed amendments to the AI Act remain undefined, the prospect of reopening a law that has barely begun to apply raises alarms, particularly in light of industry calls for deregulation. A recent OpenAI document, the EU Economic Blueprint, criticized EU regulations as impediments to innovation, urging policymakers to identify which rules should be preserved and which discarded.

The EU AI Champions Initiative, which backs large-scale investment in European AI, echoed similar sentiments, arguing that unclear risk categorization under the AI Act creates market uncertainty. Such arguments pressure the European Commission to take a broad-brush approach in its review, with the risk of substantial deregulation.

The first omnibus simplification package introduced this year serves as a cautionary tale: initially aimed at reducing overlapping obligations, it resulted in significant dilution of legal frameworks without adequate consultation. The process drew criticism from environmental NGOs and illustrates the risk of prioritizing industry interests over individual rights.

The European Commission must learn from these experiences and ensure that any review of the AI Act is rooted in evidence and broadly consulted. As the AI Act faces mounting threats, decision-makers should insist on robust impact assessments of any proposed amendments, so as to protect the core strengths of the EU digital rulebook and the fundamental rights the AI Act secures.
