EU Lawmakers Challenge Deregulation of AI Act Amid Concerns for Safety

EU Lawmakers Oppose Voluntary AI Act Compliance Rules

The European Commission is facing significant opposition from EU lawmakers over its proposal to make more of the AI Act's requirements voluntary. The move is seen as benefiting large AI developers such as Google and OpenAI, whose compliance burdens would shrink.

Key Takeaways

  • The European Commission is reportedly considering making parts of the AI Act voluntary.
  • This proposal could favor major AI developers.
  • However, it is encountering pushback from the EU Parliament.

As the European Commission explores deregulation as a way to boost AI investment, its intention to dilute the requirements of the landmark AI Act has sparked intense debate. Lawmakers in the EU Parliament are firmly opposing the proposal, emphasizing the importance of maintaining stringent standards in AI development.

Deregulation on the Agenda

Since the AI Act was enacted in 2024, the political landscape has shifted dramatically. The Act was originally designed with safety and responsibility in mind, but calls for a more laissez-faire approach have arisen from concerns about European competitiveness. Proponents of deregulation argue that easing restrictions could foster innovation and economic growth.

In a significant development, the Commission removed a proposed AI liability directive from its 2025 work program, citing “no foreseeable agreement” on the legislation. This decision aligns with the Commission’s broader agenda to cut bureaucratic red tape and streamline regulations impacting businesses.

U.S. Pressure

The EU’s push for deregulation is not occurring in isolation; it is influenced by external pressures, notably from the U.S. administration. A recent White House memorandum explicitly referenced the EU’s Digital Markets Act and Digital Services Act, indicating a growing apprehension that the AI Act could pose a threat to American businesses.

In this geopolitical context, U.S. tech giants like Google and OpenAI stand to gain significantly from a relaxation of the AI Act. If compliance requirements are made voluntary, these companies could operate under a much lighter regulatory framework.

MEPs Warn Against Weakening the AI Act

Members of the European Parliament (MEPs) who were instrumental in negotiating the AI Act have voiced strong objections to the Commission’s plans. They argue that weakening the Act would be “dangerous” and “undemocratic.” A letter drafted by these MEPs warns that failing to hold AI developers to high standards of safety and security could have severe repercussions for Europe’s economy and democracy.

The hierarchy of EU lawmaking complicates the process, as MEPs have limited power to block the Commission’s proposed changes. However, member states retain significant influence and may push back against deregulation efforts.

Notably, the letter opposing the weakening of the AI Act garnered support from Carme Artigas, a key negotiator on behalf of member states during the Act's drafting. This coalition may provide a counterbalance to the Commission's efforts, which matters given that countries such as France have historically resisted stricter AI regulation.

As the debate continues, the future of the AI Act remains uncertain, with potential implications for the landscape of AI development and regulation in Europe.

More Insights

Revolutionizing Drone Regulations: The EU AI Act Explained

The EU AI Act represents a significant regulatory framework that aims to address the challenges posed by artificial intelligence technologies in various sectors, including the burgeoning field of...

Embracing Responsible AI to Mitigate Legal Risks

Businesses must prioritize responsible AI as a frontline defense against legal, financial, and reputational risks, particularly in understanding data lineage. Ignoring these responsibilities could...

AI Governance: Addressing the Shadow IT Challenge

AI tools are rapidly transforming workplace operations, but much of their adoption is happening without proper oversight, leading to the rise of shadow AI as a security concern. Organizations need to...

EU Delays AI Act Implementation to 2027 Amid Industry Pressure

The EU plans to delay the enforcement of high-risk duties in the AI Act until late 2027, allowing companies more time to comply with the regulations. However, this move has drawn criticism from rights...

White House Challenges GAIN AI Act Amid Nvidia Export Controversy

The White House is pushing back against the bipartisan GAIN AI Act, which aims to prioritize U.S. companies in acquiring advanced AI chips. This resistance reflects a strategic decision to maintain...

Experts Warn of EU AI Act’s Impact on Medtech Innovation

Experts at the 2025 European Digital Technology and Software conference expressed concerns that the EU AI Act could hinder the launch of new medtech products in the European market. They emphasized...

Ethical AI: Transforming Compliance into Innovation

Enterprises are racing to innovate with artificial intelligence, often without the proper compliance measures in place. By embedding privacy and ethics into the development lifecycle, organizations...

AI Hiring Compliance Risks Uncovered

Artificial intelligence is reshaping recruitment, with the percentage of HR leaders using generative AI increasing from 19% to 61% between 2023 and 2025. However, this efficiency comes with legal...