Europe’s New AI Act: Banning Nudification Tools and Strengthening Data Privacy

The European Union is advancing its efforts to refine the landmark EU AI Act, with the European Council proposing new amendments aimed at simplifying regulations while addressing emerging risks associated with artificial intelligence.

Proposed Amendments

On Friday, the Council released its position on updates to the EU AI Act, which includes a new ban on AI nudification tools and stricter standards for the use of sensitive personal data. The proposal is part of the broader "Omnibus VII" legislative package designed to streamline the EU's digital regulatory framework and reduce compliance burdens for businesses.

While these changes aim to make the rules more practical for companies, they also reflect growing concerns about the misuse of AI technologies and the need for stronger safeguards.

Targeting Harmful AI Content

One of the most significant changes proposed under the updated EU AI Act is a new prohibition targeting AI tools capable of generating non-consensual sexual or intimate imagery. According to the Council, the new provision explicitly bans “AI practices regarding the generation of non-consensual sexual and intimate content or child sexual abuse material.” This move comes as regulators across Europe increasingly confront the real-world harms caused by AI-generated deepfake content.

The proposal follows a similar step earlier this week when members of the European Parliament approved their version of the ban. The alignment between the two bodies suggests that restrictions on AI nudification tools are likely to remain in the final version of the EU AI Act once negotiations conclude.

Background on the Need for Regulation

The push for stricter rules was underscored by a high-profile incident involving the Grok chatbot developed by xAI, which reportedly generated millions of non-consensual intimate images that spread rapidly online, triggering widespread backlash. In response, the European Commission launched a formal investigation into the platform and its AI features earlier this year.

This incident highlighted the speed at which generative AI tools can produce and distribute harmful content, reinforcing the necessity for the EU AI Act to include mechanisms to address such risks.

Changes to High-Risk AI System Regulations

Alongside the prohibition, the proposed reforms adjust the timeline for implementing rules on high-risk AI systems, a key component of the EU AI Act. The European Commission had initially suggested delaying implementation of these rules by up to 16 months to give regulators time to develop the technical standards and tools needed for effective enforcement.

Under the Council’s proposal, the revised deadlines would be:

  • 2 December 2027 for stand-alone high-risk AI systems
  • 2 August 2028 for high-risk AI systems embedded in products

These extensions aim to provide organizations with clearer guidance and adequate preparation time while ensuring that the regulatory framework remains enforceable.

Stronger Safeguards for Sensitive Data

Another key amendment focuses on how organizations process sensitive personal data when developing or testing AI systems. The Council’s proposal reinstates the “strict necessity” standard for using special categories of personal data in bias detection and correction processes. Organizations must clearly justify why such data is required before using it to enhance algorithmic fairness.

This change reflects ongoing debates within Europe about balancing innovation with strong privacy protections, particularly as AI systems increasingly rely on large datasets.

Additionally, the updated EU AI Act proposal postpones the deadline for establishing national AI regulatory sandboxes until December 2027. These sandboxes are intended to allow companies to test AI technologies in controlled environments under regulatory supervision.

Simplifying Rules Without Weakening Oversight

The broader objective behind the proposed amendments is to simplify the complex network of digital regulations affecting businesses across the EU. As part of the Digital Omnibus initiative, the European Commission has been working to reduce administrative burdens while improving the consistency of AI rules across member states.

Marilena Raouna, Deputy Minister for European Affairs of the Republic of Cyprus, emphasized the importance of balancing innovation with regulatory clarity. “Streamlining the AI rules is essential for ensuring the EU’s digital sovereignty,” she stated, highlighting the urgency of reaching an agreement to enable the timely application of the AI Act.

What Happens Next for the EU AI Act

With the Council having formally adopted its negotiating position, discussions move to the next stage: the proposal will be negotiated with the European Parliament to finalize the updated framework.

While the process may still involve revisions, the latest developments signal that Europe remains committed to shaping global AI governance through the EU AI Act—balancing innovation, business competitiveness, and safeguards against emerging technological risks.

As generative AI tools continue to evolve rapidly, the debate surrounding their regulation is far from over. However, the Council’s latest proposal makes it clear: Europe is determined to tighten protections where AI misuse threatens privacy, safety, and trust in digital technologies.