AI Transparency: Addressing Deceit in Deepfake Regulations

AI Transparency Talks Zero in on ‘Deceitful Intent’

Proposed rules for labeling AI deepfakes, being developed in a transparency code linked to the AI Act, are dividing experts involved in the EU’s drafting process.

Under the bloc’s AI law, companies must ensure that AI-generated content – images, video, audio – is watermarked, disclosing the role of AI in its creation. If an AI system is used to create a deepfake, such as a video apparently showing a real person talking, it must be marked as synthetic to avoid the risk of deception.

Challenges in Establishing Clear Rules

While the principle sounds simple, drawing up clear rules for such labeling is fraught with fine-grained decisions.

A fight is brewing between industry stakeholders and civil society groups over where the EU should draw the line on labeling AI-enhanced or AI-generated content. The aim of the code of practice is to give companies practical advice on how to comply with the AI Act’s transparency rules, originally set to enter into force in August 2026.

Leaning Towards Civil Society

A first draft of the code, published before Christmas, appears to lean in civil society’s direction, with the independent experts chairing the process backing AI labeling even for “seemingly small” edits that alter the context of content – such as using AI tools to remove noise from an audio recording, making it appear as if a person was interviewed in a different setting.

Industry sources told Euractiv they oppose such all-encompassing watermarking rules, arguing that they would lead to labels appearing everywhere, thereby diluting their warning effect.

They also worry that certain industries – such as advertising – could be especially affected.

Consideration of Deceitful Intent

The key question in recent talks on the code has been whether deceitful intent should be taken into account when determining whether content must be labeled, several sources told Euractiv. The AI Act itself only broadly refers to “artificially generated or manipulated” content.

Voluntary Compliance and Stakeholder Input

The transparency code now taking shape is the second such piece of AI Act guidance, following last year’s heavily lobbied code of practice for general-purpose AI (GPAI) models.

As with the GPAI code, the final code will be voluntary: companies can choose whether to sign up. But those that do are likely to be viewed as aligned with best practice, which should count in their favor in any formal AI Act compliance assessment.

Independent experts are chairing the process of writing these codes, with stakeholders including industry and civil society giving input – sometimes resulting in disagreements, as seems to be the case for the transparency code drafting process.

Watermarking Obligations

Similar discussions are taking place regarding separate rules for AI systems to apply a machine-readable watermark to content they generate. This applies to all AI content, not just deepfakes, and puts obligations on developers of AI systems, rather than solely deployers.

The AI Act specifically mentions that this watermarking should cover AI-generated text. The first draft of the transparency code singles out (software) code as a specific type of AI-generated text – a step that industry sources viewed skeptically, arguing that watermarking code would degrade its quality and questioning why such a step is necessary.
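To make the dispute concrete, below is a minimal sketch of one well-known technique for machine-readable text watermarking – hiding a payload in zero-width Unicode characters. Neither the AI Act nor the draft code prescribes this (or any) particular method; the function names and the payload here are illustrative assumptions, not anything from the drafting process.

```python
# Minimal sketch: encode a hidden payload as zero-width Unicode characters
# appended to generated text. Purely illustrative -- not a method mandated
# by the AI Act or the draft transparency code.

ZERO = "\u200b"  # ZERO WIDTH SPACE      -> bit 0
ONE = "\u200c"   # ZERO WIDTH NON-JOINER -> bit 1


def embed_watermark(text: str, payload: str) -> str:
    """Append an invisible, machine-readable bit string to the text."""
    bits = "".join(f"{byte:08b}" for byte in payload.encode("utf-8"))
    marker = "".join(ONE if bit == "1" else ZERO for bit in bits)
    return text + marker


def extract_watermark(text: str) -> str:
    """Recover the payload from any zero-width marker characters present."""
    bits = "".join("1" if ch == ONE else "0" for ch in text if ch in (ZERO, ONE))
    usable = len(bits) - len(bits) % 8  # ignore any trailing partial byte
    data = bytes(int(bits[i:i + 8], 2) for i in range(0, usable, 8))
    return data.decode("utf-8", errors="replace")


marked = embed_watermark("A generated paragraph.", "AI-generated")
assert marked != "A generated paragraph."  # differs, yet renders identically
print(extract_watermark(marked))           # -> AI-generated
```

The sketch also illustrates the industry objection: applied to source code rather than prose, such invisible characters would typically be stripped by formatters, flagged by linters, or could even alter program behavior in whitespace-sensitive contexts.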

Shifting Timelines for Compliance

Whatever the final shape of the transparency code, companies will probably get more time to comply with the AI Act’s watermarking rules: The Commission has proposed pushing their application back to February 2027 for AI systems released before August 2026.

Additionally, final answers to the questions raised in stakeholder discussions on the transparency code may have to await a separate document from the Commission.

In theory, the transparency code is meant to provide practical detail on implementing the EU’s rules, while the Commission itself is working on separate guidelines addressing the rules’ scope and legal definitions.

This upcoming Commission document could clarify key terms relevant to the transparency code discussions. However, it is expected only by June, around the time the code of practice is supposed to be finalized.
