AI Transparency Talks Zero in on ‘Deceitful Intent’
Proposed rules for labeling AI deepfakes, being developed in a transparency code linked to the AI Act, are dividing experts involved in the EU’s drafting process.
Under the bloc’s AI law, companies must ensure that AI-generated content – images, video, audio – is watermarked, disclosing the role of AI in its creation. If an AI system is used to create a deepfake, such as a video apparently showing a real person talking, it must be marked as synthetic to avoid the risk of deception.
Challenges in Establishing Clear Rules
While the principle sounds simple, drawing up clear labeling rules turns out to hinge on a series of fine-grained decisions.
A fight is brewing between industry and civil society stakeholders over where the EU should draw the line on labeling AI-generated or AI-enhanced content. The code of practice is meant to give companies practical advice on complying with the AI Act’s transparency rules, originally set to enter into force in August 2026.
Leaning Towards Civil Society
A first draft of the code, published before Christmas, appears to lean in civil society’s direction. The independent experts chairing the process back labeling even “seemingly small” edits that alter a piece of content’s context – for example, using AI tools to strip background noise from an audio recording, which can make it seem as if a person was interviewed in a different setting.
Industry sources told Euractiv they oppose such all-encompassing watermarking rules, arguing that they would lead to labels appearing everywhere, thereby diluting their warning effect.
They also worry that certain industries – such as advertising – could be especially affected.
Consideration of Deceitful Intent
The key question in recent talks on the code has been whether deceitful intent should be taken into account when determining whether content must be labeled, several sources told Euractiv. The AI Act itself refers only broadly to “artificially generated or manipulated” content.
Voluntary Compliance and Stakeholder Input
The transparency code now taking shape is the second such piece of AI Act guidance – following last year’s much-lobbied code for general-purpose AI models (GPAIs).
As with the GPAI code, the final text will be voluntary: companies can choose whether to sign up. Those that do, however, are likely to be seen as following best practice, which could count in their favor in any formal AI Act compliance assessment.
Independent experts are chairing the process of writing these codes, with stakeholders including industry and civil society giving input – sometimes resulting in disagreements, as seems to be the case for the transparency code drafting process.
Watermarking Obligations
Similar discussions are taking place regarding separate rules requiring AI systems to apply a machine-readable watermark to the content they generate. These rules apply to all AI content, not just deepfakes, and place obligations on the developers of AI systems, not solely on deployers.
The AI Act specifically mentions that this watermarking should include AI-generated text. The first draft of the transparency code mentions (software) code as a specific type of AI-generated text – a step that industry sources viewed skeptically, claiming that watermarking code would reduce quality and questioning why such a step is necessary.
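To illustrate what a machine-readable marker can look like in practice, the sketch below embeds a simple provenance tag in an image file’s metadata using Python’s Pillow library. This is a hypothetical illustration only: the AI Act does not prescribe a format, the tag names here are invented for the example, and the schemes under discussion (such as C2PA-style content credentials or statistical watermarks for text) are considerably more robust than plain metadata, which can be stripped trivially.

```python
# Hypothetical illustration of a machine-readable provenance marker:
# a metadata tag embedded in a PNG file with Pillow. Real watermarking
# schemes (e.g. C2PA content credentials, statistical text watermarks)
# are far more robust; plain metadata like this is trivially removable.
from PIL import Image
from PIL.PngImagePlugin import PngInfo

def tag_as_ai_generated(src_path: str, dst_path: str, generator: str) -> None:
    """Write a simple AI-provenance marker into the image's metadata."""
    image = Image.open(src_path)
    metadata = PngInfo()
    metadata.add_text("ai_generated", "true")  # invented tag names,
    metadata.add_text("generator", generator)  # not an AI Act standard
    image.save(dst_path, pnginfo=metadata)

def read_ai_marker(path: str) -> dict:
    """Read back any provenance marker found in the file."""
    text_chunks = getattr(Image.open(path), "text", {})
    return {k: v for k, v in text_chunks.items()
            if k in ("ai_generated", "generator")}
```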
Shifting Timelines for Compliance
Whatever the final shape of the transparency code, companies will probably get more time to comply with the AI Act’s watermarking rules: the Commission has proposed pushing their application back to February 2027 for AI systems released before August 2026.
Additionally, final answers to some of the questions raised in the stakeholder discussions may have to wait for a separate Commission document.
While the transparency code is meant to supply practical detail on implementing the EU’s rules, the Commission itself is drafting separate guidelines addressing the rules’ scope and legal definitions.
This upcoming Commission document could clarify key terms in the transparency code discussions. However, it is not expected before June, around the time the code of practice is supposed to be finalized.