AI Office Publishes Its First Code of Practice on AI‑Generated Content Transparency
The EU AI Office has released its first draft Code of Practice on the transparency of AI-generated content. The Code provides voluntary guidelines for marking and labeling AI outputs, whether audio, image, video, or text, so that users know when content has been artificially generated or manipulated. It supports the obligations set out in Article 50 of the AI Act, which become enforceable in August 2026.
Obligations of AI System Providers
Providers of AI systems must meet the following obligations:
Marking
Providers must ensure that all outputs are marked in a machine-readable format indicating that they have been artificially generated or manipulated. This includes:
- Marking AI-generated or manipulated content with an imperceptible watermark.
- Preserving these marks even when the content is used as input for further transformations.
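To make the marking requirement concrete, here is a toy sketch that is not part of the Code and is far weaker than production schemes (such as C2PA provenance metadata or statistical watermarks): it encodes a small machine-readable payload into text output using zero-width Unicode characters. The function names and the "AI" payload are illustrative assumptions, not anything the Code prescribes.

```python
# Toy illustration of machine-readable marking for text output.
# Zero-width characters are invisible when rendered but survive
# copy-and-paste, so they can carry a hidden bit string.
ZW0 = "\u200b"  # zero-width space     -> bit 0
ZW1 = "\u200c"  # zero-width non-joiner -> bit 1
TAG = "AI"      # hypothetical marker payload

def encode_marker(payload: str) -> str:
    """Encode a payload string as a run of zero-width characters."""
    bits = "".join(f"{ord(c):08b}" for c in payload)
    return "".join(ZW1 if b == "1" else ZW0 for b in bits)

def mark_text(text: str, payload: str = TAG) -> str:
    """Append an invisible, machine-readable marker to generated text."""
    return text + encode_marker(payload)
```

Note that such a scheme would fail the Code's robustness expectations in practice (the marker is trivially stripped); it is shown only to make "machine-readable marking" tangible.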
Detection
To facilitate detection, providers must enable users and third parties to verify whether content was generated or manipulated by their AI systems. This includes:
- Providing a free interface or publicly available detector to check whether the content has been generated or manipulated by their AI systems.
- Maintaining detection mechanisms throughout the AI system’s lifecycle.
- Implementing forensic detection mechanisms that do not rely on active AI marking.
Technical solutions offered by providers must be:
- Effective and computationally efficient.
- Low cost while ensuring real-time application.
- Interoperable and robust against common alterations.
- Reliable, without degrading content quality.
Obligations of AI System Deployers
Deployers have complementary obligations that enhance transparency throughout the AI value chain. They must:
- Disclose that content has been artificially generated or manipulated, in particular where text is published to inform the public (unless the text has undergone human review or editorial control).
- Identify AI-generated content by using a common icon in a visible and consistent location. An interim icon may be used until a standardized one is developed.
Specific Measures for Deepfake Disclosure
For deepfake video content:
- Real-time deepfake videos must display the icon non-intrusively during exposure.
- A disclaimer must be included at the beginning of the exposure, explaining that the content includes a deepfake.
For non-real-time deepfake videos, disclaimers can be placed at the beginning or consistently throughout the content. In artistic or creative works, the icon should be displayed for a sufficient duration.
Specific Measures for AI-Generated or Manipulated Text
When publishing AI-generated or manipulated text intended to inform the public, the icon must be displayed in a fixed and clear position, such as:
- At the top of the text.
- Beside the text.
- In the colophon or after the closing sentence.
Disclaimer: The information provided may not be applicable in all situations and should not be acted upon without specific legal advice based on particular circumstances.