AI Transparency Code: Guidelines for Ethical Content Marking

AI Office Publishes Its First Code of Practice on AI‑Generated Content Transparency

The EU AI Office has released the first draft of its Code of Practice on the transparency of AI-generated content. The Code provides voluntary guidelines for marking and labeling AI outputs, whether audio, image, video, or text, so that users know when content has been artificially generated or manipulated, in line with the obligations under Article 50 of the AI Act, which become enforceable in August 2026.

Obligations of AI System Providers

Providers of AI systems are tasked with the following:

Marking

Providers must ensure that all outputs are marked in a machine-readable format indicating that they have been artificially generated or manipulated. This includes:

  • Marking AI-generated or manipulated content with an imperceptible watermark.
  • Preserving these marks even when the content is used as input for further transformations.
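
As a purely illustrative sketch (the draft Code does not prescribe any particular format, and every name and field below is an assumption), a machine-readable mark could be carried as a small provenance manifest attached to the output and signed so that downstream tools can tell whether the mark survived later transformations:

    import hashlib
    import hmac
    import json
    from datetime import datetime, timezone

    def build_provenance_manifest(provider: str, model: str, signing_key: bytes) -> dict:
        """Build a hypothetical machine-readable mark for a piece of AI output."""
        manifest = {
            "ai_generated": True,   # content is artificially generated or manipulated
            "provider": provider,   # who operates the generating system
            "model": model,         # which system produced the output
            "created_utc": datetime.now(timezone.utc).isoformat(),
        }
        # Sign the manifest so later processing steps (or detectors) can check
        # that the mark has been preserved intact rather than stripped or edited.
        payload = json.dumps(manifest, sort_keys=True).encode("utf-8")
        manifest["signature"] = hmac.new(signing_key, payload, hashlib.sha256).hexdigest()
        return manifest

    print(json.dumps(build_provenance_manifest("ExampleAI", "example-model-1", b"demo-key"), indent=2))

Metadata of this kind would complement, not replace, an imperceptible watermark embedded in the media itself, since metadata is easier to strip than an in-content mark.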

Detection

To facilitate detection, providers must enable users and third parties to verify whether content originates from or has been altered by their AI systems. This includes:

  • Providing a free interface or publicly available detector to check whether the content has been generated or manipulated by their AI systems.
  • Maintaining detection mechanisms throughout the AI system’s lifecycle.
  • Implementing forensic detection mechanisms that do not rely on active AI marking.

Technical solutions offered by providers must be:

  • Effective and computationally efficient.
  • Low-cost and capable of real-time application.
  • Interoperable and robust against common alterations.
  • Reliable, maintaining content quality.
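
To make the detection obligation concrete, here is a minimal, hypothetical sketch of the check a provider's free verification interface might perform against the illustrative manifest above. It covers only metadata-based marks, not the forensic detection that must work without active marking, and the function name is an assumption:

    import hashlib
    import hmac
    import json

    def verify_provenance_manifest(manifest: dict, signing_key: bytes) -> bool:
        """Return True if the (hypothetical) mark is present and has not been altered."""
        claimed = manifest.get("signature")
        if claimed is None:
            return False  # no machine-readable mark found
        unsigned = {k: v for k, v in manifest.items() if k != "signature"}
        payload = json.dumps(unsigned, sort_keys=True).encode("utf-8")
        expected = hmac.new(signing_key, payload, hashlib.sha256).hexdigest()
        # Constant-time comparison avoids leaking information about the expected value.
        return hmac.compare_digest(expected, claimed)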

Obligations of AI System Deployers

Deployers have complementary obligations that enhance transparency throughout the AI value chain. They must:

  • Disclose that content has been artificially generated or manipulated, in particular where text is published to inform the public (unless the text has undergone human review and a person holds editorial responsibility for it).
  • Identify AI-generated content by using a common icon in a visible and consistent location. An interim icon may be used until a standardized one is developed.

Specific Measures for Deepfake Disclosure

For deepfake video content:

  • Real-time deepfake videos must display the icon non-intrusively during exposure.
  • A disclaimer must be included at the beginning of the exposure stating that the content includes a deepfake.

For non-real-time deepfake videos, disclaimers can be placed at the beginning or consistently throughout the content. In artistic or creative works, the icon should be displayed for a sufficient duration.

Specific Measures for AI-Generated or Manipulated Text

When publishing AI-generated or manipulated text intended to inform the public, the icon must be displayed in a fixed and clear position, such as:

  • At the top of the text.
  • Beside the text.
  • In the colophon or after the closing sentence.
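
As an illustration only, a deployer's publishing pipeline might attach an interim visible label to AI-generated text before publication. The label wording and the helper below are hypothetical, and the human-review exception mirrors the deployer obligation described above:

    # Interim textual label; hypothetical until a standardized icon is agreed.
    AI_DISCLOSURE_LABEL = "[AI-generated content]"

    def publish_with_disclosure(body: str, human_reviewed: bool) -> str:
        """Prepend a visible AI disclosure unless the text has undergone human review."""
        if human_reviewed:
            # Text reviewed by a human who assumes editorial responsibility
            # does not require the disclosure, per the deployer obligations above.
            return body
        # Fixed, clear position: at the top of the text.
        return f"{AI_DISCLOSURE_LABEL}\n\n{body}"

    print(publish_with_disclosure("Market summary drafted by a language model.", human_reviewed=False))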

Disclaimer: The information provided may not be applicable in all situations and should not be acted upon without specific legal advice based on particular circumstances.
