Spain’s New Bill Mandates Labelling of AI-Generated Content

Government Approves Draft Bill for Mandatory Labelling of AI-Generated Content in Spain

On March 12, 2025, Spain’s central government approved a significant draft bill aimed at regulating the ethical, inclusive, and beneficial use of artificial intelligence (AI). This proposal aligns with existing EU regulations that prioritize responsible AI utilization.

Context of the Regulation

The new law responds to the evolving landscape of digital rights, according to Óscar López, Minister for Digital Transformation and Civil Service. He emphasized that the regulation completes a comprehensive European shield of digital rights, protecting citizens in today’s digital environment.

“This regulation is pioneering at both the national and international level,” López remarked, highlighting its aim of harmonizing national legislation concerning emerging digital tools.

Key Features of the Draft Bill

The draft bill, officially titled the Draft Bill for the Ethical, Inclusive and Beneficial Use of Artificial Intelligence, is being processed under an urgent (expedited) parliamentary procedure. Many of the practices it addresses are already regulated at the EU level.

Notably, the European AI Act, which entered into force in August 2024, already regulates AI usage by prohibiting systems that employ deceptive practices or manipulate individual behavior. Its restrictions include:

  • Placing on the market or putting into service AI systems that exploit vulnerabilities related to age, disability, or social or economic situation.
  • Assessing individuals through scoring based on their social behavior or characteristics.
  • Using AI to predict criminal behavior solely based on personal profiles.
  • Expanding facial recognition databases indiscriminately.
  • Inferring emotions in workplace or educational settings.

Labelling AI-Generated Content

A critical aspect of the bill is the mandatory labelling of deepfakes. López stated, “With this draft bill, we identify risks and banned practices, such as subliminal techniques or biometric recognition in public spaces.”

The Spanish authorities tasked with enforcing compliance will include:

  • The Spanish Data Protection Agency for biometric-related cases.
  • The General Council of the Judiciary for matters linked to the justice system.
  • The Central Electoral Board for AI applications in electoral processes.
  • The Spanish Agency for the Supervision of Artificial Intelligence (AESIA) for overseeing other applications.

Penalties for Non-Compliance

The draft bill proposes severe penalties for serious violations: fines ranging from 7.5 to 35 million euros, or from 2% to 7% of the responsible company’s global turnover. Lower penalty ranges may apply to small and medium-sized enterprises (SMEs).

Serious infringements include failing to adequately label AI-generated or manipulated content that depicts real or fictional individuals in misleading scenarios, a definition that essentially delimits the scope of deepfakes under the bill.

Controversies Surrounding Deepfake Technology

The bill’s introduction coincides with recent controversies involving deepfake content, most notably a misleading video disseminated by Spain’s Partido Popular (PP). The video, titled La isla de las corrupciones, showed imagery alluding to alleged corruption involving the ruling PSOE party and was withdrawn after public outcry.

In light of these events, the draft bill represents a proactive effort by the Spanish government to regulate AI technologies and protect citizens from potential abuses associated with deceptive AI practices.
