Spain’s New Bill Mandates Labelling of AI-Generated Content

Government Approves Draft Bill for Mandatory Labelling of AI-Generated Content in Spain

On March 12, 2025, Spain’s central government approved a draft bill aimed at regulating the ethical, inclusive, and beneficial use of artificial intelligence (AI). The proposal aligns national rules with existing EU regulation on the responsible use of AI.

Context of the Regulation

The new bill responds to the evolving landscape of digital rights, according to Óscar López, the minister for digital transformation and public administration. He said the regulation completes a comprehensive European shield of digital rights that protects citizens in today’s digital environment.

“This regulation is pioneering at both the national and international level,” López remarked, highlighting its purpose of harmonizing national legislation on emerging digital tools.

Key Features of the Draft Bill

The draft bill, officially titled the Draft Bill for the Ethical, Inclusive and Beneficial Use of Artificial Intelligence, is being processed through an expedited procedure. Many of the practices it addresses are already regulated at the EU level.

Notably, the European AI Act, which entered into force in August 2024, already regulates AI use by prohibiting systems that rely on deceptive techniques or manipulate individual behavior. This includes restrictions on:

  • Marketing or putting into service AI systems that exploit vulnerabilities based on age, disability, or social or economic situation.
  • Assessing individuals through scoring based on their social behavior or characteristics.
  • Using AI to predict criminal behavior solely based on personal profiles.
  • Expanding facial recognition databases indiscriminately.
  • Inferring emotions in workplace or educational settings.

Labelling AI-Generated Content

A critical aspect of the bill is the mandatory labelling of deepfakes. López stated, “With this draft bill, we identify risks and banned practices, such as subliminal techniques or biometric recognition in public spaces.”

The Spanish authorities tasked with enforcing compliance will include:

  • The Spanish Data Protection Agency for biometric-related cases.
  • The General Council of the Judiciary for matters linked to the justice system.
  • The Central Electoral Board for AI applications in electoral processes.
  • The Spanish AI Supervision Agency (AESIA) for overseeing other applications.

Penalties for Non-Compliance

The draft bill proposes severe penalties for serious violations, ranging from 7.5 million to 35 million euros, or between 2% and 7% of the responsible company’s global turnover. Lower penalties may apply to small and medium-sized enterprises (SMEs).
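
To put those figures in perspective, the following minimal Python sketch works out an indicative fine range for a hypothetical company. It assumes, as a guess modelled on the EU AI Act’s approach, that the turnover-based amount applies when it exceeds the fixed band; the final Spanish text may define the interaction between the two bands differently, and the function name and example turnover are illustrative only.

    # Illustrative only: the assumption that the turnover-based figure applies when it
    # is higher mirrors the EU AI Act's approach and may not match the final Spanish text.
    def serious_infringement_fine_range(global_turnover_eur: float) -> tuple[float, float]:
        """Return an indicative (minimum, maximum) fine range in euros."""
        fixed_min, fixed_max = 7_500_000, 35_000_000
        turnover_min, turnover_max = 0.02 * global_turnover_eur, 0.07 * global_turnover_eur
        # Assumed rule: take the higher of the fixed and turnover-based figures at each end.
        return max(fixed_min, turnover_min), max(fixed_max, turnover_max)

    # Example: a hypothetical company with 1 billion euros in global turnover.
    low, high = serious_infringement_fine_range(1_000_000_000)
    print(f"Indicative range: EUR {low:,.0f} to EUR {high:,.0f}")
    # -> Indicative range: EUR 20,000,000 to EUR 70,000,000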

Serious infringements include failing to adequately label AI-generated or manipulated content that depicts real or fictional individuals in misleading scenarios, which is essentially how the bill defines the scope of deepfakes.

Controversies Surrounding Deepfake Technology

The bill’s introduction coincides with recent controversies involving deepfake content, most notably a misleading video disseminated by Spain’s Partido Popular (PP). The video, titled La isla de las corrupciones, showed images referring to alleged corruption involving the ruling PSOE party and was withdrawn after public outcry.

In light of these events, this draft bill signifies a proactive approach by the Spanish government to regulate AI technologies and protect citizens from potential abuses associated with deceptive AI practices.
