Spain’s Bold Move: Stricter Regulations on AI and Deepfakes

Spain Approves AI Bill With Heavy Fines for Unlabeled Deepfakes

Spain has become one of the first European Union member states to implement the bloc's AI Act, approving a new AI bill that sets out significant penalties for unlabeled AI-generated content, particularly deepfakes.

The bill, which addresses various malicious AI practices, specifies that the failure to properly label any AI-generated or manipulated content that depicts real or nonexistent individuals is considered a serious infringement. According to Spain’s Ministry of Digital Transformation, such content must be identified as AI-generated “in a clear and distinguishable manner no later than the time of the first interaction or exposure,” aligning with the regulations set forth in the EU AI Act.

EU AI Act Overview

Initially introduced in 2021, the EU AI Act underwent extensive deliberation before being passed in March 2024. It came into force in member states in August 2024 and will be fully applicable by August 2, 2026, allowing entities time to comply with its requirements.

The Act establishes risk-based rules that AI developers and deployers must follow for specific AI applications and prohibits the commercialization of certain AI uses. One notable provision bars law enforcement from using biometric data to train algorithms for criminal profiling, a significant step toward minimizing bias in AI technologies. However, exemptions for national security and border control agencies have raised concerns about how security is balanced against individual rights.

Key Provisions of the Spanish AI Bill

The Spanish AI bill stipulates fines of up to €35 million or 7% of a company’s global annual turnover for improper labeling of AI content. It also provides for fines ranging from €500,000 to €7.5 million, or between 1% and 2% of global turnover, for failing to implement human supervision of AI systems that use biometrics in various industrial applications.
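For illustration only, the sketch below shows how a cap of the form “a fixed amount or a percentage of global turnover” can be computed, assuming the Spanish bill follows the EU AI Act’s “whichever is higher” convention for its top tier; the function name, parameters, and sample turnover figure are hypothetical and not drawn from the bill itself.

```python
# Minimal sketch of a "fixed amount or percentage of global turnover" fine cap.
# Assumption (borrowed from the EU AI Act's wording): the higher of the two applies.

def fine_cap(global_turnover_eur: float,
             fixed_cap_eur: float = 35_000_000,
             turnover_pct: float = 0.07) -> float:
    """Return the maximum applicable fine for a given global annual turnover."""
    return max(fixed_cap_eur, turnover_pct * global_turnover_eur)

# Hypothetical example: a company with €1 billion in global annual turnover.
if __name__ == "__main__":
    turnover = 1_000_000_000
    print(f"Maximum fine: €{fine_cap(turnover):,.0f}")  # prints €70,000,000
```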

AI and Deepfake Regulation in India

In contrast, AI regulation in India remains inconsistent, with the government oscillating between establishing a comprehensive AI framework and maintaining a light-touch regulatory approach. Recently, the Indian government proposed the creation of an AI Governance Board to oversee and authorize AI applications, but deepfake regulation has not been adequately addressed.

During the most recent general and state elections, numerous political figures and parties misused deepfakes in misinformation campaigns. In response, the Indian government reiterated the existing IT Rules, with some officials suggesting that dedicated deepfake legislation may follow if deemed necessary.

This development in Spain highlights the increasing global scrutiny and regulatory efforts aimed at managing the implications of AI technology, particularly concerning deepfake content and its potential misuse in various contexts.
