Spain’s New Law Targets Unlabelled AI-Generated Content with Hefty Fines

Spain’s Legislation on AI-Generated Content

On March 11, 2025, Spain’s government approved a bill aimed at imposing significant penalties on companies that fail to properly label content generated by artificial intelligence (AI). This legislation is part of a broader effort to combat the misuse of AI technologies, particularly in relation to deepfakes.

Key Provisions of the Bill

The newly proposed legislation sets forth fines of up to €35 million (approximately $38 million) or 7% of a company’s global annual turnover for non-compliance with labelling requirements. Non-compliance is categorized as a serious offence, highlighting the government’s commitment to transparency and accountability in the use of AI.

Alignment with EU Regulations

This bill aligns closely with the European Union’s AI Act, which establishes strict transparency obligations for AI systems deemed high-risk. Digital Transformation Minister Oscar Lopez emphasized that AI, while a powerful tool for societal improvement, also poses risks by enabling the spread of misinformation and undermining democratic processes.

Focus on Transparency and Safety

The legislation aims to enhance the transparency of AI-generated content and targets harmful practices. For instance, the bill directly addresses the use of subliminal techniques, meaning subtle sounds and images intended to manipulate vulnerable groups. Minister Lopez cited examples such as chatbots that could exploit individuals with gambling addictions, or toys that might encourage children to engage in dangerous activities.

Oversight and Enforcement

The enforcement of these new regulations will fall under the jurisdiction of a newly established AI supervisory agency, AESIA (the Agencia Española de Supervisión de la Inteligencia Artificial). This agency will oversee the implementation of the rules, except in cases related to data privacy, crime, elections, and other specific sectors, which will be managed by the relevant regulatory bodies.

Broader Implications for AI Use

Spain’s proactive stance on AI regulation comes in the wake of widespread concerns regarding the societal implications of AI technologies. Since the launch of ChatGPT in late 2022, regulators have prioritized ensuring that AI systems do not harm society, marking a significant shift in how governments approach technology governance.

As AI-generated content becomes increasingly prevalent, Spain’s legislative measures could serve as a model for other nations grappling with similar challenges. The combination of stringent penalties and a focus on transparency may pave the way for a more responsible and ethical use of AI technologies in the future.
