Laws in Europe Combatting Deepfakes

Which European Countries Have Laws Against Deepfakes?

As artificial intelligence (AI) technology rapidly evolves, deepfakes pose significant challenges across many domains, particularly in the areas of misinformation and privacy rights. Countries across Europe are taking legislative action to combat the misuse of deepfakes, introducing new laws aimed at protecting individuals and ensuring accountability among tech companies.

Denmark’s Approach

Denmark has emerged as a pioneer with a bill that grants individuals copyright over their own likenesses. The legislation aims to curb the proliferation of deepfake videos by allowing individuals to control how their images and identities are used in digital media. The Danish government has stated that the bill, agreed upon by all major parties, will criminalize the sharing of deepfakes and other digital imitations of a person’s characteristics.

Culture Minister Jakob Engel-Schmidt emphasized that the law sends a clear message to citizens about their rights over their own bodies, voices, and facial features. By treating deepfakes as a form of misinformation, Denmark seeks to protect its populace from the potential harms of AI-generated content.

The European Union’s AI Act

The European Union is also addressing deepfakes through its comprehensive AI Act, which sorts AI systems into four risk levels: minimal, limited, high, and unacceptable. Deepfakes fall under the limited-risk category, which carries transparency obligations.

Under the AI Act, companies that produce deepfake content must label it so that viewers know it is AI-generated. This includes watermarking videos and disclosing the training datasets used to build the underlying models. Non-compliance can draw hefty fines of up to €15 million or 3% of global turnover, with penalties escalating to €35 million or 7% for more severe breaches.
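To make the two penalty tiers concrete, here is a minimal sketch of the fine ceilings for a hypothetical company. It assumes, as a simplification, that the applicable cap is the higher of the fixed amount and the turnover-based percentage; the €2 billion turnover figure and the `fine_ceiling` helper are illustrative, not taken from the Act.

```python
# Illustrative sketch only: compares the two fine tiers described above.
# Assumption (not stated in this article): the cap is the higher of the
# fixed amount and the percentage of annual global turnover.

def fine_ceiling(global_turnover_eur: float, severe_breach: bool = False) -> float:
    """Return the maximum fine in euros for a given annual global turnover."""
    if severe_breach:
        fixed_cap, turnover_share = 35_000_000, 0.07  # €35 million or 7%
    else:
        fixed_cap, turnover_share = 15_000_000, 0.03  # €15 million or 3%
    return max(fixed_cap, turnover_share * global_turnover_eur)


if __name__ == "__main__":
    turnover = 2_000_000_000  # hypothetical €2 billion annual global turnover
    print(f"Transparency breach cap: €{fine_ceiling(turnover):,.0f}")        # €60,000,000
    print(f"Severe breach cap:       €{fine_ceiling(turnover, True):,.0f}")  # €140,000,000
```

For a company of that size, the percentage-based cap exceeds the fixed amount in both tiers, which is why turnover, rather than the headline euro figure, drives the exposure of large platforms.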

Directive on Violence Against Women

The EU is also tackling deepfakes through its directive on violence against women, which criminalizes the non-consensual creation and distribution of digitally manipulated sexual content. This directive explicitly prohibits the production of deepfakes depicting individuals in sexual scenarios without their consent. While it does not stipulate specific penalties, it allows Member States to determine appropriate sanctions, reflecting a broader commitment to combating gender-based violence and protecting personal dignity.

France’s Digital Spaces Law

In 2024, France updated its criminal code to prohibit the sharing of AI-generated content, including deepfakes, without the consent of the individual portrayed. This law mandates that any reshared content must be clearly identified as AI-generated. Violators of this law may face severe penalties, including up to one year in prison and a €15,000 fine, with increased penalties for online distribution of such content.

Furthermore, France has instituted a specific ban on pornographic deepfakes, with violators facing up to three years in prison and a €75,000 fine. The law also empowers France’s audiovisual regulator, Arcom, to enforce compliance by ensuring platforms remove illicit content effectively.

The UK’s Legislative Measures

The United Kingdom has introduced multiple laws targeting deepfake pornography, including amendments to the Data (Use and Access) Bill. This legislation specifically addresses the creation of deepfake images intended for sexual gratification or to cause distress. Under the Sexual Offences Act, offenders who create such content can face an unlimited fine or imprisonment for up to two years.

Additionally, the UK’s Online Safety Act mandates that platforms proactively remove non-consensual sexual images, including deepfakes, from their sites. Companies that fail to comply with these regulations risk fines of up to 10% of their global revenue, reflecting a serious commitment to combating digital abuse.

Despite these measures, experts caution that the UK’s laws do not fully criminalize the creation of deepfakes, leaving victims vulnerable to harm even if the content is not publicly shared. Legal scholars argue that further action is needed to regulate the development and availability of AI tools capable of creating deepfakes.

Conclusion

As deepfake technology continues to evolve, European nations are progressively enacting laws aimed at safeguarding individual rights and promoting responsible AI usage. Driven by a recognition of the potential dangers posed by deepfakes, these legal frameworks seek to balance innovation with accountability while addressing the urgent need to protect citizens from the misuse of AI technologies.
