The First Major US Law to Fight AI Harms and Deepfake Abuse

The Take It Down Act, passed by the House of Representatives on April 28, 2025, marks a significant step in addressing AI-driven harm and deepfake abuse. The bipartisan bill criminalizes the non-consensual publication of intimate imagery, including AI-generated deepfakes, and requires online platforms to remove such content within 48 hours of receiving a valid removal request.

Background and Context

The bill emerged in response to the growing prevalence of AI-generated non-consensual intimate imagery, which has surged alongside advances in generative AI tools. President Trump is expected to sign the legislation into law, marking a rare moment of consensus in a divided Congress.

Support for the bill spanned the political spectrum, drawing backing from both conservative and progressive lawmakers. It passed the House by a vote of 409-2, a clear signal of its bipartisan appeal.

Championing the Cause

The Take It Down Act was largely inspired by the experiences of two teenagers, Elliston Berry and Francesca Mani, who were victims of deepfake abuse. In October 2023, each discovered that AI software had been used to create fake nude images of them and their classmates, prompting a public outcry for action.

When traditional avenues for redress failed, Senator Ted Cruz took up their cause, drafting the legislation that would ultimately become the Take It Down Act. This initiative represents not just a political victory, but a profound acknowledgment of the trauma faced by victims of deepfake technology.

Legislative Journey

The path to passing the Take It Down Act was fraught with challenges, particularly
