The First Major US Law to Fight AI Harms and Deepfake Abuse

The Take It Down Act, passed by the House of Representatives on April 28, 2025, represents a significant step in addressing AI-driven harm and the abuse of deepfake technology. The bipartisan bill criminalizes non-consensual deepfake pornography and requires online platforms to remove such content within 48 hours of receiving a takedown notice.

Background and Context

The bill emerged in response to the growing prevalence of AI-generated illicit imagery, which has surged alongside advances in generative AI tools. The legislation is expected to be signed into law by President Trump, and its overwhelming passage marks a rare moment of consensus in a divided Congress.

Support for the bill has been widespread, with backing from both conservative and progressive lawmakers. It passed the House by a remarkable 409-2 vote, a clear signal of its bipartisan appeal.

Championing the Cause

The Take It Down Act was largely inspired by the experiences of two teenagers, Elliston Berry and Francesca Mani, who became victims of deepfake abuse. In October 2023, they discovered that AI software had been used to create fake nude images of them and their classmates, leading to a public outcry for action.

When traditional avenues for redress failed, Senator Ted Cruz took up their cause, drafting the legislation that would ultimately become the Take It Down Act. This initiative represents not just a political victory, but a profound acknowledgment of the trauma faced by victims of deepfake technology.

Legislative Journey

The path to passing the Take It Down Act was fraught with challenges.
