YouTube Backs No Fakes Act to Combat Unauthorized AI Replicas

YouTube’s Support for the ‘No Fakes Act’

YouTube has recently announced its backing for the No Fakes Act, a legislative measure that aims to address the growing concerns surrounding unauthorized AI replicas. This initiative is spearheaded by Senators Chris Coons (D-DE) and Marsha Blackburn (R-TN), who are reintroducing the bill, officially titled the Nurture Originals, Foster Art, and Keep Entertainment Safe Act, or NO FAKES Act.

Overview of the Act

The NO FAKES Act seeks to standardize the rules governing AI-generated replicas of a person's likeness, covering their face, name, and voice. The legislation aims to empower individuals by giving them the authority to notify platforms like YouTube when they believe their likeness has been used without consent.

The bill itself is not new: versions were introduced in 2023 and 2024. The current iteration, however, has gained significant momentum with the endorsement of a major platform: YouTube.

YouTube’s Position

In a statement, YouTube emphasized the importance of finding a balance between protecting individuals’ rights and fostering innovation. The platform has stated that the act “focuses on the best ways to balance protection with innovation: putting power directly in the hands of individuals to notify platforms of AI-generated likenesses they believe should come down.”

With YouTube’s support, the bill has garnered additional backing from organizations such as SAG-AFTRA and the Recording Industry Association of America (RIAA). However, the legislation has faced resistance from civil liberties groups, notably the Electronic Frontier Foundation (EFF), which has criticized prior drafts of the bill for being overly broad.

Legal Implications

The 2024 version of the bill stipulates that online services, including YouTube, cannot be held liable for hosting unauthorized digital replicas if they promptly remove such content after receiving a notice. This exemption is crucial for platforms that serve as intermediaries for user-generated content.

Another key provision is that services designed explicitly for creating deepfakes would not qualify for this exemption and could still face liability, underscoring the need for platforms to ensure compliance with the new rules.
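To make the conditional structure of that safe harbor concrete, here is a minimal, purely illustrative Python sketch of the two conditions described above. The 48-hour removal window, the ReplicaNotice fields, and the platform_retains_safe_harbor function are assumptions invented for this example, not terms taken from the bill text.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

# Hypothetical policy window; the bill's actual removal deadline may differ.
REMOVAL_WINDOW = timedelta(hours=48)

@dataclass
class ReplicaNotice:
    """A takedown notice alleging an unauthorized digital replica."""
    content_id: str
    claimant: str
    received_at: datetime
    removed_at: datetime | None = None  # None until the content is taken down

def platform_retains_safe_harbor(notice: ReplicaNotice,
                                 service_is_deepfake_tool: bool) -> bool:
    """Return True if the hosting service keeps its liability exemption.

    Two conditions drawn from the article's summary of the 2024 draft:
    1. Services built specifically to produce deepfakes get no exemption.
    2. General hosting platforms keep the exemption only if they remove
       the flagged content promptly after receiving the notice.
    """
    if service_is_deepfake_tool:
        return False
    if notice.removed_at is None:
        return False
    return notice.removed_at - notice.received_at <= REMOVAL_WINDOW
```

The sketch simply encodes the article's description as a boolean check; the statute's real tests would turn on legal definitions rather than a fixed timestamp comparison.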

Free Speech and Liability Concerns

During a press conference announcing the reintroduction of the bill, Senator Coons said the updated legislation addresses free speech concerns and establishes caps on liability, measures intended to protect platforms while safeguarding individual rights.

Additional Legislative Support

YouTube has also expressed its support for the Take It Down Act, which aims to criminalize the publication of non-consensual intimate images, including those generated by AI. This act would also require social media platforms to implement quick removal processes for such images upon reporting.

This provision has been met with significant opposition from civil liberties organizations, as well as some groups focusing on non-consensual intimate image (NCII) issues. Despite this pushback, the Take It Down Act has made significant progress, having passed the Senate and advanced out of a House committee.

Technological Initiatives

In conjunction with legislative efforts, YouTube has announced an expansion of its pilot program for likeness management technology. This technology was initially introduced in collaboration with CAA to help creators detect unauthorized AI copies of themselves and request their removal.

Notable creators, such as MrBeast, Mark Rober, and Marques Brownlee, are now participating in this pilot program, showcasing YouTube’s commitment to protecting the rights and likenesses of its content creators.

This comprehensive approach underscores the necessity of safeguarding individual rights in the rapidly evolving landscape of digital technology, particularly as artificial intelligence continues to advance.
