Legal Challenges of Advancing Generative AI in Entertainment

The Generative Slate: As Digital Replicas Improve, Legal Issues Grow

This article explores the use of generative AI in the production and distribution of content, highlighting the rapid advancements and the accompanying legal challenges.

The Rise of Generative AI

Life may imitate art, but AI does a remarkable job of imitating both. We have reached an inflection point in the capacity of generative AI to create convincing audiovisual content. Although many of these AI-generated works may not yet appear entirely realistic, they are convincing enough to challenge the notion that movies or concerts necessitate actual human performance.

Notable Examples in the Entertainment Industry

One significant development involved the Irish filmmaker Ruairi Robinson, who used ByteDance’s Seedance 2.0 AI video generator to produce a hyper-realistic 15-second clip of Tom Cruise and Brad Pitt engaged in a rooftop fistfight. The clip elicited both awe and unease within the entertainment industry.

In the music sector, AI company Codible Ventures voice-cloned Arijit Singh, the most followed artist on Spotify, without his permission. After the cloned voice appeared in advertising, Singh petitioned the Bombay High Court for an injunction to protect his personality rights.

Concerns Over Deepfakes

The Pitt/Cruise video has ignited concerns among actors, studios, and other filmmakers, marking a shift from theoretical risks to tangible professional anxieties. The performers’ union SAG-AFTRA condemned the technology as a threat to its members’ rights, and Charles Rivkin, chairman of the Motion Picture Association, expressed similar concerns.

AI-generated music is causing similar disruptions, with the Recording Industry Association of America suing AI music generators Suno and Udio for mass infringement of copyrighted sound recordings. Recently, a coalition of artist groups launched a “Say No to Suno” campaign, accusing the platform of diluting royalty pools through the proliferation of AI-generated tracks.

Broader Implications of Deepfakes

Deepfakes pose risks beyond entertainment, particularly in sexually and politically exploitative contexts. In January 2024, explicit AI-generated images of Taylor Swift circulated widely on social media, prompting outrage and calls for new laws against deepfake pornography.

Additionally, an AI-generated robocall mimicking President Biden’s voice misled New Hampshire Democrats about voting in the primary election. This incident prompted the FCC to impose a $6 million fine on the responsible political consultant.

Legal Challenges and New Statutes

While the examples involving Pitt, Cruise, and Singh raise concerns about labor substitution and intellectual property violations, the Swift images and the Biden robocall illustrate more severe threats to personal dignity and electoral integrity. The law must accommodate creative uses of AI while addressing these risks.

As courts struggle to apply existing legal frameworks to generative AI, it becomes evident that traditional copyright laws are ill-suited for dealing with statistical learning from vast datasets. The right of publicity, which assumes discrete acts of appropriation, fails to address the synthesis of identity traits by generative AI models.

Legislative Responses

In response to perceived gaps in existing laws, a wave of new statutes targeting deepfakes has emerged. These laws primarily address sexual content, election manipulation, and exploitation of celebrity performances. Although these developments reflect the unique risks posed by generative AI, they may conflict with First Amendment protections for creative usage and intermediary liability protections for platforms hosting user-generated content.

Implications for the Legal System

As deepfakes undermine the reliability of audiovisual evidence, the legal system must adapt. This may require the introduction of new authentication norms, technical watermarking regimes, and rebuttable presumptions regarding synthetic content. However, any mandatory measures must withstand First Amendment scrutiny.
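To make the authentication idea concrete, here is a minimal toy sketch of cryptographic provenance checking: a publisher signs the bytes of a media file with a secret key, and a verifier holding the same key can later detect tampering. This is only an illustration of the underlying principle; real provenance standards such as C2PA use public-key signatures and embedded manifests, and the function and key names below are invented for the example.

```python
import hashlib
import hmac

def sign_media(media_bytes: bytes, key: bytes) -> str:
    """Return an HMAC-SHA256 tag over the raw media bytes."""
    return hmac.new(key, media_bytes, hashlib.sha256).hexdigest()

def verify_media(media_bytes: bytes, key: bytes, tag: str) -> bool:
    """Recompute the tag and compare in constant time.

    False means the media was altered after signing (or was never signed
    with this key) -- the kind of check a court-admissibility rule or
    platform policy might rely on.
    """
    expected = sign_media(media_bytes, key)
    return hmac.compare_digest(expected, tag)

key = b"publisher-secret"           # hypothetical signing key
original = b"\x00frame-data\x01"    # stand-in for real audiovisual bytes
tag = sign_media(original, key)

print(verify_media(original, key, tag))         # True: authentic
print(verify_media(original + b"x", key, tag))  # False: tampered
```

A symmetric key keeps the sketch short; in practice the signer and verifier are different parties, which is why deployed provenance schemes rely on public-key certificates rather than a shared secret.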

Future Considerations

The rapid advancements in generative AI challenge foundational legal assumptions. As courts apply old laws to new technologies and legislatures draft new regulations, it will be crucial for creators in the entertainment industry to:

  • Track and sample technological developments: Stay aware of changes in technology and explore ways to supplement human creativity with generative AI while respecting others’ rights.
  • Protect intellectual property: Register copyrights and trademarks, remain vigilant against infringement, and be prepared for increased enforcement efforts.
  • Address AI rights in contracts: Review agreements to clarify the use of voices, likenesses, and performances in generative AI systems, ensuring explicit provisions regarding consent and ownership.
  • Follow changes in the law: Keep updated on ongoing disputes and the evolving landscape of deepfake-specific legislation.

While the future of technology and law in this domain remains uncertain, staying informed about both new technologies and legal frameworks will be essential for protecting interests amidst constant change.
