The Generative Slate: As Digital Replicas Improve, Legal Issues Grow
This article examines the use of generative AI in the production and distribution of content, highlighting the technology’s rapid advances and the legal challenges that follow them.
The Rise of Generative AI
Life may imitate art, but AI does a remarkable job of imitating both. We have reached an inflection point in the capacity of generative AI to create convincing audiovisual content. Although many AI-generated works do not yet appear entirely realistic, they are convincing enough to challenge the assumption that movies or concerts require actual human performance.
Notable Examples in the Entertainment Industry
One significant development involved the Irish filmmaker Ruairi Robinson, who used ByteDance’s Seedance 2.0 AI video generator to produce a hyper-realistic 15-second clip of Tom Cruise and Brad Pitt trading blows in a rooftop fistfight. The clip elicited both awe and unease within the entertainment industry.
In the music sector, the AI company Codible Ventures cloned the voice of Arijit Singh, the most-followed artist on Spotify, without his permission. The unauthorized use of his voice and likeness in advertising led Singh to petition the Bombay High Court for an injunction protecting his personality rights.
Concerns Over Deepfakes
The Pitt/Cruise video has ignited concerns among actors, studios, and other filmmakers, marking a shift from theoretical risks to tangible professional anxieties. SAG-AFTRA, the performers’ union, condemned the technology as a threat to its members’ rights, and Charles Rivkin, chairman of the Motion Picture Association, voiced similar concerns.
AI-generated music is causing similar disruptions, with the Recording Industry Association of America suing AI music generators Suno and Udio for mass infringement of copyrighted sound recordings. Recently, a coalition of artist groups launched a “Say No to Suno” campaign, accusing the platform of diluting royalty pools through the proliferation of AI-generated tracks.
Broader Implications of Deepfakes
Deepfakes pose risks beyond entertainment, particularly in sexually and politically exploitative contexts. In January 2024, explicit AI-generated images of Taylor Swift circulated widely on social media, prompting outrage and calls for new laws against deepfake pornography.
Additionally, an AI-generated robocall mimicking President Biden’s voice discouraged New Hampshire Democrats from voting in the state’s primary election. The incident prompted the FCC to impose a $6 million fine on the political consultant responsible.
Legal Challenges and New Statutes
While the examples involving Pitt, Cruise, and Singh raise concerns about labor substitution and intellectual property violations, they also point to graver threats to personal dignity and electoral integrity. The law must accommodate creative uses of AI while addressing these risks.
As courts struggle to apply existing legal frameworks to generative AI, it becomes evident that traditional copyright laws are ill-suited for dealing with statistical learning from vast datasets. The right of publicity, which assumes discrete acts of appropriation, fails to address the synthesis of identity traits by generative AI models.
Legislative Responses
In response to perceived gaps in existing law, a wave of new statutes targeting deepfakes has emerged. These laws primarily address sexual content, election manipulation, and the exploitation of celebrity performances. Although they reflect the distinctive risks posed by generative AI, they may conflict with First Amendment protections for creative uses and with the liability protections afforded to platforms hosting user-generated content.
Implications for the Legal System
As deepfakes undermine the reliability of audiovisual evidence, the legal system must adapt. This may require the introduction of new authentication norms, technical watermarking regimes, and rebuttable presumptions regarding synthetic content. However, any mandatory measures must withstand First Amendment scrutiny.
Future Considerations
The rapid advancements in generative AI challenge foundational legal assumptions. As courts apply old laws to new technologies and legislatures draft new regulations, it will be crucial for creators in the entertainment industry to:
- Track and experiment with technological developments: Stay aware of changes in the technology and explore ways to supplement human creativity with generative AI while respecting others’ rights.
- Protect intellectual property: Register copyrights and trademarks, remain vigilant against infringement, and be prepared for increased enforcement efforts.
- Address AI rights in contracts: Review agreements to clarify the use of voices, likenesses, and performances in generative AI systems, ensuring explicit provisions regarding consent and ownership.
- Follow changes in the law: Keep updated on ongoing disputes and the evolving landscape of deepfake-specific legislation.
While the future of technology and law in this domain remains uncertain, staying informed about both new technologies and legal frameworks will be essential for protecting interests amidst constant change.