Copy That: Secondary Liability in the Age of AI
Artificial intelligence (AI) simplifies the processes of creating, remixing, and distributing content at scale. That speed accounts for much of its value, but it also introduces intellectual property (IP) risk. The risk extends beyond the end users generating AI outputs; it can also reach the companies that develop, host, integrate, or deploy these tools.
Legal Precedent: MGM Studios Inc. v. Grokster, Ltd.
A crucial legal reference point is MGM Studios Inc. v. Grokster, Ltd., 545 U.S. 913 (2005), a landmark case on secondary liability. Grokster distributed peer-to-peer software that had lawful uses, and the question before the Court was whether the company had actively encouraged infringement. The Supreme Court focused on inducement, holding that a product, even one capable of lawful use, can still expose its distributor to secondary liability if the company's messaging, product choices, or business model show that it was designed or promoted to foster infringement.
AI Models and Inducement
This principle applies to today’s AI models, which are often general-purpose. Disputes frequently arise based on what the product encourages users to do. When credible warning signs appear, the spotlight shifts to how the company responds.
Assessing AI Secondary Liability Claims
When evaluating how to frame a claim of secondary liability in AI, consider the following questions:
- What are we encouraging, even indirectly? Marketing materials, tutorials, and example prompts can serve as implicit “how-to” guides. If templates are designed to closely replicate branded characters, a plaintiff could argue that the product promotes infringement.
- Can we tell a strong lawful-use story? It’s vital that “substantial non-infringing use” is genuine and central to the product. A tool primarily used for internal drafting and meeting summaries is easier to defend than one aimed at rewriting paywalled articles.
- What do we know, and when did we know it? Credible notices, repeated complaints, and internal metrics indicating obvious infringement patterns can undermine a claimed lack of knowledge. At some point, inaction starts to look like a decision in itself.
- How much control do we have, and are we monetizing the risk? If a company can supervise usage through accounts and moderation, and profits from high-volume usage, claimants may argue that the company had both the ability to intervene and a financial motive not to act.
Maintaining a Defensible Posture
To uphold a defensible stance, companies should implement documented, repeatable governance throughout the AI lifecycle. This includes:
- Traceability of training data
- Policies for customer fine-tuning on third-party content
- Monitoring output patterns that suggest replication
- A clear process for managing repeat users who make high-risk requests
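As one illustration of the last item, a repeat-user process can be backed by something as simple as a per-account counter of high-risk flags, with escalation to human review once a threshold is crossed. The sketch below is a hypothetical example; the class name, threshold, and escalation rule are all assumptions for illustration, not legal guidance or a production design.

```python
from collections import defaultdict

# Assumed escalation threshold for this sketch; real policies would
# likely weigh request content, recency, and prior review outcomes.
HIGH_RISK_THRESHOLD = 3

class RepeatRequestTracker:
    """Hypothetical tracker for accounts that repeatedly trigger
    high-risk content flags (e.g., requests to replicate branded
    characters or paywalled text)."""

    def __init__(self, threshold: int = HIGH_RISK_THRESHOLD):
        self.threshold = threshold
        self.flag_counts = defaultdict(int)

    def record_flag(self, account_id: str) -> bool:
        """Record one high-risk request for the account; return True
        when the account should be escalated for manual review."""
        self.flag_counts[account_id] += 1
        return self.flag_counts[account_id] >= self.threshold

tracker = RepeatRequestTracker()
tracker.record_flag("acct-42")             # first flag, below threshold
tracker.record_flag("acct-42")             # second flag, still below
escalate = tracker.record_flag("acct-42")  # third flag crosses threshold
```

The point of a documented mechanism like this is less the code than the record it creates: it shows the company noticed a pattern and had a defined response, rather than learning of repeat misuse and doing nothing.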
Furthermore, product features, contract language, and marketing materials should match what the tool actually does. The objective is to be able to show that foreseeable risks were anticipated, that reasonable design and operational choices were made to mitigate them, and that improvements were made based on what was observed in production.