AI Copyright Risks in the Era of Federal Regulation

Copyright Law Set to Govern AI Under Trump’s Executive Order

The legal landscape for artificial intelligence is entering a period of rapid consolidation. With President Donald Trump’s executive order in December 2025 establishing a national AI framework, the era of conflicting state-level rules may be drawing to a close.

However, this doesn’t signal a reduction in AI-related legal risk. Instead, it marks the beginning of a different kind of scrutiny—one centered not on regulatory innovation but on the most powerful legal instrument already available to federal courts: copyright law.

Emerging Legal Risks

The lesson emerging from recent AI litigation, most prominently Bartz v. Anthropic PBC, is that the greatest potential liability to AI developers doesn’t come from what their models generate but from how those models were trained and the provenance of the content used in that training.

As the federal government asserts primacy over AI governance, the decisive question will be whether developers can demonstrate that their training corpora were acquired lawfully, licensed appropriately (unless in the public domain), and documented thoroughly.

Inputs Are Key

To date, no U.S. court has held that an AI model’s outputs become infringing derivative works solely because the model was trained on copyrighted content. However, using an AI program to copy and incorporate protected elements of a specific work can still constitute infringement.

Courts have focused on two settled propositions:

  • Training on lawfully acquired materials can qualify as fair use. Both Bartz and Kadrey v. Meta Platforms, Inc. held that using legally obtained books in large-scale training is “quintessentially transformative,” a major factor courts consider in determining fair use.
  • Fair use collapses when the underlying material was unlawfully obtained. This distinction is crucial. When training data includes pirated works or copies acquired outside a licensing framework, the fair-use defense evaporates, leaving straightforward infringement under 17 U.S.C. §106.

Bartz as Blueprint

Judge William Alsup’s ruling in Bartz bifurcated the case with precision: Training on legally purchased or licensed works qualified as fair use, while training on pirated copies did not. The court found Anthropic’s use of purchased books “exceedingly transformative,” supporting fair use. However, its downloading of over seven million pirated books led to a damages trial.

The case’s significance crystallized when Alsup certified a class of 482,460 copyright holders whose works appeared in datasets allegedly downloaded from shadow libraries, transforming what might have been modest individual claims into an existential threat with potential liabilities exceeding $360 million.
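The scale of that figure follows directly from statutory-damages arithmetic. A minimal sketch, assuming one award per certified class member at the per-work range set out in 17 U.S.C. §504(c) (the $750 floor and $150,000 willful ceiling are the statutory figures; treating each class member as one infringed work is a simplifying assumption):

```python
# Back-of-the-envelope statutory-damages exposure for the Bartz class.
# Assumes one award per class member; §504(c) per-work dollar ranges.
CLASS_SIZE = 482_460       # copyright holders certified by Judge Alsup
STATUTORY_MIN = 750        # $ per work, non-willful statutory minimum
WILLFUL_MAX = 150_000      # $ per work, ceiling for willful infringement

floor = CLASS_SIZE * STATUTORY_MIN
ceiling = CLASS_SIZE * WILLFUL_MAX

print(f"Statutory minimum exposure: ${floor:,}")    # $361,845,000
print(f"Willful-infringement ceiling: ${ceiling:,}")  # $72,369,000,000
```

Even at the statutory floor, the class size alone pushes exposure past $360 million, which is why certification, rather than any single plaintiff's claim, is what made the case existential.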

Executive Order’s Impact

Trump’s executive order, often described as deregulatory, actually recentralizes AI-related legal risk. A unified federal standard means federal courts will dominate AI enforcement. The order’s creation of a Department of Justice AI litigation task force and its directive to the Department of Commerce to identify conflicting state laws ensure that future AI disputes will migrate exclusively to federal forums.

Moreover, copyright law, a mature body of doctrine already enforced in federal courts, will become central. The Copyright Act imposes strict liability and authorizes statutory damages without proof of actual harm, amplifying intellectual property exposure as state AI laws are preempted.

Licensed Training Data

Fair use isn’t a blanket excuse for opaque data acquisition. It protects only those who start with lawfully obtained copies, and it can’t cure pirated inputs or shadow-library corpora. Licensing, by contrast, is a complete defense to claims under the reproduction right most implicated in AI training. Training on licensed data eliminates the statutory-damage exposure tied to unlawful acquisition and preserves fair-use arguments for the training process itself.

Outlook

Trump’s executive order may limit the regulatory chaos created by competing state mandates, but it won’t eliminate AI risk. It will shift the arena. With national preemption will come increased reliance on copyright law, making it essential for companies to demonstrate that their models rest on lawfully obtained and licensed data.
