FTC’s Limited Authority to Preempt State AI Laws

Can the Trump administration preempt state consumer protection laws governing AI? The Federal Trade Commission (FTC) will soon try, but the agency’s authority to preempt state laws is limited.

Last December, President Trump issued an Executive Order (EO) titled “Ensuring a National Policy Framework for Artificial Intelligence.” The EO directs the Chairman of the FTC to issue a policy statement “explain[ing] the circumstances under which State laws that require alterations to the truthful outputs of AI models are preempted by the Federal Trade Commission Act’s prohibition on engaging in deceptive acts or practices affecting commerce.” Pursuant to the order, the agency has until March 11 to issue the statement.

Understanding the FTC Act

Section 5 of the FTC Act prohibits unfair or deceptive acts or practices in or affecting commerce. The EO focuses on deception, defined as a misrepresentation, omission, or other practice that misleads a consumer acting reasonably in the circumstances, to the consumer’s detriment. For example, false advertising misrepresents the quality or usefulness of a product, tricking consumers into buying it.

The EO claims some states have enacted laws “requiring entities to embed ideological bias within [AI] models.” This follows an earlier order banning so-called “woke AI” from the federal government, meaning models with built-in “social agendas” that “distort the quality and accuracy of the output.” Accordingly, the new order asserts that state AI laws forcing companies to train their models to lie to or mislead users on politically charged topics compel inherently deceptive conduct.

Ironically, the EO itself may be deceptive: no such laws are currently in effect. Grievances about woke AI (notably, Gemini’s generation of an all-black roster of founding fathers) reflect training decisions by the AI companies themselves, not state laws. The First Amendment protects such design choices, if made independently of government, from both state and federal regulation.

Challenges to Preemption

Any preemption effort must clear several hurdles. The first is federalism. The doctrine of federal preemption flows from the US Constitution’s Supremacy Clause, which mandates that federal law is “the supreme Law of the Land.” Accordingly, federal law supersedes—i.e., preempts—conflicting state laws.

The federal government can preempt state law in several ways. The simplest is for a federal statute or regulation to contain explicit preemptive language. The federal government can also impliedly preempt state law by passing a federal law that occupies an entire field of regulation, like nuclear safety.

The FTC Act does neither: Section 5 neither explicitly preempts state law nor occupies the entire field of consumer protection regulation. To the contrary, every state has its own consumer protection laws, and the FTC frequently collaborates with states on enforcement.

Conflict Preemption

That leaves the FTC with only one option: conflict preemption. Federal law preempts state law when it is impossible to comply with both. Section 5 prohibits “deceptive acts or practices in or affecting commerce.” In theory, a state law that required companies to deceive consumers would conflict with Section 5 because it would be impossible to both abstain from and engage in deceptive business practices.

However, courts are unlikely to accept Section 5 as a basis for conflict preemption. When assessing preemption claims, the Supreme Court follows a “presumption against preemption,” under which federal law does not supersede state law “unless that was the clear and manifest purpose of Congress.” Section 5, in contrast, “was deliberately framed in general terms” and provides no specific prescriptive rule.

Rulemaking Requirements

Thus, to preempt state authority over AI, the FTC must issue a rule, complying with both the Administrative Procedure Act and its own heightened rulemaking procedures under the Magnuson-Moss Act. Those procedures require an advance notice of proposed rulemaking that precedes a notice of proposed rulemaking, each accompanied by adequate opportunities for public comment.

Next, the agency must issue a preliminary regulatory analysis, followed by a final regulatory analysis, and will likely have to hold hearings on disputed issues of material fact. The FTC must also show that the deceptive conduct in question is “prevalent” by either issuing cease and desist orders or pointing to “information” that “indicates a widespread pattern.” The process could easily take multiple years.

Specific Examples and Limitations

Any rulemaking undertaken pursuant to the EO would be required to specify how state AI laws might conflict with Section 5. The EO itself points to only one example: Colorado’s Artificial Intelligence Act, which has not yet taken effect and whose practical impact remains unclear.

According to the EO, Colorado’s prohibition on algorithmic discrimination could “force AI models to produce false results in order to avoid a ‘differential treatment or impact’ on protected groups.” However, that provision aims to prevent AI from replicating existing bias in training data sets, like “historical decisions about hiring or lending.” Colorado would likely argue that its law, far from forcing “false results,” instead requires models to avoid replicating existing distortions.

Ultimately, the FTC’s ability to preempt state AI laws is limited: doing so would require a lengthy, complex rulemaking process. A policy statement simply will not suffice.
