AI Regulation: States Lead as Federal Oversight Shifts

The AI Enforcement Seesaw: Federal Retreat Meets State Advance

On December 19, New York Governor Kathy Hochul signed the RAISE Act, making New York the first state to enact major AI safety legislation following President Trump’s December 11 executive order calling for federal preemption of state AI laws. Three days later, the FTC voted 2-0 to vacate its 2024 consent order against Rytr LLC, an AI writing tool, explicitly citing the Trump Administration’s AI Action Plan.

This juxtaposition captures a new regulatory reality for in-house counsel at companies deploying AI: a federal pullback does not equate to regulatory relief. Instead, states are stepping in, and compliance obligations are multiplying, not simplifying.

Federal Enforcement Is Narrowing—But Not Disappearing

The Rytr decision signals a significant shift in the FTC’s approach to AI enforcement under new leadership. The original 2024 complaint, brought under former Chair Lina Khan, alleged that Rytr’s AI review-generation tool could produce false reviews but did not claim that anyone actually posted fake reviews using the tool. The new FTC, led by Chair Andrew Ferguson, found this insufficient.

The agency’s reasoning is instructive. In vacating the order, the Commission stated that the original complaint “contains no allegations that Rytr itself created deceptive marketing material, only that its customers might have used its tool to do so.” Bureau of Consumer Protection Director Christopher Mufarrige put it bluntly: “Condemning a technology or service simply because it potentially could be used in a problematic manner is inconsistent with the law and ordered liberty.”

This represents a doctrinal shift from potential harm to actual harm as the threshold for AI enforcement. Under this framework, neutral AI tools—those with legitimate uses alongside potential for misuse—face a higher bar for FTC action. The agency will require evidence that the tool was actually used to deceive consumers, not merely that it could be.

However, this does not mean deregulation across the board. On the same day it vacated Rytr, the FTC sent warning letters to ten companies regarding fake reviews under the Consumer Review Rule. The message is clear: the FTC will still act on consumer deception, but with a higher evidentiary bar for AI-specific theories and less appetite for speculative harms. Companies making affirmative misrepresentations about their AI capabilities—the AI washing cases—remain in the enforcement crosshairs.

States Aren’t Waiting for Federal Resolution

As federal enforcement recalibrates, states are accelerating their regulatory efforts. New York’s RAISE Act requires frontier AI developers to publish safety protocols and to report safety incidents to the state within 72 hours of determining that an incident has occurred. The law creates a new oversight office within the Department of Financial Services, with violations carrying penalties of up to $1 million for a first offense and $3 million for subsequent violations. The law takes effect on January 1, 2027.

New York joins California, which enacted its Transparency in Frontier AI Act (effective January 2026) with similar developer transparency requirements. Together, these laws create a potential bicoastal de facto standard for frontier AI development—requirements that apply regardless of what happens in federal preemption litigation.

The state activity extends well beyond these flagship laws. Texas’s Responsible AI Governance Act took effect in July 2025, establishing governance requirements for AI used in consequential decisions. Colorado’s AI Act becomes effective in February 2026, requiring deployers to use reasonable care to avoid algorithmic discrimination. Additionally, on December 19—the same day Hochul signed the RAISE Act—nearly two dozen state attorneys general sent a letter to the FCC urging it not to preempt state AI laws as contemplated by the Trump executive order.

The federal-state tension is intensifying, not resolving. Until courts rule on preemption challenges, state laws remain enforceable.

What Deployers Should Do Now

For companies using AI systems—as opposed to building them—four priorities emerge from this regulatory moment:

  • Continue vendor due diligence. Federal enforcement may be narrowing, but state enforcement is not. Your AI vendors’ compliance posture matters—perhaps more than before, given the patchwork of state requirements. When evaluating vendors, ask specifically about their state-law compliance programs for Texas, Colorado, New York, and California.
  • Map your state exposure. Which states’ laws apply to your operations? The answer depends on where you operate, where your customers are, and where decisions affecting consumers are made. Inventory your obligations before the next compliance deadline arrives—Colorado’s February 2026 effective date is closer than it appears.
  • Update incident response procedures. New York’s 72-hour reporting requirement for safety incidents is aggressive. If your AI vendor experiences an incident, your internal workflows need to support rapid assessment and notification. This requires defined escalation paths, pre-drafted notification templates, and clear authority to make disclosure decisions under time pressure.
  • Review vendor contracts. Do your AI vendor agreements include state-law compliance representations? Incident notification obligations? Audit rights that would let you verify compliance? If the contract predates the current wave of state AI laws, the answer is likely no. Consider whether amendments are warranted.

The regulatory landscape hasn’t simplified—it’s bifurcated. Companies that assumed federal preemption would create breathing room may find the opposite: a narrower federal enforcement theory paired with active state enforcement creates compliance complexity, not relief. The prudent approach is to treat state requirements as durable obligations that will persist regardless of how federal preemption battles resolve.

The seesaw isn’t balanced. It’s in motion—and it’s moving toward the states.
