States at Risk: The Impact of a Federal AI Moratorium

The State AI Laws Likeliest To Be Blocked by a Moratorium

In the coming weeks, the United States Senate is expected to ramp up consideration of a sprawling budget bill that could block states from enforcing artificial intelligence regulations for a decade. This provision, which has faced significant opposition from state lawmakers and advocacy groups, was approved by House Republicans last month.

The primary goal of this measure is to eliminate what is perceived as a cumbersome patchwork of AI regulations across the nation that could hinder innovation. However, critics warn that the federal moratorium may preempt a wide array of existing regulations, including those concerning AI in healthcare, algorithmic discrimination, deepfakes, and online child abuse.

Legal experts have cautioned that significant uncertainty remains regarding which specific laws would be preempted by the bill. In response, a non-profit organization opposing the moratorium has released new research examining which state AI laws would be at risk if the moratorium is adopted.

Overview of the Report

The report by Americans for Responsible Innovation evaluates the likelihood of over a dozen state laws being blocked by the proposed moratorium, categorizing them as “likely,” “possible,” or “unlikely.”

The Laws Likeliest to Be Blocked

The federal moratorium is expected to impact a wide range of public interest state AI legislation, particularly laws that impose transparency requirements on AI services or target algorithmic discrimination.

Among the laws at high risk are:

  • The Colorado AI Act (SB 24-205) – This law establishes a reasonable duty of care standard for developers of high-risk AI systems to prevent discrimination and mitigate harms. The report suggests it would “likely be voided” due to its regulatory framework.
  • Utah’s Artificial Intelligence Policy Act (SB 149) – This act mandates disclosure to consumers when interacting with generative AI products.
  • California’s Artificial Intelligence Transparency Act (SB 942) – This legislation requires clear disclosures for AI-generated audio and visual materials.
  • Illinois Laws – Two consumer protection laws are also at high risk: one prohibiting the use of AI in hiring decisions in ways that result in discrimination, and another requiring applicants’ consent before AI is used to evaluate them.

The Laws That Could Be Blocked

For some state AI laws, the applicability of the moratorium is less clear. The report notes that the moratorium’s language could potentially capture laws addressing social media systems, data privacy protections, and other algorithmic technologies.

Potentially affected laws include:

  • Texas Data Privacy and Security Act (HB 4) – This act creates a consumer right to opt out of profiling, which may extend to AI technologies.
  • Maine’s LD 1585 – This legislation targets facial recognition technology and contains restrictions on third-party agreements.
  • Connecticut’s SB 3 – This law restricts certain social media system design features and automated targeted advertising.
  • New York’s Stop Addictive Feeds Exploitation For Kids Act (SB 7694A) – This landmark legislation aims to ban “addictive” social media algorithms.
  • Utah Minor Protection in Social Media Act (SB 194) – This act restricts platform features that can lead to excessive use.
  • Tennessee’s ELVIS Act (SB 2096) – This law prohibits creating unauthorized deepfakes.
  • Virginia’s Synthetic Digital Content Act (HB 2124) – This legislation criminalizes the use of synthetic media to defraud or defame.

The Laws Likeliest to Dodge the Moratorium

According to the report, laws that primarily govern how state governments themselves adopt and use AI are less likely to be blocked. During negotiations, language was added to the bill stipulating that only regulations covering AI systems entering interstate commerce would be affected.

Notable measures likely to remain unaffected include:

  • California’s Generative Artificial Intelligence Accountability Act (SB 896) – This law outlines rules for how state agencies should deploy AI technology.
  • New Hampshire’s HB 1688 – Similar to California’s measure, this act focuses on state agency deployment of AI.

Status in Limbo

The report is based on the moratorium provision as written into the reconciliation package passed by the House. However, it remains uncertain whether the AI provision will survive in the Senate and whether it will meet the eligibility criteria of the “Byrd Rule,” which limits reconciliation bills to budget-related measures.

The Senate Commerce Committee has released its own version of the reconciliation package, which includes a revised 10-year moratorium on state AI laws. Under this version, states that do not pause enforcement of their AI laws during that period would be ineligible for federal broadband subsidies.

This linkage to federal funding could enhance the provision’s chances of surviving the legislative process. Nonetheless, political support remains uncertain, with resistance from both Democrats and some Republicans.
