States at Risk: The Impact of a Federal AI Moratorium

The State AI Laws Likeliest To Be Blocked by a Moratorium

In the coming weeks, the United States Senate is expected to ramp up consideration of a sprawling budget bill that could block states from enforcing artificial intelligence regulations for a decade. This provision, which has faced significant opposition from state lawmakers and advocacy groups, was approved by House Republicans last month.

The primary goal of this measure is to eliminate what is perceived as a cumbersome patchwork of AI regulations across the nation that could hinder innovation. However, critics warn that the federal moratorium may preempt a wide array of existing regulations, including those concerning AI in healthcare, algorithmic discrimination, deepfakes, and online child abuse.

Legal experts have cautioned that significant uncertainty remains regarding which specific laws would be preempted by the bill. In response, a non-profit organization opposing the moratorium has released new research examining which state AI laws would be at risk if the moratorium is adopted.

Overview of the Report

The report by Americans for Responsible Innovation evaluates the likelihood of over a dozen state laws being blocked by the proposed moratorium, categorizing them as “likely,” “possible,” or “unlikely.”

The Laws Likeliest to Be Blocked

The federal moratorium is expected to impact a wide range of public interest state AI legislation, particularly laws that impose transparency requirements on AI services or target algorithmic discrimination.

Among the laws at high risk are:

  • The Colorado AI Act (SB 24-205) – This law establishes a reasonable duty of care standard for developers of high-risk AI systems to prevent discrimination and mitigate harms. The report suggests it would “likely be voided” due to its regulatory framework.
  • Utah’s Artificial Intelligence Policy Act (SB 149) – This act mandates disclosure to consumers when interacting with generative AI products.
  • California’s Artificial Intelligence Transparency Act (SB 942) – This legislation requires clear disclosures for AI-generated audio and visual materials.
  • Illinois Laws – Two consumer protection laws are also at high risk: one prohibits the use of AI in hiring decisions in ways that lead to discrimination, and another requires consent from applicants before AI is used to evaluate them.

The Laws That Could Be Blocked

For some state AI laws, the applicability of the moratorium is less clear. The report notes that the moratorium’s language could potentially capture laws addressing social media systems, data privacy protections, and other algorithmic technologies.

Potentially affected laws include:

  • Texas Data Privacy and Security Act (HB 4) – This act gives consumers the right to opt out of profiling, a provision that could be read to regulate AI technologies.
  • Maine’s LD 1585 – This legislation targets facial recognition technology and contains restrictions on third-party agreements.
  • Connecticut’s SB 3 – This law restricts certain social media system design features and automated targeted advertising.
  • New York’s Stop Addictive Feeds Exploitation For Kids Act (SB 7694A) – This landmark legislation aims to ban “addictive” social media algorithms.
  • Utah Minor Protection in Social Media Act (SB 194) – This act restricts platform features that can lead to excessive use.
  • Tennessee’s ELVIS Act (SB 2096) – This law prohibits unauthorized deepfakes that imitate a person’s voice or likeness.
  • Virginia’s Synthetic Digital Content Act (HB 2124) – This legislation criminalizes the use of synthetic media to defraud or defame.

The Laws Likeliest to Dodge the Moratorium

According to the report, laws that primarily govern how state governments themselves adopt and use AI are less likely to be blocked. During negotiations, language was inserted into the bill stipulating that only regulations covering AI systems entering interstate commerce would be affected.

Notable measures likely to remain unaffected include:

  • California’s Generative Artificial Intelligence Accountability Act (SB 896) – This law outlines rules for how state agencies should deploy AI technology.
  • New Hampshire’s HB 1688 – Similar to California’s measure, this act focuses on state agency deployment of AI.

Status in Limbo

The report is based on the moratorium provision as it was written into the reconciliation package passed by the House. However, it remains uncertain whether the AI provision will survive in the Senate and whether it can clear the “Byrd Rule,” which bars provisions without a direct budgetary effect from reconciliation bills.

The Senate Commerce Committee has released its version of the reconciliation package, which includes a revised 10-year moratorium on state AI laws. Under this version, states that do not pause enforcement of their AI laws for that period would be blocked from receiving federal broadband subsidies.

This linkage to federal funding could enhance the provision’s chances of surviving the legislative process. Nonetheless, political support remains uncertain, with resistance from both Democrats and some Republicans.
