State Legislatures Consider New Wave of 2025 AI Legislation

State lawmakers are considering a diverse array of AI legislation, with hundreds of bills introduced in 2025. Many of these legislative proposals fall into several key categories:

  1. Comprehensive Consumer Protection
  2. Sector-Specific Legislation on Automated Decision-Making
  3. Chatbot Regulation
  4. Generative AI Transparency Requirements
  5. AI Data Center and Energy Usage Requirements
  6. Frontier Model Public Safety Legislation

These categories represent just a subset of current AI legislative activity, illustrating the major priorities of state legislatures and highlighting new AI laws that may be on the horizon.

Consumer Protection

Lawmakers in more than a dozen states have introduced legislation aimed at reducing algorithmic discrimination in high-risk AI or automated decision-making systems used to make “consequential decisions.” This approach follows the risk- and role-based model of the Colorado AI Act. Generally, these frameworks would impose duties of care on developers and deployers to protect consumers from algorithmic discrimination; require reporting of risks or instances of algorithmic discrimination to state attorneys general; mandate notices to consumers and disclosures to other parties; and establish consumer rights related to the AI system.

For instance, Virginia’s High-Risk AI Developer & Deployer Act (HB 2094) follows this model and passed out of Virginia’s legislature this month.

Sector-Specific Automated Decision-Making

Over a dozen states have introduced legislation to regulate the use of AI or automated decision-making tools (ADMT) in specific sectors, including healthcare, insurance, employment, and finance. For example:

  • Massachusetts HD 3750 would amend the state’s health insurance consumer protection law to require healthcare insurance carriers to disclose the use of AI or ADMT for reviewing insurance claims.
  • New York A773 would require banks using ADMT for lending decisions to conduct annual disparate impact analyses and disclose them to the New York Attorney General.

Additionally, bills in states like Georgia (SB 164) and Illinois (SB 2255) would prohibit employers from using ADMT to set wages unless certain requirements are met.

Chatbots

Another key trend in 2025 AI legislation focuses on AI chatbots. Various bills, such as those in Hawaii (HB 639/SB 640), Idaho (HB 127), Illinois (HB 3021), Massachusetts (SD 2223), and New York (A222), would require chatbot providers to inform users prominently that they are not interacting with a human or impose liability on chatbot providers for misleading or deceptive communications.

Generative AI Transparency

State legislatures are also considering regulation of providers of generative AI systems and platforms hosting synthetic content. Bills like Washington HB 1170 and Florida HB 369 would require generative AI providers to include watermarks in AI-generated outputs and provide free AI detection tools for users. Other legislation, such as Illinois SB 1792 and Utah SB 226, would require generative AI owners to display notices informing users of the use of generative AI or warning them that AI-generated outputs may be inaccurate, inappropriate, or harmful.

AI Data Centers & Energy

Lawmakers are addressing the growing energy demands of AI development and related environmental concerns through newly introduced legislation. For instance, California AB 222 would require data centers to estimate and report the total energy used to develop certain large AI models. Similarly, Massachusetts HD 4192 would require AI developers and operators of greenhouse gas emission sources to monitor, track, and report environmental impacts and mitigations.

Frontier Model Public Safety

Following the passage and subsequent veto of California SB 1047, California State Senator Scott Wiener filed SB 53 to establish safeguards for the development of AI frontier models. Other states are considering legislation to address public safety risks posed by frontier or foundation models, generally defined as AI models meeting certain computational or monetary thresholds. For example, Illinois HB 3506 would require developers of certain large AI models to conduct risk assessments every 90 days and publish annual third-party audits.

Rhode Island H 5224 would impose strict liability on developers of covered AI models for all injuries to non-users that are factually and proximately caused by the covered model.

Although the likelihood of passage for these AI bills remains unclear, any state AI legislation that does pass is likely to significantly shape the U.S. AI regulatory landscape, especially in the absence of federal action on AI. Monitoring these and related AI developments will be crucial moving forward.
