States Take Lead in AI Regulation Initiatives

Some States Step Up Early to Regulate AI Risk Management

In the face of an escalating global AI arms race, U.S. states are emerging as pivotal players in the regulation of AI risk management. As the landscape shifts, it is evident that states like Colorado and Utah are taking proactive measures to legislate the use of AI technologies with a focus on consumer protection.

Key Takeaways

  • Amid a global AI arms race and shifting federal priorities, U.S. states are stepping in to regulate AI risks.
  • Colorado and Utah have enacted significant legislation governing AI consumer interactions.
  • Other states are emphasizing existing laws that can address AI-related issues.

In recent developments, the 2023 executive order focused on AI safety was rescinded and replaced by a new directive emphasizing U.S. leadership in AI innovation. Meanwhile, the European Union has withdrawn its proposed AI liability directive in favor of a 2025 work program aimed at enhancing competitiveness.

The rapid advancement of AI technologies can lead to unintended consequences, as demonstrated by DeepSeek, an open-source large language model that was quickly jailbroken and found to have significant security lapses shortly after its release. This incident underscores the risks of a “move fast and break things” mentality in technology development.

Legislation in Colorado

The Colorado Artificial Intelligence Act (CAIA), set to take effect on February 1, 2026, targets developers and deployers of high-risk AI systems. A high-risk AI system is defined as one that significantly influences consequential decisions affecting consumers, such as those related to education, employment, financial services, and healthcare.

One of the standout features of the CAIA is its comprehensive mitigation strategies, which include a safe harbor provision for entities that adhere to the National Institute of Standards and Technology’s AI Risk Management Framework (NIST AI RMF). This framework provides voluntary guidelines for managing AI risks throughout the AI system’s lifecycle, emphasizing principles such as reliability, safety, transparency, and fairness.

Moreover, the CAIA mandates that affected entities conduct annual impact assessments, ensuring thorough documentation and transparency regarding the data used, metrics considered, and user safeguards implemented.

Utah’s Approach to AI Regulation

Following suit, Utah has also adopted forward-thinking legislation with the Utah Artificial Intelligence Policy Act (UAIP), effective since May 2024. The UAIP aims to bolster consumer protections while fostering responsible AI innovation through:

  • Mandatory transparency via consumer disclosure requirements.
  • Clarification of liability for AI business operations.
  • The establishment of a regulatory sandbox to facilitate responsible AI development.

A unique aspect of the UAIP is its incorporation of regulatory mitigation agreements (RMAs), which allow AI developers to test their technologies in a controlled environment while addressing potential risks. The UAIP’s emphasis on cybersecurity is particularly noteworthy, as it seeks to establish standards for this critical area.

Other States Joining the Effort

At the time of writing, several states, including Virginia, have introduced AI-related legislation echoing the approaches taken by Colorado and Utah. Connecticut and New Mexico are also exploring new legislative measures to address AI challenges.

State attorneys general, such as AG Rosenblum of Oregon, have emphasized that existing laws governing consumer protection and unfair trade practices already apply to AI, underscoring the need for organizations to comply with those frameworks today.

Preparing for the Future

While the landscape of AI regulation is still evolving, it is clear that states are laying the groundwork for future industry standards. Organizations must assess whether their policies and procedures are sufficiently robust to align with emerging legal obligations and voluntary guidance frameworks like the NIST AI RMF.

The proactive legislative actions by states reflect a growing recognition of the importance of managing AI risks while harnessing its potential for innovation and growth.
