States Take Lead in AI Regulation Initiatives

Some States Step Up Early to Regulate AI Risk Management

In the face of an escalating global AI arms race, U.S. states are emerging as pivotal players in the regulation of AI risk management. As the landscape shifts, states like Colorado and Utah are moving proactively to legislate the use of AI technologies, with a focus on consumer protection.

Key Takeaways

  • As the global AI arms race pushes federal priorities toward innovation, U.S. states are stepping in to regulate AI risks.
  • Colorado and Utah have enacted significant legislation governing AI consumer interactions.
  • Other states are emphasizing existing laws that can address AI-related issues.

In recent developments, a 2023 executive order focused on AI safety was swiftly replaced by a new directive emphasizing U.S. leadership in AI innovation. Meanwhile, the European Union has shifted its focus from an AI liability directive to a more ambitious 2025 work program aimed at enhancing competitiveness.

The rapid advancement of AI technologies can lead to unintended consequences, as demonstrated by DeepSeek, an open-source large language model whose safeguards were quickly bypassed and whose security practices drew sharp criticism shortly after release. This incident underscores the challenges posed by a “move fast and break things” mentality in technology development.

Legislation in Colorado

The Colorado Artificial Intelligence Act (CAIA), set to take effect on February 1, 2026, targets developers and deployers of high-risk AI systems. A high-risk AI system is one that makes, or is a substantial factor in making, consequential decisions affecting consumers, such as those related to education, employment, financial services, and healthcare.

One of the standout features of the CAIA is its comprehensive mitigation strategies, which include a safe harbor provision for entities that adhere to the National Institute of Standards and Technology’s AI Risk Management Framework (NIST AI RMF). This framework provides voluntary guidelines for managing AI risks throughout the AI system’s lifecycle, emphasizing principles such as reliability, safety, transparency, and fairness.

Moreover, the CAIA mandates that affected entities conduct annual impact assessments, ensuring thorough documentation and transparency regarding the data used, metrics considered, and user safeguards implemented.
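As a loose illustration only (not a legal template, and with entirely hypothetical field names), the documentation elements the CAIA calls for in an impact assessment could be tracked in a simple structured record:

```python
from dataclasses import dataclass, field
from datetime import date


@dataclass
class ImpactAssessment:
    """Hypothetical record sketching CAIA-style annual impact-assessment documentation."""
    system_name: str
    assessment_date: date
    purpose: str                                               # intended use of the high-risk system
    data_categories: list[str] = field(default_factory=list)   # data used by the system
    performance_metrics: dict[str, float] = field(default_factory=dict)  # metrics considered
    user_safeguards: list[str] = field(default_factory=list)   # safeguards implemented

    def is_current(self, today: date) -> bool:
        # Annual cadence: an assessment older than ~365 days needs renewal.
        return (today - self.assessment_date).days <= 365
```

A deployer might instantiate one record per high-risk system and check `is_current` on a compliance dashboard; the actual statutory requirements are more detailed than this sketch.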

Utah’s Approach to AI Regulation

Utah has likewise adopted forward-looking legislation with the Utah Artificial Intelligence Policy Act (UAIP), effective since May 2024. The UAIP aims to bolster consumer protections while fostering responsible AI innovation through:

  • Mandatory transparency via consumer disclosure requirements.
  • Clarification of liability for AI business operations.
  • The establishment of a regulatory sandbox to facilitate responsible AI development.
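As a rough sketch of the disclosure idea in the first bullet (hypothetical function names and wording, not statutory text), a consumer-facing chatbot might attach a disclosure like this:

```python
AI_DISCLOSURE = "You are interacting with generative AI, not a human."


def respond(user_message: str, model_reply: str, regulated_service: bool) -> str:
    """Return a reply satisfying a UAIP-style disclosure policy (illustrative sketch).

    Assumption in this sketch: providers of regulated services disclose
    proactively on every reply; otherwise, disclosure is made when the
    consumer asks whether they are talking to an AI.
    """
    asked = "are you an ai" in user_message.lower()
    if regulated_service or asked:
        return f"{AI_DISCLOSURE}\n{model_reply}"
    return model_reply
```

The exact trigger conditions and wording would come from counsel and the statute itself; the point is that disclosure is a small, testable behavior that can be built into the response path.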

A unique aspect of the UAIP is its incorporation of regulatory mitigation agreements (RMAs), which allow AI developers to test their technologies in a controlled environment while addressing potential risks. The UAIP’s emphasis on cybersecurity is particularly noteworthy, as it seeks to establish standards for this critical area.

Other States Joining the Effort

At the time of writing, several states, including Virginia, have introduced AI-related legislation echoing the approaches taken by Colorado and Utah. Connecticut and New Mexico are also exploring new legislative measures to address AI challenges.

State attorneys general, such as AG Rosenblum of Oregon, have been vocal about the existing regulatory frameworks that apply to AI, underscoring the necessity for organizations to comply with laws governing consumer protection and unfair trade practices.

Preparing for the Future

While the landscape of AI regulation is still evolving, it is clear that states are laying the groundwork for future industry standards. Organizations must assess whether their policies and procedures are sufficiently robust to align with emerging legal obligations and voluntary guidance frameworks like the NIST AI RMF.

The proactive legislative actions by states reflect a growing recognition of the importance of managing AI risks while harnessing its potential for innovation and growth.
