Some States Step Up Early to Regulate AI Risk Management

In the face of an escalating global AI arms race, U.S. states are emerging as pivotal players in the regulation of AI risk management. As the landscape shifts, it is evident that states like Colorado and Utah are taking proactive measures to legislate the use of AI technologies with a focus on consumer protection.

Key Takeaways

  • With federal priorities shifting amid a global AI arms race, U.S. states have an opening to regulate AI risks.
  • Colorado and Utah have enacted significant legislation governing AI consumer interactions.
  • Other states are emphasizing existing laws that can address AI-related issues.

In recent developments, the 2023 executive order focused on AI safety was rescinded and replaced by a new directive emphasizing U.S. leadership in AI innovation. Meanwhile, the European Union has withdrawn its proposed AI liability directive, refocusing its 2025 work program on competitiveness.

The rapid advancement of AI technologies can lead to unintended consequences, as illustrated by DeepSeek, an open-source large language model whose security weaknesses were exposed shortly after its high-profile release. The episode underscores the risks of a “move fast and break things” mentality in technology development.

Legislation in Colorado

The Colorado Artificial Intelligence Act (CAIA), set to take effect on February 1, 2026, targets developers and deployers of high-risk AI systems. A high-risk AI system is defined as one that significantly influences consequential decisions affecting consumers, such as those related to education, employment, financial services, and healthcare.

One of the standout features of the CAIA is its comprehensive mitigation strategies, which include a safe harbor provision for entities that adhere to the National Institute of Standards and Technology’s AI Risk Management Framework (NIST AI RMF). This framework provides voluntary guidelines for managing AI risks throughout the AI system’s lifecycle, emphasizing principles such as reliability, safety, transparency, and fairness.

Moreover, the CAIA mandates that affected entities conduct annual impact assessments, ensuring thorough documentation and transparency regarding the data used, metrics considered, and user safeguards implemented.

Utah’s Approach to AI Regulation

Utah has likewise adopted forward-thinking legislation with the Utah Artificial Intelligence Policy Act (UAIP), effective since May 2024. The UAIP aims to bolster consumer protections while fostering responsible AI innovation through:

  • Mandatory transparency via consumer disclosure requirements.
  • Clarification of liability for AI business operations.
  • The establishment of a regulatory sandbox to facilitate responsible AI development.

A unique aspect of the UAIP is its incorporation of regulatory mitigation agreements (RMAs), which allow AI developers to test their technologies in a controlled environment while addressing potential risks. The UAIP’s emphasis on cybersecurity is particularly noteworthy, as it seeks to establish standards for this critical area.

Other States Joining the Effort

As of this writing, several states, including Virginia, have introduced AI-related legislation echoing the approaches taken by Colorado and Utah. Connecticut and New Mexico are also exploring new legislative measures to address AI challenges.

State attorneys general, such as AG Rosenblum of Oregon, have been vocal about the existing regulatory frameworks that apply to AI, underscoring the necessity for organizations to comply with laws governing consumer protection and unfair trade practices.

Preparing for the Future

While the landscape of AI regulation is still evolving, it is clear that states are laying the groundwork for future industry standards. Organizations must assess whether their policies and procedures are sufficiently robust to align with emerging legal obligations and voluntary guidance frameworks like the NIST AI RMF.

The proactive legislative actions by states reflect a growing recognition of the importance of managing AI risks while harnessing its potential for innovation and growth.
