States Take Lead in AI Regulation Initiatives

Some States Step Up Early to Regulate AI Risk Management

In the face of an escalating global AI arms race, U.S. states are emerging as pivotal players in the regulation of AI risk management. As the landscape shifts, states such as Colorado and Utah are moving proactively to legislate the use of AI technologies, with a focus on consumer protection.

Key Takeaways

  • Amid a global AI arms race, U.S. states have an opening to regulate AI risks effectively.
  • Colorado and Utah have enacted significant legislation governing AI consumer interactions.
  • Other states are emphasizing existing laws that can address AI-related issues.

In recent developments, a 2023 executive order focused on AI safety was swiftly replaced by a new directive emphasizing U.S. leadership in AI innovation. Meanwhile, the European Union has shifted its attention from a proposed AI liability directive to a more ambitious 2025 work program aimed at boosting competitiveness.

The rapid advancement of AI technologies can lead to unintended consequences, as demonstrated by the rise and subsequent downfall of DeepSeek, an open-source large language model that was quickly hacked and discredited. This incident underscores the challenges posed by a “move fast and break things” mentality in technology development.

Legislation in Colorado

The Colorado Artificial Intelligence Act (CAIA), set to take effect on February 1, 2026, targets developers and deployers of high-risk AI systems. A high-risk AI system is defined as one that significantly influences consequential decisions affecting consumers, such as those related to education, employment, financial services, and healthcare.

One of the standout features of the CAIA is its comprehensive mitigation strategies, which include a safe harbor provision for entities that adhere to the National Institute of Standards and Technology’s AI Risk Management Framework (NIST AI RMF). This framework provides voluntary guidelines for managing AI risks throughout the AI system’s lifecycle, emphasizing principles such as reliability, safety, transparency, and fairness.
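
To make the framework’s structure concrete, the sketch below shows one way an organization might record lifecycle risks against the four NIST AI RMF core functions (Govern, Map, Measure, Manage). It is a minimal illustration, not an official schema; the field names and example entries are assumptions.

```python
from dataclasses import dataclass
from enum import Enum


class RMFFunction(Enum):
    """The four core functions of the NIST AI Risk Management Framework."""
    GOVERN = "govern"
    MAP = "map"
    MEASURE = "measure"
    MANAGE = "manage"


@dataclass
class RiskEntry:
    """One lifecycle risk, tracked against an AI RMF core function."""
    system_name: str
    description: str
    rmf_function: RMFFunction
    mitigation: str
    owner: str


# Hypothetical example: a risk register for a resume-screening model.
register = [
    RiskEntry(
        system_name="resume-screener-v2",
        description="Training data may under-represent some applicant groups",
        rmf_function=RMFFunction.MAP,
        mitigation="Audit training-data demographics before each retraining run",
        owner="ML platform team",
    ),
    RiskEntry(
        system_name="resume-screener-v2",
        description="No documented process for contesting automated rejections",
        rmf_function=RMFFunction.GOVERN,
        mitigation="Publish an appeal procedure routed to human review",
        owner="HR compliance",
    ),
]

for entry in register:
    print(f"[{entry.rmf_function.value}] {entry.system_name}: {entry.description}")
```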

Moreover, the CAIA mandates that affected entities conduct annual impact assessments, ensuring thorough documentation and transparency regarding the data used, metrics considered, and user safeguards implemented.
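
As a rough illustration of what such documentation might capture, the sketch below models an annual impact-assessment record covering the data used, the metrics considered, and the consumer safeguards in place. The fields and example values are assumptions for illustration and do not reproduce the statute’s exact requirements.

```python
from dataclasses import dataclass
from datetime import date


@dataclass
class ImpactAssessment:
    """Hypothetical record for an annual high-risk AI system impact assessment."""
    system_name: str
    assessment_date: date
    purpose: str                            # consequential decision the system informs
    data_categories: list[str]              # kinds of consumer data processed
    performance_metrics: dict[str, float]   # metrics considered, with latest values
    known_limitations: list[str]
    consumer_safeguards: list[str]

    def is_current(self, today: date) -> bool:
        """Annual cadence check: refreshed within the last year?"""
        return (today - self.assessment_date).days <= 365


assessment = ImpactAssessment(
    system_name="loan-underwriting-model",
    assessment_date=date(2026, 2, 1),
    purpose="Recommends approval or denial of consumer credit applications",
    data_categories=["credit history", "income", "employment status"],
    performance_metrics={"auc": 0.87, "approval_rate_gap": 0.03},
    known_limitations=["Limited data on thin-file applicants"],
    consumer_safeguards=["Adverse-action notices", "Human review on request"],
)

print(assessment.is_current(date(2026, 6, 1)))  # True: within the annual window
```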

Utah’s Approach to AI Regulation

Utah has likewise adopted forward-thinking legislation with the Utah Artificial Intelligence Policy Act (UAIP), in effect since May 2024. The UAIP aims to bolster consumer protections while fostering responsible AI innovation through:

  • Mandatory transparency via consumer disclosure requirements (see the sketch after this list).
  • Clarification of liability for AI business operations.
  • The establishment of a regulatory sandbox to facilitate responsible AI development.
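
As a rough sketch of the first point, the snippet below shows one way a deployer might surface an AI disclosure in a consumer-facing chat flow. The trigger phrases, the disclosure wording, and the stricter handling for regulated occupations are assumptions for illustration, not a restatement of what the UAIP requires.

```python
AI_DISCLOSURE = (
    "You are interacting with generative artificial intelligence, "
    "not a human representative."
)

# Hypothetical phrases that should trigger a disclosure when a consumer asks.
DISCLOSURE_TRIGGERS = ("are you a bot", "are you human", "am i talking to ai")


def respond(user_message: str, model_reply: str, regulated_occupation: bool) -> str:
    """Return the assistant reply, adding an AI disclosure where appropriate.

    regulated_occupation: if True, disclose proactively on every reply
    (a stricter posture sometimes applied to licensed professions).
    """
    asked = any(t in user_message.lower() for t in DISCLOSURE_TRIGGERS)
    if regulated_occupation or asked:
        return f"{AI_DISCLOSURE}\n\n{model_reply}"
    return model_reply


print(respond("Are you human?", "I can help you reschedule your appointment.", False))
```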

A unique aspect of the UAIP is its incorporation of regulatory mitigation agreements (RMAs), which allow AI developers to test their technologies in a controlled environment while addressing potential risks. The UAIP’s emphasis on cybersecurity is particularly noteworthy, as it seeks to establish standards for this critical area.

Other States Joining the Effort

At the time of writing, several states, including Virginia, have introduced AI-related legislation echoing the approaches taken by Colorado and Utah, while Connecticut and New Mexico are exploring new legislative measures to address AI challenges.

State attorneys general, such as Oregon’s AG Rosenblum, have emphasized that existing regulatory frameworks already apply to AI, underscoring that organizations must comply with laws governing consumer protection and unfair trade practices.

Preparing for the Future

While the landscape of AI regulation is still evolving, it is clear that states are laying the groundwork for future industry standards. Organizations must assess whether their policies and procedures are sufficiently robust to align with emerging legal obligations and voluntary guidance frameworks like the NIST AI RMF.

The proactive legislative actions by states reflect a growing recognition of the importance of managing AI risks while harnessing its potential for innovation and growth.
