Some States Step Up Early to Regulate AI Risk Management
In the face of an escalating global AI arms race, U.S. states are emerging as pivotal players in the regulation of AI risk management. As the federal and international landscape shifts, states such as Colorado and Utah are moving proactively to legislate the use of AI technologies with a focus on consumer protection.
Key Takeaways
- Amid a global AI arms race, U.S. states are stepping in to regulate AI risks.
- Colorado and Utah have enacted significant legislation governing AI consumer interactions.
- Other states are emphasizing existing laws that can address AI-related issues.
In recent developments, a 2023 executive order focused on AI safety was rescinded and replaced by a new directive emphasizing U.S. leadership in AI innovation. Meanwhile, the European Union has shifted its focus from a proposed AI liability directive to a 2025 work program aimed at enhancing competitiveness.
The rapid advancement of AI technologies can lead to unintended consequences, as illustrated by DeepSeek, an open-source large language model whose security weaknesses were exposed shortly after its high-profile release. This incident underscores the challenges posed by a “move fast and break things” mentality in technology development.
Legislation in Colorado
The Colorado Artificial Intelligence Act (CAIA), set to take effect on February 1, 2026, targets developers and deployers of high-risk AI systems. A high-risk AI system is one that makes, or is a substantial factor in making, consequential decisions affecting consumers, such as those related to education, employment, financial services, and healthcare.
One of the standout features of the CAIA is its comprehensive mitigation strategies, which include a safe harbor provision for entities that adhere to the National Institute of Standards and Technology’s AI Risk Management Framework (NIST AI RMF). This framework provides voluntary guidelines for managing risks throughout an AI system’s lifecycle, emphasizing principles such as reliability, safety, transparency, and fairness.
Moreover, the CAIA mandates that affected entities conduct annual impact assessments, ensuring thorough documentation and transparency regarding the data used, metrics considered, and user safeguards implemented.
Utah’s Approach to AI Regulation
Utah has likewise adopted forward-looking legislation with the Utah Artificial Intelligence Policy Act (UAIP), effective since May 2024. The UAIP aims to bolster consumer protections while fostering responsible AI innovation through:
- Mandatory transparency via consumer disclosure requirements.
- Clarification of liability for AI business operations.
- The establishment of a regulatory sandbox to facilitate responsible AI development.
A unique aspect of the UAIP is its incorporation of regulatory mitigation agreements (RMAs), which allow AI developers to test their technologies in a controlled environment while addressing potential risks. The UAIP also gives notable attention to cybersecurity, seeking to establish standards in that critical area.
Other States Joining the Effort
At the time of writing, several states, including Virginia, have introduced AI-related legislation echoing the approaches taken by Colorado and Utah. Connecticut and New Mexico are also exploring new legislative measures to address AI challenges.
State attorneys general, such as Oregon’s AG Rosenblum, have emphasized that existing regulatory frameworks already apply to AI, underscoring the need for organizations to comply with laws governing consumer protection and unfair trade practices.
Preparing for the Future
While the landscape of AI regulation is still evolving, it is clear that states are laying the groundwork for future industry standards. Organizations must assess whether their policies and procedures are sufficiently robust to align with emerging legal obligations and voluntary guidance frameworks like the NIST AI RMF.
The proactive legislative actions by states reflect a growing recognition of the importance of managing AI risks while harnessing its potential for innovation and growth.