Mile-high Machine Learning: New Policy Framework Significantly Alters Colorado AI Act
A new proposal from a group convened by Colorado’s Governor aims to repeal and replace the Colorado AI Act, shifting the focus from regulating high-risk AI systems to addressing consumer-facing rights and transparency obligations.
If adopted, this framework would significantly reduce compliance obligations for both AI developers and deployers, narrow the scope of regulated systems, and recalibrate liability exposure. However, questions remain about how existing discrimination and consumer protection laws will be enforced in practice.
Background
On March 17, 2026, nearly two years after the enactment of the Colorado Artificial Intelligence (AI) Act (CAIA), the Colorado AI Policy Work Group released its proposal. The CAIA, which was modeled on the EU AI Act, emphasizes a risk-based approach to AI regulation focused on “high-risk AI systems.”
High-risk AI systems are those that make, or substantially influence, consequential decisions affecting consumers in areas such as education, employment, financial services, healthcare, and legal services. The CAIA imposes a duty of reasonable care on developers and deployers to protect consumers from algorithmic discrimination, and it requires deployers to complete impact assessments and maintain risk management programs.
Challenges and Industry Feedback
Prior to its enactment, industry groups expressed concerns that the CAIA could stifle innovation and disadvantage small businesses. Governor Polis and other officials called for changes informed by industry feedback, leading to the formation of the AI Policy Work Group in October 2025. The group comprises diverse stakeholders, including technology companies and consumer advocates.
Proposal Timing and Context
The timing of this proposal aligns with various state and federal developments, including the Department of Commerce’s report on AI regulations and President Trump’s National AI Legislative Framework, which aims to establish a consistent federal policy.
New Framework Proposal
The Work Group’s proposal narrows the types of systems in scope, moving from high-risk AI systems to “covered ADMTs” — automated decision-making technologies that materially influence consequential decisions. Activities related to advertising, marketing, and cybersecurity are excluded from this definition.
Structural Shift: From Risk-Based Governance to Transparency
The proposal would transition Colorado from a law resembling European AI regulation to one centered on notice and transparency. Developers would no longer owe a duty of care but would be required to provide documentation regarding intended uses, data categories, limitations, and instructions for appropriate use.
Deployers: From Risk Management to Record Retention
The proposal would remove many of the CAIA’s explicit deployer obligations. Deployers would, however, still need to notify consumers about their use of ADMT systems and inform them of any adverse outcomes.
Liability, Enforcement, and Rulemaking
Like the CAIA, the proposal bars private rights of action, but it directs the Attorney General to adopt rules governing the disclosures that must follow adverse decisions. It also proposes a new liability structure that allocates fault based on each party’s relative contribution to violations of existing law.
Practical Implications for Companies
While the proposal narrows the scope of AI governance, it does not eliminate exposure under existing discrimination, consumer protection, or privacy regimes. Companies must continue to monitor the technologies they use to avoid discriminatory practices.
Contractual Implications
As companies reassess their compliance programs, they should also review liability provisions in vendor agreements, since contractual clauses that purport to limit liability for discriminatory acts may be void as against public policy.
Conclusion
Although the proposal has garnered unanimous support from the Colorado AI Policy Work Group, it still faces legislative scrutiny before becoming law. Some Colorado legislators have expressed mixed reactions, indicating that further discussion and revision may be necessary.