Revising Colorado’s AI Law: A Shift Towards Consumer Transparency

Mile-high Machine Learning: New Policy Framework Significantly Alters Colorado AI Act

A new proposal from a group convened by Colorado’s Governor aims to repeal and replace the Colorado AI Act, shifting the focus from regulating high-risk AI systems to addressing consumer-facing rights and transparency obligations.

If adopted, this framework would significantly reduce compliance obligations for both AI developers and deployers, narrow the scope of regulated systems, and recalibrate liability exposure. However, questions remain about how existing discrimination and consumer protection laws will be enforced in practice.

Background

On March 17, 2026, nearly two years after the enactment of the Colorado Artificial Intelligence (AI) Act (CAIA), the Colorado AI Policy Work Group released its proposal. The CAIA, which was modeled on the EU AI Act, emphasizes a risk-based approach to AI regulation focused on “high-risk AI systems.”

These are systems that serve as a substantial factor in consequential consumer decisions in areas such as education, employment, financing, healthcare, and legal services. The CAIA requires developers and deployers to use reasonable care to avoid algorithmic discrimination and mandates impact assessments and risk management programs.

Challenges and Industry Feedback

Prior to its enactment, industry groups expressed concerns that the CAIA could stifle innovation and disadvantage small businesses. Governor Polis and other officials called for changes informed by industry feedback, leading to the formation of the AI Policy Work Group in October 2025. The group comprises diverse stakeholders, including technology companies and consumer advocates.

Proposal Timing and Context

The timing of this proposal aligns with various state and federal developments, including the Department of Commerce’s report on AI regulations and President Trump’s National AI Legislative Framework, which aims to establish a consistent federal policy.

New Framework Proposal

The Work Group’s proposal narrows the types of systems in scope, moving from a focus on high-risk AI systems to covered automated decision-making technologies (ADMTs) that materially influence consequential decisions. Activities related to advertising, marketing, and cybersecurity are excluded from this definition.

Structural Shift: From Risk-Based Governance to Transparency

This proposal transitions from a law resembling European AI regulation to one centered on notice and transparency. Developers will no longer have a duty of care but will be required to provide documentation regarding intended uses, data categories, limitations, and instructions for appropriate use.

Deployers: From Risk Management to Record Retention

Many explicit responsibilities for deployers from the CAIA will be removed. However, they will still need to provide notices to consumers about their use of ADMT systems and inform them of any adverse outcomes.

Liability, Enforcement, and Rulemaking

Similar to the CAIA, the proposal bars private rights of action but directs the Attorney General to adopt rules concerning post-adverse-decision disclosures. It also proposes a new liability structure that allocates fault based on each party’s relative contribution to violations of existing law.

Practical Implications for Companies

While the proposal narrows the scope of AI governance, it does not eliminate exposure under existing discrimination, consumer protection, or privacy regimes. Companies must continue to monitor the technologies they use to avoid discriminatory practices.

Contractual Implications

As companies reassess their compliance programs, they should also review liability provisions in vendor agreements, as contractual clauses that reduce liability for discriminatory acts are void as against public policy.

Conclusion

Although the proposal has garnered unanimous support from the Colorado AI Policy Work Group, it still faces legislative scrutiny before becoming law. Some Colorado legislators have expressed mixed reactions, indicating that further discussion and revision may be necessary.

More Insights

Revolutionizing Drone Regulations: The EU AI Act Explained

The EU AI Act represents a significant regulatory framework that aims to address the challenges posed by artificial intelligence technologies in various sectors, including the burgeoning field of...


Embracing Responsible AI to Mitigate Legal Risks

Businesses must prioritize responsible AI as a frontline defense against legal, financial, and reputational risks, particularly in understanding data lineage. Ignoring these responsibilities could...

AI Governance: Addressing the Shadow IT Challenge

AI tools are rapidly transforming workplace operations, but much of their adoption is happening without proper oversight, leading to the rise of shadow AI as a security concern. Organizations need to...

EU Delays AI Act Implementation to 2027 Amid Industry Pressure

The EU plans to delay the enforcement of high-risk duties in the AI Act until late 2027, allowing companies more time to comply with the regulations. However, this move has drawn criticism from rights...

White House Challenges GAIN AI Act Amid Nvidia Export Controversy

The White House is pushing back against the bipartisan GAIN AI Act, which aims to prioritize U.S. companies in acquiring advanced AI chips. This resistance reflects a strategic decision to maintain...

Experts Warn of EU AI Act’s Impact on Medtech Innovation

Experts at the 2025 European Digital Technology and Software conference expressed concerns that the EU AI Act could hinder the launch of new medtech products in the European market. They emphasized...

Ethical AI: Transforming Compliance into Innovation

Enterprises are racing to innovate with artificial intelligence, often without the proper compliance measures in place. By embedding privacy and ethics into the development lifecycle, organizations...

AI Hiring Compliance Risks Uncovered

Artificial intelligence is reshaping recruitment, with the percentage of HR leaders using generative AI increasing from 19% to 61% between 2023 and 2025. However, this efficiency comes with legal...