Colorado AI Regulations Overhaul: A New Consensus Emerges

Working Group Reaches Consensus on Fixes to Colorado AI Regulations

Five months after Governor Jared Polis convened business, education, and consumer-rights advocates to resolve a two-year stalemate over artificial intelligence (AI) regulation, the group has agreed on a plan aimed at preventing discrimination by AI systems.

The agreement, announced by the Democratic governor, is expected to pave the way for a consensus AI regulatory framework in the legislature, replacing the controversial rules that Polis reluctantly signed into law in May 2024. Those rules, set to go into effect at the end of June, will be supplanted by a new approach requiring the Colorado Attorney General’s Office to finalize rulemaking by the end of 2026, shortly before the current attorney general, Phil Weiser, leaves office.

Background on Colorado’s 2024 Law

The 2024 law was heralded as the most comprehensive AI regulation in the United States but faced immediate backlash due to provisions that tech companies argued could drive AI developers out of the state. Notable criticisms included:

  • Detailed disclosures required from deployers and developers regarding the risk of discrimination by AI.
  • A cumbersome review process allowing consumers to petition for reconsideration of adverse AI decisions in areas such as job applications and insurance approvals.

Senate Majority Leader Robert Rodriguez, who sponsored the 2024 law, attempted to amend the problematic aspects through a task force but ultimately abandoned the effort due to lack of support. Subsequent attempts to rewrite the regulations in the 2025 special session failed, leading to a postponement of the law’s effective date.

Formation of the Working Group

In response to ongoing challenges, Governor Polis assembled a working group in October to address areas of disagreement. The group facilitated discussions between business and technology representatives and civil rights and labor advocates. As the 2026 legislative session approached, the working group focused on critical issues such as liability and appeals, ultimately garnering unanimous support for a new AI regulatory framework.

Key Features of the Proposed Framework

The new framework imposes several significant requirements on AI developers and deployers:

  • Notification Requirements: AI developers must inform deployers about the operational mechanics of the AI system and any known risks, particularly when the system is expected to make consequential decisions.
  • Consequential Decisions: When automated decision-making technologies are employed, deployers must provide clear notice to affected individuals. These decisions may pertain to:
    • Educational enrollment or opportunities
    • Employment opportunities
    • Real estate transactions
    • Financial or lending decisions
    • Insurance underwriting and claims
    • Healthcare services
    • Eligibility for government services
  • Adverse Decisions: If an AI system makes an adverse decision, the deployer is required to provide a detailed explanation within 30 days and offer a process for human review.

The framework also allows for appeals, but the process is designed to be “commercially reasonable,” limiting potential abuse by consumers. Enforcement authority is assigned exclusively to the Attorney General’s office, which can impose civil penalties for violations, while developers are given a 90-day window to rectify any issues.

Liability and Compliance Provisions

One of the contentious points was liability. The new framework proposes an allocation of fault between developers and deployers based on their respective roles in any violations. Developers will be absolved of fault if deployers misuse the AI system contrary to its intended use. Additionally, the framework does not create a new private right of action, preventing individuals from filing civil lawsuits based on the regulations.

Conclusion

The consensus reached by the AI Policy Working Group marks a significant step toward resolving the criticisms surrounding Colorado’s AI regulations. The proposed framework aims to balance consumer protection and innovation while ensuring that the AI sector remains viable in the state. As the legislative process unfolds, stakeholders hope that the core principles established will remain intact, paving the way for a more rational approach to AI regulation.

More Insights

Revolutionizing Drone Regulations: The EU AI Act Explained

The EU AI Act represents a significant regulatory framework that aims to address the challenges posed by artificial intelligence technologies in various sectors, including the burgeoning field of...

Embracing Responsible AI to Mitigate Legal Risks

Businesses must prioritize responsible AI as a frontline defense against legal, financial, and reputational risks, particularly in understanding data lineage. Ignoring these responsibilities could...

AI Governance: Addressing the Shadow IT Challenge

AI tools are rapidly transforming workplace operations, but much of their adoption is happening without proper oversight, leading to the rise of shadow AI as a security concern. Organizations need to...

EU Delays AI Act Implementation to 2027 Amid Industry Pressure

The EU plans to delay the enforcement of high-risk duties in the AI Act until late 2027, allowing companies more time to comply with the regulations. However, this move has drawn criticism from rights...

White House Challenges GAIN AI Act Amid Nvidia Export Controversy

The White House is pushing back against the bipartisan GAIN AI Act, which aims to prioritize U.S. companies in acquiring advanced AI chips. This resistance reflects a strategic decision to maintain...

Experts Warn of EU AI Act’s Impact on Medtech Innovation

Experts at the 2025 European Digital Technology and Software conference expressed concerns that the EU AI Act could hinder the launch of new medtech products in the European market. They emphasized...

Ethical AI: Transforming Compliance into Innovation

Enterprises are racing to innovate with artificial intelligence, often without the proper compliance measures in place. By embedding privacy and ethics into the development lifecycle, organizations...

AI Hiring Compliance Risks Uncovered

Artificial intelligence is reshaping recruitment, with the percentage of HR leaders using generative AI increasing from 19% to 61% between 2023 and 2025. However, this efficiency comes with legal...