Colorado’s AI Law Faces Major Overhaul Before Deadline


The Colorado Artificial Intelligence Act (AI Act), enacted in 2024, is set to become effective on June 30, 2026. As the deadline approaches, the Colorado AI Policy Work Group, backed by Governor Jared Polis, released a proposed framework on March 17, 2026, that would replace much of the original Act with a streamlined regime and push the effective date to January 1, 2027.

Key Takeaways from the Proposal

Scope Reduction: The proposal narrows regulation to certain “automated decision‑making tools” (ADMT) used in “consequential decisions” that materially influence outcomes for consumers, employees, or applicants. It excludes incidental, trivial, or clerical uses, as well as tools such as spellcheckers, calculators, and spreadsheets whose output requires human analysis.

Eliminated Obligations: Employers would no longer need to implement risk‑management programs, conduct impact assessments or annual reviews, or report algorithmic discrimination—requirements that many have found onerous under the current AI Act.

Notice and Disclosure Requirements: The proposal introduces a two‑step notice system. Deployers must provide prior notice that covered ADMT is being used, potentially via a public link near the point of interaction. After an adverse consequential decision, deployers must disclose, within 30 days, a plain‑language description of the decision, details about the ADMT (name, version, developer, data sources), instructions for data access and correction under Colorado privacy law, and how to request meaningful human review.

Enforcement Changes: Enforcement remains with the Colorado Attorney General, but the proposal removes a private right of action. The AG must give written notice of a violation and a 90‑day cure period; if cured, civil penalties are not available for that specific violation.

What Remains Uncertain

While the proposal offers a more employer‑friendly framework, it is not yet law. The original AI Act remains on the books and will take effect as scheduled on June 30, 2026 unless the proposal is enacted. Employers should therefore continue preparing for compliance with the current Act while monitoring legislative developments.

Practical Steps for Employers

1. Assess Current ADMT Use: Identify which automated tools fall under the definition of “covered ADMT” and evaluate whether they materially influence consequential decisions.

2. Develop Notice Mechanisms: Prepare public notice statements and placement strategies for pre‑use disclosure.

3. Plan for Post‑Decision Disclosures: Draft templates that include the required information for adverse outcomes, ensuring they can be delivered within the 30‑day window.

4. Monitor Legislative Updates: Track the progress of the proposal and any rulemaking that may clarify disclosure elements.

5. Maintain Compliance Pathways: Continue building risk‑management policies, impact assessments, and discrimination reporting processes to meet the existing AI Act requirements until the new framework, if adopted, takes effect.

Conclusion

The Colorado AI Policy Work Group’s proposal represents a significant shift toward a lighter regulatory regime for AI, focusing on transparency and notice rather than extensive risk‑management obligations. However, until the proposal is formally enacted, the comprehensive AI Act remains effective on June 30, 2026. Employers should adopt a dual‑track approach: prepare for the current law while staying ready to transition to the proposed framework should it become law.
