The Colorado AI Act Shuffle: One Step Forward, Two Steps Back

Colorado waded into the deep end of AI regulation last year with the Colorado AI Act (Senate Bill 24-205), a sweeping law designed to rein in the risks of artificial intelligence (AI) and automated decision systems (ADS). Billed as a safeguard against AI running amok in high-stakes decisions like hiring, lending, and housing, the law sets out to manage those risks while keeping innovation alive.

However, as with any ambitious legislation, particularly in the technology space, the rollout has been anything but smooth. Industry groups worry the Act is too rigid and vague, while consumer advocates argue it doesn’t go far enough. To sort it all out, Colorado launched the Colorado Artificial Intelligence Impact Task Force, a group of policymakers, industry insiders, and legal experts tasked with identifying where the law works, where it doesn’t, and how to fix it.

After months of heated debates and deep dives into AI policy, the Task Force delivered its verdict in a February 2025 report. The findings? Some issues have clear solutions, others need more negotiation, and a few remain as controversial as a self-driving car without a steering wheel.

The Criticisms

The Colorado AI Act was hailed as groundbreaking, but not everyone was thrilled. Some of the biggest complaints regarding the first-of-its-kind legislation included:

  • Too Broad, Too Vague – Key terms like “algorithmic discrimination” and “consequential decisions” are open to interpretation, leaving businesses wondering whether they’re in compliance or on the chopping block;
  • A Raw Deal for Small Businesses – Some argue that the compliance burden falls disproportionately on smaller AI startups that lack the legal firepower of Big Tech;
  • Transparency vs. Trade Secrets – The law’s disclosure requirements have raised red flags in the private sector, with concerns that companies may be forced to reveal proprietary AI models and other confidential information;
  • Enforcement Nightmares – The attorney general’s authority and the law’s timeline for implementation remain points of contention. Some say the law moves too fast; others say it doesn’t have enough bite.

The AI Impact Task Force set out to smooth over these tensions and offer practical recommendations.

What the Task Force Found

Between August 2024 and January 2025, the Task Force heard from lawmakers, academics, tech leaders, consumer advocates, and government officials. Its report categorizes the AI Act’s issues into four groups:

1. Issues With Apparent Consensus on Proposed Changes

Some relatively minor tweaks have universal support, including:

  • Clarifying ambiguous AI-related definitions;
  • Adjusting documentation requirements for developers and deployers to avoid unnecessary red tape.

2. Issues Where Consensus on Changes Appears Achievable With Additional Time

Some concerns have merit, but the devil is in the details; these fixes will need more time and negotiation:

  • Redefining “consequential decisions” – the goal? Make sure the law targets actual high-risk AI applications without overreaching;
  • Fine-tuning exemptions – who exactly should be subject to the law? The answer isn’t simple, and industry concerns must be balanced against consumer protections;
  • Timing and scope of AI impact assessments – when and how should companies be required to evaluate risk? The current deadlines and requirements might need adjustments to make compliance more practical.

3. Issues Where Consensus Depends on Implementation and Coordination

Some proposed changes can’t happen in isolation – they’re tangled up with other provisions. So while these changes sparked interest, agreement on them hinges on broader tradeoffs. Examples include:

  • Reworking the definition of “algorithmic discrimination” without undermining consumer protections or enforceability;
  • Determining what AI-related data companies must share with the attorney general – and under what conditions;
  • Balancing risk management obligations against practical implementation challenges, including aligning deployer risk management requirements with impact assessment obligations.

4. Issues With Firm Disagreement

And then there are the hardcore battles. Here, the Task Force found, industry groups, consumer advocates, and policymakers remain miles apart, with “firm disagreements” over proposed changes so entrenched that the Task Force was unable to make substantive recommendations:

  • The “Duty of Care” Dilemma – should AI developers and deployers have a formal responsibility to prevent harm, or should their obligations be less stringent?
  • The “Substantial Factor” Dilemma – when is an AI system a “substantial factor” in a consequential decision, and therefore subject to the Act’s requirements?
  • The Small Business Exemption – should startups and smaller AI companies with fewer than 50 employees get a pass on some of the compliance requirements?
  • The “Fix-It” Window – should companies get a chance to correct violations (a “right to cure”) before enforcement kicks in?
  • Attorney General’s Rulemaking Power – how much control should the AG have over shaping AI regulation via rulemaking and enforcement?

The Bottom Line

The Colorado AI Act isn’t going away, but it’s likely to get some serious retooling. The Task Force’s report sketches out a roadmap for legislative refinements – starting with the easy fixes and working toward compromise on the stickier points.

The big takeaway? Colorado’s AI regulations are still a work in progress, and the battle over how to regulate AI – without stifling innovation – has only just begun. As Colorado stands at the forefront of AI regulation, this process isn’t just about one state’s laws – it’s a test case for how AI will be governed across the country. Expect more revisions, more debate, and plenty of lessons for other states watching from the sidelines.
