DOJ Teams with xAI to Challenge Colorado AI Act

Background and Federal Stance

For much of the past year, the Trump Administration signaled opposition to state AI regulations, arguing they could stifle innovation and hinder the United States’ ability to lead the AI race. In line with this approach, the U.S. Department of Justice (DOJ) intervened on April 24, 2026, in a lawsuit filed by xAI that challenges Colorado’s SB 24‑205, commonly known as the Colorado AI Act.

Key Claims by the DOJ

The DOJ alleges the Colorado AI Act violates the Equal Protection Clause in two distinct ways:

  1. Compelled Discrimination: The Act imposes disparate-impact liability that forces developers and deployers to alter AI model outputs, effectively requiring discrimination based on protected characteristics such as race, sex, or religion.
  2. Authorized Discrimination: A provision (Section 6-1-1701(1)(b)) exempts AI systems whose sole purpose is to “expand an applicant, customer, or participant pool to increase diversity or redress historical discrimination,” thereby permitting intentional discrimination.

Colorado AI Act Overview

Enacted in May 2024, the Colorado AI Act targets “high-risk” AI systems—those that substantially influence consequential decisions in employment, housing, financial lending, and education. Core requirements include:

  • Developers must disclose training data, system design, and potential algorithmic discrimination risks.
  • Deployers must implement risk-management policies, conduct impact assessments, and notify consumers when AI systems affect consequential decisions.

Critics across the political spectrum, including Governor Jared Polis and Attorney General Phil Weiser, argue that a patchwork of state regulations threatens the growth of a strong technology sector. The Act’s effective date has been delayed to June 30, 2026, but it remains poised for enforcement.

xAI’s Lawsuit

On April 9, 2026, Elon Musk’s xAI filed suit in the U.S. District Court for the District of Colorado, seeking an injunction on several grounds:

  • Violation of the First Amendment by compelling speech and burdening users’ speech rights.
  • Violation of the Commerce Clause by regulating out-of-state actors and impeding interstate commerce.
  • Unconstitutionally vague provisions.
  • Violation of the Equal Protection Clause.

xAI previously challenged California’s AB 2013, a transparency law for AI training data, which remains pending before the Ninth Circuit Court of Appeals.

DOJ Intervention Details

The DOJ’s filing argues that the Act’s disparate-impact liability and its diversity-exemption provision together “compel” and “authorize” discrimination, respectively. Assistant Attorneys General Harmeet K. Dhillon and Brett A. Shumate framed the Act as promoting a “radical, far-left worldview” that conflicts with the Constitution and national security interests.

Potential Implications

The DOJ’s involvement could signal a broader federal strategy to preempt state AI regulations through litigation, echoing President Trump’s December executive order that called for an AI Litigation Task Force. While the DOJ’s press release did not reference the Task Force, the intervention aligns with its directive to challenge state AI laws on constitutional and commerce grounds.

Future outcomes may include:

  • Renewed efforts to amend or further delay the Colorado AI Act.
  • Potential non-enforcement by Colorado’s Attorney General pending rulemaking.
  • Increased likelihood of other states facing similar federal challenges.

Looking Ahead

Developers and deployers of high-risk AI systems should monitor this litigation closely, as the resolution will shape compliance obligations not only in Colorado but potentially across the United States. The DOJ’s alignment with private actors like xAI may also encourage additional lawsuits, amplifying uncertainty for AI stakeholders.
