According to a recent court filing, Colorado’s Attorney General does not intend to enforce the Colorado Artificial Intelligence Act (CAIA) after the Colorado legislature again delayed its effective date. While this announcement reduces the pressure of immediate CAIA compliance, organizations should continue to monitor federal developments and evaluate their obligations under other state AI and privacy laws governing automated decision-making, such as those in California, Texas, and Utah.
The Colorado AG’s court filing came in response to the first enforcement action under President Trump’s Executive Order 14365, Ensuring a National Policy Framework for Artificial Intelligence. The U.S. Department of Justice recently intervened in a lawsuit challenging the CAIA, a move consistent with the current administration’s stated positions and the timeline set forth in the EO.
In response, Colorado stipulated to stay any enforcement of the CAIA. Separately, earlier this spring the Colorado AI Policy Working Group published an outline for legislation to repeal and replace the CAIA; that proposal has not been introduced in the legislature, and Colorado’s legislative session ends on May 13. Pending any such new law, and pursuant to the stipulation, the Colorado Attorney General does not intend to enforce the CAIA until the office completes rulemaking under the existing CAIA or any replacement AI bill.
Colorado’s law was the first comprehensive law in the US regulating AI development and deployment. Set to take effect in June 2026 after an initial delay, the CAIA, among other things, would have imposed disclosure, reporting, and risk‑mitigation obligations on companies that develop or use “high risk” AI systems relating to employment, housing, education, healthcare, legal services, and financial services.
The DOJ asserted in its intervenor complaint that the CAIA violated the Equal Protection Clause of the Fourteenth Amendment because its restrictions on “high risk” activities exempt practices intended “to increase diversity or redress historical discrimination.” According to the DOJ, this structure would have impermissibly authorized differential treatment based on protected classifications.
For clients that develop or use high-risk AI systems or algorithmic decision-making tools, the case signals how this administration will use EO 14365 to increase federal scrutiny of state AI laws, and how states may respond. Organizations operating in or across multiple states should monitor this litigation, as it will affect compliance strategies, particularly where state AI laws impose obligations that differ from emerging federal enforcement priorities. The Womble Bond Dickinson Digital Solutions team will follow this litigation and related regulatory developments closely and provide regular updates.