The recent failed push by members of Congress to freeze artificial intelligence laws at the state level has left U.S. companies with national operations facing a patchwork of rules on the technology.
A Colorado AI statute set to take effect next year stands out for its breadth, analysts say. The law requires businesses to adopt risk management programs for high-risk AI systems, including impact assessments, oversight processes and mitigation strategies.
“Colorado is the first and only state that has a comprehensive AI law… like what we’ve seen in the EU,” said Tyler Thompson, a Denver-based partner at Reed Smith, a global law firm headquartered in Pittsburgh. “A lot of other state laws in the U.S. are very niche,” he said.
During the final stages of the so-called One Big Beautiful Bill Act’s passage, the Senate stripped a provision that would have barred states from regulating AI for the next 10 years, opening the floodgates to state-level legislation, CFO Dive previously reported.
In contrast with Colorado’s approach, most state AI laws enacted to date have been piecemeal, analysts say. Existing regulations typically target specific sectors, such as healthcare, or particular AI uses, such as deepfakes, according to Thompson.
The Colorado legislation takes effect on Feb. 1, 2026, and companies need to plan for the compliance burden and the possibility that other states might follow with similar laws, Thompson said. Like the EU AI Act, the law applies to both developers and deployers of AI systems.
Under the Colorado law, “high-risk” AI systems (those used to make “consequential decisions” in areas such as education, employment, lending, healthcare and insurance) must be governed by formal risk management frameworks. Those frameworks must be disclosed to the attorney general and, in some cases, to consumers, particularly if the company learns that a high-risk AI system has caused algorithmic discrimination.
“This is a real lift. This is something that your compliance teams are not used to,” he said. “There’s a lot of requirements. If you [start] to try to get this in place in January, it’s just not going to work.”
Layered requirements for deployers and developers add complexity to compliance. For example, deployers must conduct impact assessments and notify consumers about the risk management practices tied to their use of AI. Meanwhile, developers must demonstrate how they address algorithmic discrimination and publish details about their systems and risk management approaches.
“If you trigger it, the compliance requirements are really high,” Thompson said. “There’s a lot of documentation, and a lot of things — whether you’re the developer or the deployer — that you have to do [in coordination with] that other side.”
The statute includes exemptions for small deployers, federally regulated AI systems, research activities and certain lower-risk AI technologies.
Thompson suggested using the National Institute of Standards and Technology (NIST) AI Risk Management Framework as a foundation for an AI compliance program and as a potential legal defense against enforcement actions under the Colorado AI Act.
The law doesn’t spell out monetary penalties, but states that violations are considered unfair trade practices under Colorado’s consumer protection laws. Under existing law, each violation of an unfair trade practice can carry a civil penalty of up to $20,000.
The legislation could still be amended before it takes effect, potentially narrowing its scope. According to Thompson, states may take one of two paths: continue regulating AI piecemeal, or follow Colorado’s lead by enacting comprehensive AI laws of their own. Some jurisdictions, such as New York or California, may develop their own broad frameworks, he said.
“[If] the Colorado legislature fixes the Colorado AI Act — either reduces the burdens or, at least, begins to add more clarity and color on it, [making] it more usable and attractive… then it becomes, I think, what Colorado is hoping for: this becomes the model,” he said. “This is going to start the wave, and a lot of other states could consider comprehensive AI legislation.”