How will the new Colorado AI Act impact the regulation landscape?

With enforcement set to begin in February 2026, the Colorado AI Act is the first comprehensive state-level AI governance law in the country, and other states are following suit. In 2023, legislators in 31 states introduced at least 191 AI-related bills, a 440% increase from 2022. While other states like California have struggled with resistance from tech industry lobbyists, Colorado was able to pass its Act, positioning the state as a leader in AI governance.


Scope of the Act

The Colorado Act targets “high-risk” AI systems that make “consequential decisions,” with the aim of preventing these systems from contributing to algorithmic discrimination.

“High-risk AI systems” are those that make, or are a substantial factor in making, consequential decisions in areas such as education, employment, finance, healthcare, housing, insurance, legal services, and essential government services.

“Consequential decisions” are those with a material legal or similarly significant effect on the provision or denial of services or opportunities, or on their cost and terms.

The law contains duties for both deployers and developers of high-risk AI systems:

  • Deployers: Businesses in Colorado that deploy or use a high-risk AI system.

  • Developers: Businesses in Colorado that develop or substantially modify a covered AI system.

Unlike state privacy laws, there is no threshold for the Colorado AI Act’s application; it applies regardless of the number of Colorado consumers impacted or the operating revenue of the business. 

However, some small businesses are exempt from certain deployer obligations if they meet specific criteria: fewer than 50 employees, use of high-risk AI systems only for their intended purposes, and no training of those systems with the business's own data.

The Colorado Attorney General has exclusive enforcement authority and can create additional rules to ensure compliance. Violations of the Act are treated as breaches of Colorado's consumer protection laws, with penalties of up to $20,000 per violation.


Duties on developers and deployers of high-risk systems

Deployers of high-risk AI systems must:

  • Implement and maintain comprehensive risk management policies.

  • Perform annual and modification-triggered impact assessments to evaluate algorithmic discrimination risks. 

  • Notify consumers about the use of high-risk AI systems in making consequential decisions.

  • Report any incidents of algorithmic discrimination to the Attorney General within 90 days of discovery.

  • Give consumers the opportunity to correct inaccurate personal data processed by the AI system, and provide a mechanism to appeal adverse consequential decisions.

Developers of high-risk AI systems must:

  • Protect consumers from known or foreseeable risks of algorithmic discrimination.

  • Provide detailed information about the AI system, including data types used for training, known limitations, intended uses, and risks of algorithmic discrimination.

  • Publish summaries of high-risk AI systems and their risk management measures.


How can businesses prepare for the Colorado AI Act?

First, businesses need to develop comprehensive documentation, compiling all necessary information about high-risk AI systems and ensuring it is readily available. Robust risk management programs are also crucial, and these should align with recognized frameworks such as the NIST AI Risk Management Framework.
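
For teams that track their systems in code, a lightweight inventory record can keep the required information in one place. The sketch below is a minimal Python illustration; the record fields, system name, and example values are our own assumptions, not terminology defined by the Act.

    # A minimal sketch of an internal record for one high-risk AI system.
    # Field names are illustrative, not mandated by the Act; they loosely
    # mirror the disclosure items developers must provide to deployers.
    from dataclasses import dataclass

    @dataclass
    class HighRiskSystemRecord:
        name: str
        intended_use: str
        training_data_types: list[str]   # categories of training data
        known_limitations: list[str]
        discrimination_risks: list[str]  # known or foreseeable risks
        risk_mitigations: list[str]      # maps to NIST AI RMF controls
        last_impact_assessment: str      # ISO date of most recent review

    # Hypothetical example record.
    record = HighRiskSystemRecord(
        name="loan-screening-v2",
        intended_use="Pre-screening consumer loan applications",
        training_data_types=["credit history", "income", "employment length"],
        known_limitations=["Sparse data for thin-file applicants"],
        discrimination_risks=["Proxy effects from location-based features"],
        risk_mitigations=["Quarterly fairness audit", "Human review of denials"],
        last_impact_assessment="2025-06-01",
    )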

Public disclosures are another important aspect. Businesses should draft and regularly update public statements about their high-risk AI systems and the measures in place for managing the associated risks. They should also prepare impact assessments for the high-risk AI systems they deploy, which may require requesting more detailed information from developers.

To comply with reporting obligations, businesses need reliable ways to detect algorithmic discrimination incidents. This may involve independent audits and external checks. Clear benchmarks and definitions of discriminatory outcomes are essential for effective implementation and compliance.
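
One widely used benchmark, borrowed from US employment-selection analysis rather than defined by the Act itself, is the "four-fifths rule": flag for review whenever the favorable-outcome rate for a protected group falls below 80% of the rate for a reference group. A minimal sketch, with illustrative data:

    # The 0.8 threshold below is a convention from employment-selection
    # analysis, not a standard set by the Colorado AI Act.

    def selection_rate(decisions: list[bool]) -> float:
        """Share of favorable outcomes in a group."""
        return sum(decisions) / len(decisions)

    def disparate_impact_ratio(protected: list[bool],
                               reference: list[bool]) -> float:
        """Protected group's selection rate relative to the reference group's."""
        return selection_rate(protected) / selection_rate(reference)

    # Hypothetical approval decisions (True = approved) for two groups.
    group_a = [True, True, False, True, False, True]    # reference group
    group_b = [True, False, False, False, True, False]  # protected group

    ratio = disparate_impact_ratio(group_b, group_a)
    if ratio < 0.8:
        print(f"Ratio {ratio:.2f} below 0.8 -- review as a potential incident")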

Lastly, businesses must establish processes to review each high-risk AI system for algorithmic discrimination at least once annually, or more frequently if there are significant changes in the system or its use. 
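
In code terms, that cadence reduces to a simple due-date check: a review is owed once a year has passed since the last one, or immediately after a substantial change. A minimal sketch, with illustrative dates:

    # Review cadence implied by the Act: at least annually, or sooner
    # after a substantial modification. Dates here are illustrative.
    from datetime import date, timedelta

    ANNUAL_INTERVAL = timedelta(days=365)

    def review_due(last_review: date, substantially_modified: bool,
                   today: date | None = None) -> bool:
        today = today or date.today()
        return substantially_modified or today - last_review >= ANNUAL_INTERVAL

    print(review_due(date(2025, 1, 15), substantially_modified=False,
                     today=date(2026, 2, 1)))   # True: over a year elapsed
    print(review_due(date(2025, 12, 1), substantially_modified=True,
                     today=date(2026, 2, 1)))   # True: modification trigger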


How does the Act compare to the EU AI Act?

International businesses operating in the EU will already have a head start in complying with the Colorado AI Act due to similarities in regulatory frameworks. However, there are also notable distinctions. 

For one, the qualification of high-risk AI systems differs. While both Acts cover areas like education, employment, financial services, and government services, the Colorado AI Act uniquely includes AI systems in housing and legal services. 

On the other hand, the EU AI Act encompasses additional areas such as biometrics, emotion recognition, law enforcement, migration and border control, democratic processes, and the administration of justice. 

The Colorado AI Act also places significant responsibilities on deployers of high-risk AI systems, while the EU AI Act primarily imposes risk-management requirements on providers rather than deployers. This means that businesses deploying AI in Colorado will need to adhere to stricter oversight and compliance measures.

The Colorado AI Act mandates transparency toward individuals and gives consumers the right to appeal adverse consequential decisions arising from the deployment of an AI system. The EU AI Act, by contrast, requires explanations of decisions based on high-risk AI outputs, and focuses on transparency from providers to deployers together with human oversight.


Will the Act stand the test of time?

Some stakeholders worry that the Act might stifle innovation, particularly for small businesses struggling to meet its stringent requirements. Governor Polis has not ignored these concerns; he has encouraged lawmakers to refine the bill before its implementation to strike a better balance between regulation and innovation. The Act is also designed with mechanisms to keep it relevant and flexible in the face of rapid technological change, including a commitment by the Colorado legislature to revisit the Act before its major provisions take effect in February 2026.

A dedicated task force will continue to study the Act and propose amendments, informed by consultations with expert committees and public stakeholders, to ensure the Act evolves in a balanced manner. Mechanisms such as regular impact assessments, incident reporting, and public disclosures will provide continuous feedback on the Act's effectiveness.

The Colorado Attorney General also has the authority to issue guidelines and regulations to clarify and enforce the Act’s provisions, adjusting these regulations to address emerging challenges and technologies.


The Colorado AI Act sets a new precedent in AI governance, emphasizing transparency, accountability, and the prevention of algorithmic discrimination. By understanding the Act's requirements and preparing early, businesses can achieve compliance while fostering trust and innovation in AI technologies. As the enforcement date approaches, proactive measures and strategic planning will be crucial for developers and deployers navigating this new regulatory landscape.


Need support to comply with the new AI Act?
