We have tracked every state's response to the NAIC AI Model Bulletin since its December 2023 adoption, and the pattern has been consistent: states start broad, the industry pushes back, and the resulting law is narrower than the original draft. Colorado just compressed that cycle into less than two years. On May 9, 2026, the Colorado General Assembly passed SB 26-189 with votes of 57-6 in the House and 34-1 in the Senate, gutting the mandatory bias audit requirements from its landmark 2024 AI law (SB 24-205) and replacing them with a transparency and disclosure framework built around "covered ADMT" (Automated Decision-Making Technology). Governor Polis has confirmed he will sign the bill.
For insurers, the rewrite creates a compliance picture that is simultaneously simpler and more confusing. The general AI law's most burdensome requirements, including pre-deployment bias audits, algorithmic governance programs, and risk impact assessments, are gone. But the insurance-specific regulations under the Colorado Division of Insurance, including the SB 21-169 quantitative testing framework and Regulation 10-1-1 governance requirements, remain fully in force. The July 1, 2026 annual compliance report deadline for auto and health benefit plan insurers has not moved. Carriers now face a dual-track regulatory environment where the general AI statute asks less and the insurance-specific rules continue to ask for everything the original law would have demanded.
What SB 24-205 Originally Required
To understand what Colorado just abandoned, recall what the original law put on the table. SB 24-205, signed in May 2024, was the first US state law to impose comprehensive, risk-based AI governance obligations on private sector developers and deployers of "high-risk artificial intelligence systems." The statute covered any AI system that was a substantial factor in making a consequential decision concerning consumers across employment, housing, lending, insurance, healthcare, education, and government services.
For deployers (the category that captures most insurers), SB 24-205 required:
- A documented risk management policy and program: an iterative process identifying, documenting, and mitigating known or reasonably foreseeable risks of algorithmic discrimination, specifying the principles, processes, and personnel involved.
- Annual impact assessments for each high-risk AI system, plus updates within 90 days of any substantial modification, covering purpose, benefits, data categories, outputs, and misuse analysis.
- Pre-decision consumer notices at or before the time a consequential decision was made, describing the system's purpose and the decision's nature.
- A public statement describing the categories of high-risk AI systems in use and how the deployer manages risks.
- 90-day disclosure to the Attorney General upon discovering algorithmic discrimination.
For developers (including AI vendors), SB 24-205 imposed parallel obligations: bias testing documentation, known risk disclosures, and support for deployers' impact assessments. Critically, the law included a carve-out exempting otherwise-discriminatory algorithms designed to advance "diversity" or to "redress historic discrimination," a provision that drew a federal constitutional challenge.
The statute also contained an insurance safe harbor under §10-3-1104.9 of the Colorado Revised Statutes, exempting insurers from CAIA deployer obligations where they were already complying with DOI rules governing external consumer data, algorithms, and predictive models.
What SB 26-189 Replaces It With
SB 26-189 repeals SB 24-205 in its entirety and enacts a fundamentally different regulatory model. The shift moves from a preventive framework (audit and assess before you deploy) to a reactive framework (deploy, then explain when things go wrong). The new law centers on "covered ADMT," a narrower concept than the original "high-risk AI system," defined as technology that processes personal data and uses computation to generate predictions, recommendations, classifications, rankings, scores, or other outputs used to "materially influence" a "consequential decision."
The covered domains remain broadly the same: employment, education, housing, financial services, insurance, healthcare, residential real estate, and essential government services. But the obligations are structurally different:
| Requirement | SB 24-205 (Repealed) | SB 26-189 (New Law) |
|---|---|---|
| Pre-deployment bias audits | Mandatory | Eliminated |
| Risk impact assessments | Annual, plus 90-day update after modification | Eliminated |
| Governance framework | Documented risk management program required | No standalone governance requirement |
| Consumer notice | Pre-decision, at or before consequential decision | Point-of-interaction notice that ADMT is in use |
| Post-adverse outcome disclosure | Not specifically required as standalone | Within 30 days: plain-language explanation of ADMT's role, data types used, consumer rights |
| Human review | Appeal right where technically feasible | "Meaningful human review and reconsideration, to the extent commercially reasonable" |
| Data correction rights | Limited | Right to request personal data used and correct inaccuracies |
| Recordkeeping | Impact assessments retained | Three-year retention of compliance records |
| Enforcement | AG enforcement plus private right of action | AG enforcement only; no private right of action |
| Cure period | Limited | 60-day notice and cure (expires January 1, 2030) |
| DEI carve-out | Exempted diversity-promoting algorithms | Removed entirely |
| Federal entity exemption | Conditional exemptions for some regulated entities | Eliminated: broader coverage |
| Effective date | June 30, 2026 (after delay) | January 1, 2027 |
Developer Obligations Under the New Framework
SB 26-189 preserves a developer tier, but the obligations are disclosure-oriented rather than governance-oriented. Starting January 1, 2027, developers of covered ADMT must provide deployers with:
- Technical documentation covering intended uses and known harmful uses
- Categories of training data employed
- Known limitations of the system
- Guidance on appropriate use and human review processes
- Notice of material product updates
The practical effect for AI vendors serving insurers is that vendor documentation requirements survive the rewrite, even as the requirement for vendors to conduct their own bias audits disappears. Vendors must still produce the technical artifacts that deployers need for compliance. The difference is that those artifacts are disclosure documents rather than audit workpapers.
A significant new provision targets vendor contracts directly. Under SB 26-189, any contract clause that purports to indemnify, defend, or hold harmless a party from liability for its own discriminatory use of ADMT in consequential decisions is void as against public policy. This renders unenforceable many standard AI vendor indemnification clauses covering consequential decisions affecting Colorado consumers. The exception is narrow: developers are not liable where the deployer used the technology outside its documented intended use and the developer met its documentation requirements. Carriers should expect contract renegotiation cycles with AI vendors before January 2027.
Why the Legislature Retreated From Audits
The rewrite did not happen in a vacuum. Three forces converged to push the legislature away from the mandatory audit model.
Industry pushback on compliance costs. Governor Polis formed a working group in August 2025 comprising representatives from the tech industry, insurers, labor organizations, and civil rights groups. The Colorado Sun reported that the working group spent six months and hundreds of hours negotiating the replacement framework. The Colorado Technology Association, whose members include carriers and insurtechs operating in the state, emphasized that the original law's audit and governance requirements would drive AI development investment out of state. Senate Majority Leader Rodriguez characterized the final product as a compromise: "Everybody lost and everybody won."
The xAI litigation and DOJ intervention. On April 9, 2026, Elon Musk's xAI filed suit in US District Court for the District of Colorado (xAI LLC v. Weiser, No. 1:26-cv-01515) seeking to enjoin enforcement of SB 24-205. The Department of Justice intervened on April 24, arguing that the law's requirement to prevent "algorithmic discrimination" effectively compelled race- and sex-conscious engineering in violation of the Equal Protection Clause. On April 27, the court granted a joint motion temporarily suspending enforcement of the original law. The DEI carve-out that the DOJ specifically challenged, which exempted algorithms designed to advance diversity, is absent from SB 26-189. The lawsuit's practical effect was to accelerate the legislative timeline for the rewrite that was already in progress through the Governor's working group.
Technical feasibility concerns. The original law's bias audit requirements assumed a level of standardized audit methodology that does not yet exist. No consensus framework for auditing insurance AI systems had emerged by 2026. The Consumer Finance Monitor analysis noted that the move away from audits reflects recognition that prescribing audit procedures for rapidly evolving AI systems creates compliance obligations that become obsolete faster than regulators can update them.
The Dual-Track Problem: Insurance-Specific Rules Remain in Force
This is where the actuarial compliance picture gets complicated. SB 26-189 softens the general AI statute, but it does not touch the insurance-specific regulatory framework that Colorado has been building since 2021 under a separate statutory authority.
The Colorado Division of Insurance has its own AI governance track, rooted in SB 21-169 (2021) and implemented through amended Regulation 10-1-1. This framework requires:
- Board-level oversight of a risk management framework for external consumer data and information sources (ECDIS) and algorithms and predictive models (APMs)
- Senior management accountability with a designated responsible officer
- Cross-functional governance groups with legal and compliance representation
- Written policies and procedures for design, development, testing, deployment, and monitoring of ECDIS and algorithms
- Quantitative testing for unfair discrimination with respect to race, with ongoing model drift monitoring (a minimal drift-check sketch follows this list)
- Complete ECDIS inventory with version control
- Consumer complaint protocols enabling meaningful action on adverse decisions
- Third-party vendor selection documentation
- Annual governance structure review
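To make the drift-monitoring item concrete, here is a minimal sketch of a population stability index (PSI) check comparing a model's score distribution at approval against current production scores. The 0.25 threshold is a common industry rule of thumb and the score data is synthetic; neither is prescribed by Reg 10-1-1.

```python
import numpy as np

def population_stability_index(expected, actual, bins=10):
    """PSI between a baseline score distribution (e.g., at model approval)
    and current production scores. PSI > 0.25 is a conventional flag for
    material drift; the threshold is an illustrative assumption, not a
    Reg 10-1-1 requirement."""
    # Bin edges come from the baseline distribution so both samples share bins
    edges = np.percentile(expected, np.linspace(0, 100, bins + 1))
    edges[0], edges[-1] = -np.inf, np.inf  # capture tail values

    exp_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    act_pct = np.histogram(actual, bins=edges)[0] / len(actual)

    # Guard against log(0) / division by zero in sparse bins
    exp_pct = np.clip(exp_pct, 1e-6, None)
    act_pct = np.clip(act_pct, 1e-6, None)

    return float(np.sum((act_pct - exp_pct) * np.log(act_pct / exp_pct)))

# Illustrative usage: scores captured at approval vs. the current quarter
baseline_scores = np.random.default_rng(0).normal(600, 50, 10_000)
current_scores = np.random.default_rng(1).normal(615, 55, 10_000)
psi = population_stability_index(baseline_scores, current_scores)
if psi > 0.25:
    print(f"PSI {psi:.3f}: material drift -- escalate to governance group")
```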
The September 2025 amendment to Regulation 10-1-1 expanded its reach to include private passenger auto and health benefit plan insurers, joining life insurers, which were already covered. The Debevoise analysis of the expansion noted that auto and health insurers had to submit interim progress reports by December 1, 2025, with full annual compliance reports due July 1, 2026, and annually thereafter.
The critical observation is that Regulation 10-1-1 requires exactly the governance framework, the bias testing, and the impact assessment infrastructure that SB 26-189 has just eliminated from the general AI law. Colorado insurers are not getting relief from these obligations. They are getting relief from duplicative obligations under a second, overlapping statute. The practical burden remains the same for any insurer that was already building a Reg 10-1-1 compliance program.
The Insurance Safe Harbor Survives the Rewrite
SB 26-189 preserves the §10-3-1104.9 insurance safe harbor. Insurers and affiliated entities subject to §10-3-1104.9 are deemed in compliance with SB 26-189 for the practice of insurance. This is a meaningful carryover from the original law: carriers that comply with DOI rules governing ECDIS and algorithms do not owe separate ADMT compliance obligations under the new general statute for their insurance activities.
The safe harbor has the same structural limits it had under SB 24-205. It covers insurance practices regulated by the DOI. It does not cover:
- Insurer employment decisions: AI used in hiring, compensation, or workforce management at insurance carriers falls under SB 26-189 without safe harbor protection, even though the insurer itself is inside the safe harbor for insurance activities.
- Affiliated non-insurer entities: MGAs, TPAs, data analytics subsidiaries, and captive service companies that deploy AI in connection with insurance activities but are not themselves licensed insurers remain independent deployers under SB 26-189.
- AI use cases outside DOI rulemaking: Fraud triage, claims workflow automation, marketing segmentation, and agent productivity tools that fall outside the scope of Reg 10-1-1 sit outside the safe harbor regardless of the carrier's compliance posture for core underwriting and pricing.
For actuaries and compliance officers, the safe harbor analysis under SB 26-189 is identical to the analysis under SB 24-205. The relevant question remains: does the DOI have a rule governing this specific AI use case? If yes, the safe harbor applies. If no, SB 26-189's ADMT requirements kick in, but those requirements are now disclosure and notification obligations rather than governance and audit obligations.
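That two-question analysis can be expressed as a simple triage routine. A minimal sketch, assuming boolean answers that in practice come from legal analysis of the specific use case, not from code:

```python
from dataclasses import dataclass

@dataclass
class AIUseCase:
    name: str
    is_insurance_practice: bool   # regulated by the Colorado DOI?
    covered_by_doi_rule: bool     # within Reg 10-1-1 / SB 21-169 scope?

def applicable_track(use_case: AIUseCase) -> str:
    """Illustrative triage of the safe-harbor question described above;
    a simplification for inventory tagging, not a substitute for counsel."""
    if use_case.is_insurance_practice and use_case.covered_by_doi_rule:
        return "Safe harbor: comply via DOI track (Reg 10-1-1 / SB 21-169)"
    return "SB 26-189 ADMT track: notice, disclosure, human review"

print(applicable_track(AIUseCase("automated underwriting", True, True)))
print(applicable_track(AIUseCase("claims fraud triage", True, False)))
print(applicable_track(AIUseCase("HR resume screening", False, False)))
```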
Comparing Colorado's Pivot to the NAIC and EU Frameworks
Colorado's shift from audits to disclosure positions it at a distinct point on the AI regulation spectrum. Three reference frameworks help contextualize where SB 26-189 lands.
| Framework | Accountability Model | Insurance-Specific Provisions | Status |
|---|---|---|---|
| Colorado SB 26-189 | Reactive: disclose after adverse outcomes; no mandatory pre-deployment audits | Safe harbor for DOI-regulated insurers; DOI bias testing rules remain independent | Passed May 9, 2026; effective January 1, 2027 |
| NAIC Model Bulletin + Evaluation Tool | Guidance-based governance expectations; 12-state evaluation pilot running March-September 2026 | Adopted by 23 states plus DC; insurance-specific; model cards and AI inventory documentation | Pilot through September 2026; possible model law transition at Fall National Meeting |
| EU AI Act | Preventive: mandatory conformity assessments, risk management systems, and bias testing before deployment of high-risk AI | Life and health insurance AI classified as high-risk under Annex III; EIOPA mandates two-step impact assessments | High-risk provisions applicable August 2, 2026 |
| Texas TRAIGA | Transparency-focused with prohibited use categories; no mandatory audits | Covers AI in insurance through broad deployer scope | Signed June 2025; effective January 1, 2026 |
| Connecticut SB 5 | Developer-deployer responsibility allocation; employment-focused | Narrower scope than Colorado; primarily employment AI decisions | Passed May 2026; key provisions effective October 1, 2026 |
The NAIC trajectory is particularly relevant. The NAIC's own deliberations about whether to convert its non-binding Model Bulletin into an enforceable model law parallel Colorado's struggle with the right level of regulatory prescription. The 12-state evaluation tool pilot running through September 2026 across California, Colorado, Connecticut, Florida, Iowa, Louisiana, Maryland, Pennsylvania, Rhode Island, Vermont, Virginia, and Wisconsin represents the NAIC's attempt to build an evidence base before prescribing governance requirements. Colorado's retreat from mandatory audits may influence the NAIC's trajectory, particularly if other states interpret the rewrite as a signal that the audit model is premature for AI systems that evolve faster than audit methodologies can mature.
The EU AI Act sits at the other end of the spectrum. Its mandatory conformity assessment requirements for high-risk AI systems in insurance (applicable August 2, 2026) represent exactly the preventive model that Colorado just abandoned. Multinational carriers operating in both jurisdictions face the ironic situation of building EU-required pre-deployment audit infrastructure that Colorado no longer requires under its general statute, while maintaining Colorado DOI-specific bias testing that the EU framework may or may not credit as equivalent.
What the Shift From Audits to Disclosure Means for Actuaries
The philosophical pivot in SB 26-189 has direct implications for actuarial work. Under SB 24-205, actuaries were positioned as the natural owners of bias audit methodology, statistical testing design, and impact assessment quantification. Patterns we have seen in other states show that actuarial teams were staffing up to build internal audit capabilities for algorithmic discrimination testing: four-fifths rule analyses, proxy variable audits, intersectional testing, and counterfactual analysis. Colorado was the forcing function for that investment.
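Of those tests, the four-fifths rule is the most mechanical. A minimal sketch using pandas, with hypothetical column names and synthetic approval data; the 0.8 threshold follows the EEOC convention rather than any Colorado-specific standard:

```python
import pandas as pd

def adverse_impact_ratio(df: pd.DataFrame, group_col: str,
                         favorable_col: str, reference_group: str) -> pd.Series:
    """Four-fifths (80%) rule: each group's favorable-outcome rate divided
    by the reference group's rate. Ratios below 0.8 flag potential disparate
    impact under the EEOC convention; applying that convention to insurance
    outcomes is an analytical choice, not a Reg 10-1-1 mandate."""
    rates = df.groupby(group_col)[favorable_col].mean()
    return rates / rates[reference_group]

# Illustrative data: hypothetical underwriting approvals by group
decisions = pd.DataFrame({
    "group":    ["A"] * 200 + ["B"] * 200,
    "approved": [1] * 160 + [0] * 40 + [1] * 110 + [0] * 90,
})
ratios = adverse_impact_ratio(decisions, "group", "approved", reference_group="A")
print(ratios)                 # group B: 0.55 / 0.80 = 0.6875
print(ratios[ratios < 0.8])   # groups failing the four-fifths screen
```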
Under SB 26-189, the general statute no longer requires those analyses. The actuarial work shifts toward:
- Post-decision documentation: When an adverse outcome triggers the 30-day disclosure obligation, someone needs to produce the plain-language explanation of ADMT's role and the data types involved. Actuaries who understand the model's mechanics are best positioned to write explanations that are both accurate and comprehensible.
- ADMT inventory and classification: Determining which AI systems "materially influence" a "consequential decision" requires precisely the kind of model impact analysis that actuaries already perform for rate filing support. The Airia governance analysis recommends maintaining a live inventory of every deployed AI model organized by application domain, risk classification tier, and deployment status (a minimal inventory sketch follows this list).
- Human review workflow design: SB 26-189 requires "meaningful human review and reconsideration, to the extent commercially reasonable." The qualifier "commercially reasonable" will be tested through enforcement actions and litigation. Actuaries can contribute to defining what constitutes meaningful review for algorithmic underwriting and claims decisions, including documentation that human reviewers examined primary evidence and did not simply default to system recommendations.
- Vendor documentation alignment: Matching the developer's documented intended use against the carrier's actual deployment use cases is an exercise in model validation. Discrepancies create liability exposure: deployers using ADMT outside documented scope bear full liability for discriminatory outcomes, while deployers operating within documented scope share liability with the developer.
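A minimal sketch of what one inventory entry might look like, combining the classification question with the scope-matching check from the last bullet. The field names and tier labels are illustrative assumptions, not statutory terms:

```python
from dataclasses import dataclass, field
from datetime import date
from enum import Enum

class DeploymentStatus(Enum):
    DEVELOPMENT = "development"
    PRODUCTION = "production"
    RETIRED = "retired"

@dataclass
class ADMTInventoryEntry:
    """One row of a live ADMT inventory, organized along the dimensions
    named above (domain, risk tier, status)."""
    model_id: str
    version: str
    application_domain: str               # e.g., underwriting, claims, marketing
    risk_tier: str                        # internal classification, e.g., "high"
    status: DeploymentStatus
    materially_influences_decision: bool  # the SB 26-189 coverage question
    vendor_intended_use: str              # from developer documentation
    actual_use: str                       # deployed use case, for scope matching
    last_reviewed: date = field(default_factory=date.today)

    def coverage_flags(self) -> list[str]:
        flags = []
        if self.materially_influences_decision and self.status is DeploymentStatus.PRODUCTION:
            flags.append("covered ADMT: disclosure obligations apply")
        if self.actual_use != self.vendor_intended_use:
            flags.append("scope mismatch: deployer bears full liability")
        return flags

entry = ADMTInventoryEntry(
    model_id="uw-score", version="2.3.1",
    application_domain="underwriting", risk_tier="high",
    status=DeploymentStatus.PRODUCTION,
    materially_influences_decision=True,
    vendor_intended_use="life underwriting triage",
    actual_use="life underwriting triage",
)
print(entry.coverage_flags())
```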
Importantly, none of this eliminates the actuarial bias testing work that Reg 10-1-1 requires for insurance-specific AI. That work continues on its own track. The shift under SB 26-189 means that the general-statute overlay is thinner, not that the insurance-specific obligations have changed.
The 30-Day Adverse Outcome Clock: Operational Challenges
The 30-day disclosure window in SB 26-189 creates operational demands that deserve careful attention from carrier compliance teams. When a covered ADMT materially influences a consequential decision resulting in an adverse outcome, the deployer must provide the affected consumer with:
- A plain-language description of the ADMT's role in the decision
- The types and sources of data used
- A description of the consumer's rights, including access to human review
The clock starts when the decision is made, not when the carrier discovers the adverse outcome. For insurance, adverse outcomes in underwriting (declination, substandard rating, exclusions), claims (partial denial, delayed payment), and coverage (non-renewal, cancellation) each present different identification challenges. A declination letter sent by an automated underwriting system starts the 30-day clock at the time of issuance. A claims adjudication that relies on AI triage scoring starts the clock at the time of the adjudication decision, even if the consumer does not receive the adverse determination for several days.
Carriers with legacy claims systems that batch-process adverse determinations face a particular compression problem. If the system generates 500 adverse claim decisions on a Friday afternoon, the 30-day clock starts for all 500 simultaneously. The disclosure system must be able to produce individualized explanations that reference the specific ADMT version active at the time of each decision, linked to the data inputs for that particular consumer.
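A minimal sketch of the deadline arithmetic for such a batch, with each decision linked to the ADMT version active at the time. Calendar-day counting from the decision date is our assumption; the AG's rulemaking will settle the exact counting rules:

```python
from dataclasses import dataclass
from datetime import date, timedelta

DISCLOSURE_WINDOW = timedelta(days=30)  # SB 26-189 adverse-outcome window

@dataclass
class AdverseDecision:
    consumer_id: str
    decision_date: date   # the clock starts here, not at discovery or receipt
    admt_version: str     # version active at the time of the decision
    decision_type: str    # e.g., declination, partial denial, non-renewal

def disclosure_deadline(decision: AdverseDecision) -> date:
    """Deadline measured from the decision itself; calendar-day counting
    is an assumption pending AG rulemaking."""
    return decision.decision_date + DISCLOSURE_WINDOW

# A Friday batch run: every decision in the batch shares the same deadline
batch = [
    AdverseDecision(f"C{i:04d}", date(2027, 3, 5), "claims-triage v4.2",
                    "partial denial")
    for i in range(500)
]
due = disclosure_deadline(batch[0])
at_risk = [d for d in batch if disclosure_deadline(d) <= date(2027, 4, 4)]
print(f"{len(batch)} disclosures due by {due}; {len(at_risk)} at risk")
```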
AG Rulemaking and the January 2027 Implementation Window
SB 26-189 requires the Colorado Attorney General to complete mandatory rulemaking by January 1, 2027, the same date the law takes effect. The rulemaking will fill in critical implementation details that the statute leaves open, including the specific form and content requirements for consumer notices, the standards for "meaningful human review," and the procedures for AG enforcement.
The 60-day cure period (available through January 1, 2030) provides a limited enforcement buffer. For the first three years, the AG must provide 60 days' notice and an opportunity to cure before bringing an enforcement action, except for knowing or repeated violations. After 2030, that buffer disappears. Violations of SB 26-189's developer or deployer requirements are classified as deceptive trade practices under the Colorado Consumer Protection Act, carrying the standard civil penalties available under that statute.
Carriers should not treat the cure period as an extended runway. The Fisher Phillips analysis emphasizes that the cure period applies to the AG's enforcement action, not to the underlying violation. Consumer complaints, regulatory examinations, and DOI coordination can all surface non-compliance before the AG acts. The reputational and examination risk materializes well before any formal enforcement proceeding.
Whether Other States Will Follow Colorado's Pivot
Colorado's retreat from mandatory audits is being watched across every state capitol with pending AI legislation. The signaling effect is substantial: if the first state to pass a comprehensive AI bias law concluded within two years that mandatory audits were unworkable, other legislatures may skip the audit phase entirely.
Texas enacted the Responsible Artificial Intelligence Governance Act (TRAIGA) in June 2025, which took a transparency-focused approach from the start without ever requiring bias audits. Connecticut's SB 5, passed the same week as Colorado's SB 26-189, adopted a developer-deployer framework for employment AI decisions without mandatory pre-deployment audits. Illinois amended its Human Rights Act effective January 1, 2026, to prohibit AI-driven employment discrimination, but relied on existing anti-discrimination enforcement mechanisms rather than creating a standalone audit requirement.
The emerging consensus at the state level appears to be settling on transparency, disclosure, and existing anti-discrimination law enforcement rather than purpose-built algorithmic audit regimes. For insurers, this means the NAIC framework and state insurance department rulemaking (like Colorado's Reg 10-1-1) will likely remain the primary source of prescriptive bias testing requirements, while general AI statutes provide an overlay of consumer notice and explanation rights.
The NAIC's proposed four-tier AI risk taxonomy from the Spring 2026 National Meeting signals where prescriptive governance requirements may eventually land at the insurance-specific level. But that framework is still in exposure draft. For now, the NAIC Model Bulletin remains guidance-based, and individual state insurance departments like Colorado's DOI are the source of enforceable, insurance-specific AI governance requirements.
What Compliance Officers Should Do Before January 2027
The Airia compliance roadmap breaks the preparation into three priority tiers:
Priority 1 (Foundation, Q3-Q4 2026): Audit all current AI deployments to identify which systems qualify as covered ADMT. Document vendor-provided intended use statements for each system. Map actual deployment use cases against documented scope and flag misalignments for contract renegotiation. Review all AI vendor agreements for indemnification clauses that SB 26-189 renders void.
Priority 2 (Infrastructure, Q4 2026-Q1 2027): Implement decision tracking that links specific ADMT versions to individual consequential decisions. Build adverse outcome flagging mechanisms tied to real-time decision data. Create automated disclosure generation templates using version-specific documentation. Establish human review workflow tools with override logging that demonstrates reviewers did not default to system output.
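A minimal sketch of one override-logging record for the human review workflow described above; the field names are assumptions, but the design point is capturing the evidence a reviewer actually examined rather than a bare approve/deny flag:

```python
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
import json

@dataclass
class HumanReviewRecord:
    """One audit-trail entry supporting SB 26-189's human-review expectation
    and the three-year retention requirement (storage plumbing not shown)."""
    decision_id: str
    admt_version: str
    system_recommendation: str   # what the model proposed
    reviewer_id: str
    evidence_reviewed: list[str] # primary documents the reviewer opened
    final_decision: str
    rationale: str               # required whether reviewer agrees or overrides
    reviewed_at: str

record = HumanReviewRecord(
    decision_id="D-88214",
    admt_version="claims-triage v4.2",
    system_recommendation="deny",
    reviewer_id="adj-107",
    evidence_reviewed=["police report", "repair estimate"],
    final_decision="approve",    # an override, logged with its reason
    rationale="Estimate consistent with reported damage; triage score "
              "driven by a stale prior-claim flag.",
    reviewed_at=datetime.now(timezone.utc).isoformat(),
)
# Append-only JSON lines make a simple, reviewable trail
print(json.dumps(asdict(record)))
```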
Priority 3 (Operational, Before January 1, 2027): Train human reviewers on domain-specific review standards. Test the 30-day disclosure timeline end-to-end. Validate that recordkeeping systems capture three-year audit trails. Document internal governance procedures, even though SB 26-189 does not mandate a standalone governance program; these will remain relevant for the parallel DOI compliance track.
For carriers already building Reg 10-1-1 compliance programs, much of this infrastructure overlaps with existing work. The marginal effort is in the consumer-facing disclosure layer (30-day adverse outcome notifications, human review request processing, data correction workflows) rather than the internal governance and testing layer that Reg 10-1-1 already covers.
The Bigger Picture: A Weaker or Different Accountability Model?
Whether SB 26-189 represents weaker accountability or simply a different theory of accountability depends on which failure mode you are most concerned about.
The audit model assumed that the right intervention point was before deployment. Test for bias, assess for risk, fix the problems, then go live. The weakness of this model, as Colorado's experience demonstrated, is that audit methodology for rapidly evolving AI systems is immature, audit costs are substantial and poorly understood, and prescriptive audit requirements become outdated before the next legislative session.
The disclosure model assumes that the right intervention point is after an adverse outcome occurs. Tell the consumer what happened, explain the technology's role, give them access to human review, and let them correct bad data. The weakness of this model is that disclosure comes after the harm has already occurred. A consumer who was denied insurance coverage based on an ADMT recommendation learns about the technology's involvement only after the denial, at which point the harm (delayed coverage, missed enrollment periods, alternative market pricing) may already be irreversible.
Having tracked the actuarial profession's engagement with algorithmic fairness standards over the past three years, we think the most productive framework combines elements of both. Pre-deployment testing and governance (what Reg 10-1-1 already requires for insurance) addresses the systematic, population-level bias risks. Post-deployment disclosure and human review (what SB 26-189 now requires for all covered ADMT) addresses the individual consumer harm that slips through population-level testing. Colorado's dual-track structure, where the insurance regulator retains the audit model while the general statute moves to disclosure, may be an accidental version of this combined approach.
The question for other states is whether they will preserve their own insurance-specific audit tracks as they adopt Colorado-style general ADMT transparency laws, or whether the general statute's pivot to disclosure will pull the insurance-specific requirements in the same direction. The answer will shape the actuarial compliance landscape for years.
Sources
- Colorado General Assembly, SB 26-189: Automated Decision-Making Technology (passed May 9, 2026)
- Colorado General Assembly, SB 24-205: Consumer Protections for Artificial Intelligence (enacted May 2024)
- Consumer Finance Monitor, "Colorado Rewrites Its Landmark AI Law: Unpacking SB 26-189" (May 12, 2026)
- Fisher Phillips, "Colorado Moves to Replace AI Bias Audit Law With New Transparency Framework" (May 2026)
- Airia, "Colorado Rewrote Its AI Law. Here's What Governance Practitioners Need to Do Before January 2027" (May 2026)
- Colorado Newsline, "New Bill Would Narrow Scope of Colorado's Landmark 2024 AI Law" (May 4, 2026)
- Colorado Sun, "Colorado's Fierce Two-Year Fight Over AI Regulation Ends With Watered-Down Law" (May 12, 2026)
- Baker Botts, "Colorado Repeals and Replaces AI Act" (May 2026)
- Reed Smith, "SB 26-189: Colorado Legislature Kicks Off CAIA Rewrite Race" (May 2026)
- Faegre Drinker, "Colorado Division of Insurance Expands AI-Related Governance and Risk Management Obligations for Insurers" (September 2025)
- US Department of Justice, "Justice Department Intervenes in xAI Lawsuit Challenging Colorado's 'Algorithmic Discrimination' Law" (April 2026)
- NAIC, Artificial Intelligence Insurance Topics Page
- Colorado Division of Insurance, SB 21-169 External Data and Algorithms Page
- Debevoise, "Colorado Approves Extension of AI Regulation to Health and Auto Insurers" (September 2025)
- CPR News, "Polis Says He Will Sign Pared Down AI Bill That Passed Overnight" (May 12, 2026)
Further Reading
- Colorado AI Act: 73 Days Until the June 30 Insurance Deadline: Our April 2026 analysis of the original SB 24-205 compliance framework, now superseded by SB 26-189 for the general statute but still relevant for understanding the DOI safe harbor mechanics.
- Colorado Insurance Bias Audits: The July 1 Deadline Stands: Why the insurance-specific Regulation 10-1-1 bias audit requirements remain fully in force regardless of SB 26-189, with the four-part testing methodology and compliance roadmap.
- NAIC Weighs Jump From AI Bulletin to Enforceable Model Law: The 33 RFI comment letters and fault lines around scope, vendor liability, and company-size thresholds shaping the national AI regulatory trajectory for insurers.
- NAIC AI Evaluation Pilot Launches Amid Industry Pushback: The 12-state pilot framework and how Colorado's legislative pivot may influence the NAIC's decision on whether to codify evaluation requirements.
- The AI Governance Gap in Actuarial Practice: Where ASOP No. 56 meets AI systems and where actuarial standards fall short of statutory requirements.