A review of the compliance checklists that Colorado auto and health benefit plan insurers must file by July 1, 2026 makes one pattern clear: most carriers have the bias testing technology in place but lack the documentation rigor that Regulation 10-1-1 demands. The federal headlines have been dominated by xAI's constitutional challenge to Colorado's broader AI Act (SB 24-205), which secured a temporary restraining order on April 27 blocking enforcement of the statute. But that litigation does not reach the insurance-specific layer. Colorado SB 21-169 and the Division of Insurance's implementing Regulation 10-1-1 operate on a separate regulatory track, enforced by the DOI rather than the Attorney General, and the July 1 annual compliance report deadline for auto and health insurers remains fully operative. With 50 days left, actuaries building or validating models that touch external consumer data need to understand both what the bias audit requires and which regulatory framework actually governs their work.

The Two-Track Regulatory Structure

Colorado's approach to regulating AI in insurance runs on two separate statutory tracks, and conflating them is the most common compliance error carriers make. The first track is SB 24-205, the Consumer Protections for Artificial Intelligence Act signed by Governor Jared Polis in May 2024. This is the broad, economy-wide statute that applies to any deployer of "high-risk AI systems" making "consequential decisions" in areas including insurance, employment, lending, housing, and healthcare. The second track is SB 21-169, signed in July 2021, which created insurance-specific requirements for the use of External Consumer Data and Information Sources (ECDIS) in underwriting, rating, and claims decisions. The Division of Insurance implemented SB 21-169 through Regulation 10-1-1, which first took effect for life insurers on November 14, 2023, and was extended to private passenger auto and health benefit plan insurers through amendments adopted in September 2025 that took effect on October 15, 2025.

The critical distinction: SB 24-205 is enforced by the Colorado Attorney General. SB 21-169 and Regulation 10-1-1 are enforced by the Commissioner of Insurance through the Division of Insurance. The xAI lawsuit, the federal TRO, and the pending replacement bill (SB 26-189) all target SB 24-205 exclusively. The insurance-specific requirements under SB 21-169 were never part of the federal litigation, have not been stayed, and continue to operate on their original compliance calendar. Carriers that relaxed their compliance timelines in response to the xAI headlines may be miscalculating their exposure.

SB 24-205 does contain a safe harbor provision at Section 10-3-1104.9 for insurers already complying with DOI rules governing algorithms and predictive models. In our earlier analysis of the June 30 deadline, we mapped where that safe harbor holds and where it breaks for affiliated non-insurer entities and third-party AI vendors. With SB 24-205 now blocked, the safe harbor question is temporarily moot, but the underlying DOI compliance obligations it references are very much alive.

What the xAI Lawsuit Changed, and What It Did Not

On April 9, 2026, xAI (Elon Musk's AI company) filed suit in the U.S. District Court for the District of Colorado, Case No. 1:26-cv-01515, challenging SB 24-205 on six constitutional grounds. The complaint alleged First Amendment violations through compelled speech and content discrimination, arguing the law would force its Grok model to "abandon its disinterested pursuit of truth." It also raised Commerce Clause, Due Process (vagueness), and Equal Protection claims, contending that the statute's exemption for discrimination intended to "increase diversity or redress historical discrimination" itself constitutes compelled discrimination.

On April 24, the Department of Justice intervened in the case, marking the first time the federal government sought to invalidate a state AI law under Executive Order 14365. The DOJ's filing argued that the law would "require AI companies to infect their products with woke DEI ideology" and that "America's success in the AI race will depend on removing barriers to innovation and adoption across sectors." Colorado Attorney General Phil Weiser agreed to an enforcement stay, and on April 27, Magistrate Judge Cyrus Y. Chung issued a TRO blocking enforcement "until 14 days after the date the Court issues a ruling on xAI's forthcoming motion for a preliminary injunction."

For insurance carriers, the operative question is scope. The TRO applies to SB 24-205, the broad AI Act. It does not apply to SB 21-169 or Regulation 10-1-1. The DOI's authority to require bias testing documentation, ECDIS governance programs, and annual compliance reports derives from separate statutory authority. No party has challenged SB 21-169 in court, and the DOI has not indicated any intention to delay its July 1 compliance report deadline. Carriers that treat the TRO as a blanket reprieve across all Colorado AI requirements are misreading the regulatory landscape.

The Four-Part Bias Audit Methodology

Regulation 10-1-1 requires insurers to conduct bias testing across protected classes when using ECDIS in underwriting, pricing, or claims decisions. Tracking the DOI's evolving guidance and the compliance roadmaps published by firms like Swept AI shows that the testing framework has crystallized into four distinct methodologies that insurers must apply and document.

1. Disparate Impact Analysis (Four-Fifths Rule)

The four-fifths rule, borrowed from employment discrimination law and adapted for insurance, evaluates whether the selection rate for any protected class falls below 80% of the rate for the most favored class. In insurance applications, "selection rate" translates to approval rates, pricing tier assignments, claims acceptance decisions, and any other model output that affects policyholders. If Black applicants receive preferred pricing tiers at 60% of the rate White applicants do, the model produces disparate impact under this standard. The four-fifths threshold is a screening tool rather than a dispositive legal test, but the regulation requires insurers to document where their models fall relative to this benchmark and to provide actuarial justification for any disparities that exceed it.
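The screening arithmetic itself is simple. As an illustrative sketch (the function and field names here are assumptions, not drawn from the regulation), the check below flags any group whose selection rate falls below 80% of the most-favored group's rate:

```python
from collections import Counter

def four_fifths_check(decisions, threshold=0.8):
    """Screen selection rates against the four-fifths rule.

    `decisions` is an iterable of (group, selected) pairs, where
    `selected` is True when the applicant received the favorable
    outcome (e.g., a preferred pricing tier).
    """
    totals, hits = Counter(), Counter()
    for group, selected in decisions:
        totals[group] += 1
        if selected:
            hits[group] += 1
    rates = {g: hits[g] / totals[g] for g in totals}
    best = max(rates.values())
    return {g: {"rate": r, "impact_ratio": r / best, "flagged": r / best < threshold}
            for g, r in rates.items()}

# Toy data: the favored group is selected at 50%, the other at 30%,
# giving an impact ratio of 0.6 -- below the 0.8 screen.
sample = ([("group_a", True)] * 50 + [("group_a", False)] * 50
          + [("group_b", True)] * 30 + [("group_b", False)] * 70)
result = four_fifths_check(sample)
```

As the surrounding text notes, a breach of the 0.8 screen is not dispositive; the flag simply marks where actuarial justification must be documented.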

2. Proxy Variable Auditing

The second methodology targets features that serve as statistical proxies for protected attributes. ZIP code correlates with race; occupation correlates with gender; credit score correlates with income and national origin. Proxy variable auditing requires insurers to identify which model inputs carry proxy load, measure the degree to which removing or constraining those variables changes outcomes across protected classes, and document whether any retained proxy variables have actuarial justification. This is where traditional actuarial rating variables face the most scrutiny. Insurers cannot simply demonstrate that a variable is predictive of loss; they must also demonstrate that its predictive value does not flow primarily through its correlation with a protected class.
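A first-pass proxy screen can be sketched as a correlation check between a rating variable and a protected-class indicator. The 0.3 cutoff and field names below are illustrative assumptions, not regulatory thresholds; a full audit would also measure how outcomes shift when the variable is removed or constrained:

```python
def pearson(xs, ys):
    """Pearson correlation for two equal-length numeric sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

def proxy_load_screen(records, feature, protected_flag, cutoff=0.3):
    """Flag a feature whose correlation with a protected-class
    indicator exceeds the screening cutoff."""
    rho = pearson([r[feature] for r in records],
                  [r[protected_flag] for r in records])
    return {"feature": feature, "correlation": rho, "flagged": abs(rho) >= cutoff}

# Toy records where the territory factor co-occurs with the protected
# indicator 75% of the time, yielding a correlation of 0.5.
records = ([{"territory_factor": 1, "protected": 1}] * 30
           + [{"territory_factor": 0, "protected": 0}] * 30
           + [{"territory_factor": 1, "protected": 0}] * 10
           + [{"territory_factor": 0, "protected": 1}] * 10)
screen = proxy_load_screen(records, "territory_factor", "protected")
```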

3. Intersectional Testing

Single-axis testing, evaluating outcomes for women as a group or Black applicants as a group, misses discriminatory patterns that emerge at intersections. A model may treat women fairly on average and Black applicants fairly on average while producing adverse outcomes for Black women specifically. Intersectional testing requires carriers to disaggregate results across combinations of protected classes and evaluate whether any intersection shows disparate impact that single-axis analysis would miss. The combinatorial complexity increases significantly with nine protected classes, but the regulation does not specify which intersections must be tested, leaving insurers to document their methodology for selecting intersectional categories.
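The single-axis blind spot can be demonstrated concretely. In this sketch (the data and function names are constructed for illustration), race and sex each pass a single-axis four-fifths screen, yet the Black-women intersection sits at half the best intersection's rate:

```python
from collections import Counter
from itertools import combinations

def intersectional_impact(records, axes, outcome="selected"):
    """Impact ratios for every pairwise intersection of the given
    protected-class axes, relative to the most-favored intersection."""
    results = {}
    for pair in combinations(axes, 2):
        totals, hits = Counter(), Counter()
        for r in records:
            key = tuple(r[a] for a in pair)
            totals[key] += 1
            hits[key] += 1 if r[outcome] else 0
        rates = {k: hits[k] / totals[k] for k in totals}
        best = max(rates.values())
        results[pair] = {k: v / best for k, v in rates.items()}
    return results

def cohort(race, sex, n_selected, n_total):
    """Build n_total records, n_selected with the favorable outcome."""
    base = {"race": race, "sex": sex}
    return ([dict(base, selected=True)] * n_selected
            + [dict(base, selected=False)] * (n_total - n_selected))

# Single-axis rates are near parity (race ratio 0.9, sex ratio ~0.81),
# but the (black, female) intersection sits at half the best rate.
records = (cohort("white", "male", 50, 100) + cohort("white", "female", 50, 100)
           + cohort("black", "male", 30, 50) + cohort("black", "female", 15, 50))
ratios = intersectional_impact(records, ["race", "sex"])[("race", "sex")]
```

Extending the same loop over all pairs (or triples) of the nine protected classes is mechanically easy; documenting which intersections were chosen and why is the part the regulation leaves to the insurer.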

4. Counterfactual Analysis

The most direct form of bias evidence comes from counterfactual evaluation. For each decision the model makes, the protected attribute is changed while holding all other inputs constant, and the analysis measures whether the outcome changes. If changing an applicant's imputed race from Black to White, with everything else identical, shifts the pricing tier or approval decision, the model exhibits direct sensitivity to protected class membership. Counterfactual analysis produces the most granular evidence of algorithmic discrimination but requires careful implementation: the "counterfactual" version of each data point must be constructed in a way that isolates the protected attribute from its statistical correlates.
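A naive flip test can be sketched as follows; the model and field names are invented for demonstration, and the deliberately biased toy rule exists only to produce a nonzero flip rate:

```python
def counterfactual_flip_rate(model, records, attr, alt_value):
    """Fraction of decisions that change when only the protected
    attribute is altered. Caveat from the text: flipping the attribute
    alone leaves its statistical correlates (ZIP-derived features,
    name-derived features) untouched, so a production audit must also
    adjust or neutralize those correlates."""
    flips = 0
    for r in records:
        counterfactual = dict(r, **{attr: alt_value})
        if model(counterfactual) != model(r):
            flips += 1
    return flips / len(records)

# A deliberately biased toy scoring rule, for demonstration only.
def toy_model(applicant):
    good_credit = applicant["credit_score"] > 700
    return "preferred" if good_credit and applicant["imputed_race"] == "white" else "standard"

applicants = [
    {"credit_score": 750, "imputed_race": "black"},   # flips to preferred
    {"credit_score": 650, "imputed_race": "black"},   # standard either way
    {"credit_score": 750, "imputed_race": "white"},   # already preferred
]
rate = counterfactual_flip_rate(toy_model, applicants, "imputed_race", "white")
```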

| Methodology | What It Tests | Key Threshold | Primary Challenge |
|---|---|---|---|
| Four-Fifths Rule | Selection rate ratios across protected classes | 80% of most-favored group rate | Defining "selection" for insurance pricing tiers |
| Proxy Variable Audit | Feature correlation with protected attributes | No fixed threshold; actuarial justification required | Disentangling predictive value from proxy load |
| Intersectional Testing | Compound group outcomes (e.g., Black women, Hispanic applicants with disabilities) | Same as four-fifths rule applied to intersections | Combinatorial explosion across nine protected classes |
| Counterfactual Analysis | Outcome sensitivity to protected attribute changes | Statistical significance of outcome change | Constructing valid counterfactual data points |

Protected Classes and Insurance Lines Covered

SB 21-169 covers nine protected classes: race, color, national or ethnic origin, religion, sex, sexual orientation, disability, gender identity, and gender expression. This is narrower than SB 24-205's thirteen-plus categories (which add age, genetic information, limited English proficiency, reproductive health, and veteran status), but for insurance-specific compliance, the SB 21-169 list is what matters for the July 1 deadline.

The regulation applies to three lines of insurance with different implementation timelines. Life insurers have been subject to Regulation 10-1-1 since November 14, 2023, and have already submitted at least one annual compliance report cycle. Private passenger auto insurers and health benefit plan insurers were brought under the regulation when the amended version took effect on October 15, 2025. These newer lines submitted an interim compliance progress report by December 1, 2025, and face their first full annual compliance report on July 1, 2026.

Each line carries specific data considerations. Auto insurers must account for telematics data, including vehicle location, speed, and engine performance metrics, as ECDIS subject to bias testing. The DOI has flagged telematics location data in particular, since driving patterns correlate with residential geography, which in turn correlates with race and income. Health benefit plan insurers must test ECDIS used in care authorization and claims decisions, though medical records are explicitly excluded from the ECDIS definition. Insurers are still responsible for ensuring that providers making decisions on the insurer's behalf are following ECDIS governance requirements when those decisions involve data covered by the regulation.

What Goes Into the July 1, 2026 Annual Compliance Report

The annual compliance report is not a simple attestation. Based on the Faegre Drinker analysis and the Swept AI compliance roadmap, the filing requires seven categories of documentation:

  1. Responsible personnel: Title and qualifications of each individual responsible for compliance, with specific requirements they oversee identified.
  2. Algorithmic impact assessments: Documented assessments for every AI system using ECDIS, including purpose, data inputs, decision scope, risk classification, deployment date, and responsible owner.
  3. Bias testing results: Output from all four testing methodologies (or documented justification for any methodology not applied) across all nine protected classes, with methodology documentation sufficient for the DOI to evaluate the testing approach.
  4. Consumer notification records: Documentation of how and when consumers were notified that ECDIS-informed decisions affected their coverage, pricing, or claims outcomes, plus records of any consumer data correction requests and how they were resolved.
  5. AI system inventory: Complete catalog of every model or algorithm using ECDIS, including version control documentation showing when models were updated and what changed.
  6. Remediation documentation: For any testing that revealed disparate impact, documentation of what remediation was taken, when, and the post-remediation testing results confirming the disparate impact was addressed.
  7. Governance structure: Board-level or senior management oversight documentation, cross-functional governance group membership, documented policies and procedures, complaint protocols, risk assessment rubrics, and annual governance review records.
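A minimal way to keep the assembly honest is to track the package against these seven categories programmatically. The section names and structure below are illustrative assumptions, not the DOI's filing schema:

```python
REQUIRED_SECTIONS = {
    "responsible_personnel", "algorithmic_impact_assessments",
    "bias_testing_results", "consumer_notification_records",
    "ai_system_inventory", "remediation_documentation", "governance_structure",
}

def report_gaps(package):
    """Return the required report sections that are missing or empty.
    `package` maps a section name to its list of supporting documents."""
    return sorted(s for s in REQUIRED_SECTIONS if not package.get(s))

# A draft package with only three sections populated (hypothetical files).
draft = {
    "responsible_personnel": ["cco_bio.pdf"],
    "bias_testing_results": ["q2_fairness_run.xlsx"],
    "ai_system_inventory": ["model_catalog_v3.csv"],
}
missing = report_gaps(draft)
```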

The documentation burden is where most carriers encounter friction. Patterns from early life insurer compliance cycles show that the bias testing itself runs in hours or days with modern ML fairness toolkits. The documentation assembly, which connects test results to governance sign-offs, remediation timelines, consumer notification logs, and system inventory records, routinely takes eight to twelve weeks when starting from an unstructured baseline. Auto and health insurers submitting their first full report have had roughly eight months since the amended regulation took effect, but those who treated the December 2025 interim report as a formality rather than a documentation dry run may find the remaining 50 days tight.

BIFSG: Colorado's Proposed Quantitative Testing Approach

One of the ways Colorado's framework goes beyond the NAIC Model Bulletin is in its proposed use of Bayesian Improved First Name Surname Geocoding (BIFSG) as the methodology for estimating race and ethnicity in bias testing. The RAND Corporation developed BIFSG as a statistical technique that uses applicants' first names, surnames, and geocoded residential addresses to impute probabilistic race and ethnicity categories. The DOI proposed BIFSG testing in draft regulations released in September 2023 as a way to evaluate whether Hispanic, Black, and Asian/Pacific Islander applicants are declined at statistically different rates or charged statistically different premiums relative to White applicants.

BIFSG is not new to regulated industries; the Consumer Financial Protection Bureau has used a similar methodology (BISG, without first names) in mortgage lending fair lending examinations since 2014. But its application to insurance underwriting introduces complications that actuaries should understand. BIFSG accuracy degrades in areas with low ethnic diversity, since the geocoding component relies on Census block group composition. It also performs less reliably for multiracial individuals, applicants with names common across multiple ethnic groups, and populations in rapidly changing neighborhoods where Census data lags actual demographic shifts. The DOI has not finalized whether BIFSG will be the required methodology or one of several acceptable approaches, but insurers preparing for July 1 should be testing with it as a baseline, since any alternative methodology will need to demonstrate at least comparable statistical power.
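The core of BIFSG is a naive-Bayes combination of three signals under a conditional-independence assumption. The sketch below illustrates the posterior computation with invented probabilities (not Census or RAND values); a real implementation draws the surname and first-name priors from Census lists and the geography term from block-group composition:

```python
def bifsg_posterior(p_race_given_surname, p_first_given_race, p_geo_given_race):
    """Combine surname, first-name, and Census-geography signals into a
    posterior race/ethnicity distribution. Assumes the three signals are
    conditionally independent given race -- the core BIFSG assumption,
    and the reason accuracy degrades where that assumption is weakest
    (low-diversity areas, names shared across groups)."""
    unnormalized = {r: p_race_given_surname[r]
                       * p_first_given_race[r]
                       * p_geo_given_race[r]
                    for r in p_race_given_surname}
    z = sum(unnormalized.values())
    return {r: v / z for r, v in unnormalized.items()}

# Toy inputs, invented for illustration only.
posterior = bifsg_posterior(
    p_race_given_surname={"hispanic": 0.70, "white": 0.20, "black": 0.10},
    p_first_given_race={"hispanic": 0.05, "white": 0.02, "black": 0.02},
    p_geo_given_race={"hispanic": 0.40, "white": 0.30, "black": 0.30},
)
```

The imputed probabilities, not hard labels, then feed the four testing methodologies, which is why documentation of the imputation step itself belongs in the compliance report.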

Colorado vs. the NAIC AI Model Bulletin

The NAIC adopted its Model Bulletin on the Use of AI Systems by Insurers in December 2023. As of March 2025, 24 states have adopted it with minimal material changes, making it the broadest AI governance framework in insurance regulation. Colorado's SB 21-169 framework predates the NAIC bulletin and in several respects goes further. Understanding where they overlap and where they diverge matters for carriers operating across multiple jurisdictions.

Both frameworks require AI governance programs, model inventories, vendor oversight protocols, and incident management documentation. The NAIC's 12-state AI Evaluation Tool pilot, launched in 2026, adds a structured examination template that regulators use to assess insurer AI programs. A carrier with a serious SB 21-169 compliance program already covers the majority of the NAIC bulletin's requirements, and the overlap is substantial enough that compliance with one significantly reduces the incremental work for the other.

The divergences, however, are material. Colorado adds three elements the NAIC bulletin does not require: quantitative bias testing using a specified methodology (BIFSG), formal consumer notification and data correction rights, and documented algorithmic impact assessments with remediation evidence. The NAIC bulletin is guidance-based, while Colorado's framework carries regulatory enforcement authority. The NAIC's transition toward enforceable model law may eventually close this gap, but for now, carriers meeting the NAIC standard alone fall short of Colorado's documentation and testing requirements.

| Requirement | Colorado SB 21-169 / Reg 10-1-1 | NAIC AI Model Bulletin |
|---|---|---|
| AI governance program | Required | Required |
| Model inventory | Required with version control | Required |
| Vendor oversight | Required | Required |
| Quantitative bias testing | Required (four-part methodology) | Not specified |
| BIFSG demographic estimation | Proposed as default methodology | Not addressed |
| Consumer notification | Required with data correction rights | Not specified |
| Impact assessments | Required with remediation documentation | Not specified |
| Annual compliance report | Required (filed with DOI) | Examination-based (AI Evaluation Tool) |
| Enforcement mechanism | Regulatory (Commissioner of Insurance) | Guidance (state adoption varies) |

SB 26-189: The Replacement Bill and Its Limits

On May 7 and May 9, 2026, the Colorado Senate and House passed SB 26-189, sponsored by Senator Robert Rodriguez (the same legislator who championed SB 24-205 in 2024). The bill awaits Governor Polis's expected signature. SB 26-189 replaces SB 24-205's terminology of "high-risk artificial intelligence systems" with "automated decision-making technology" (ADMT) and eliminates mandatory bias audits and risk impact assessments in favor of a streamlined transparency-and-notice framework. The new bill would take effect January 1, 2027.

Under SB 26-189, deployers would need to provide clear notice at the point of interaction when covered ADMT is used, deliver a plain-language explanation within 30 days if the technology produces an adverse outcome, honor consumer rights to request personal data used in decisions and correct inaccuracies, and retain records for three years. The AG retains sole enforcement authority through the Colorado Consumer Protection Act, with a 60-day notice and cure period for first violations. The bill explicitly carves out routine scheduling, customer service triage, advertising, marketing, and content moderation.

For insurance carriers, the essential point is that SB 26-189 replaces SB 24-205, not SB 21-169. The insurance-specific ECDIS governance requirements, the four-part bias testing methodology, and the July 1 annual compliance report obligation all derive from SB 21-169 and Regulation 10-1-1, which SB 26-189 does not amend. Even after SB 26-189 takes effect in January 2027, insurers will still face two layers of Colorado AI regulation: the transparency requirements of SB 26-189 for ADMT generally, and the more rigorous bias audit and documentation requirements of SB 21-169 for ECDIS-driven decisions specifically.

The 50-Day Compliance Roadmap for Actuarial Teams

For auto and health benefit plan insurers filing their first full annual compliance report on July 1, the remaining 50 days require focused execution. Based on patterns from the life insurer compliance cycle, here is a practical timeline:

Weeks 1-2 (May 12 to May 25): Complete the AI system inventory. Every model or algorithm that touches ECDIS needs to be cataloged with its purpose, data inputs, decision scope, deployment date, version history, and designated owner. If the December 2025 interim report included a preliminary inventory, update it for any models deployed, modified, or retired since then.
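One lightweight way to structure the catalog is a typed record per model, with version history captured as it happens rather than reconstructed at filing time. The field names below are illustrative; Regulation 10-1-1 specifies the required content, not a schema:

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class EcdisModelRecord:
    """One inventory entry per model or algorithm that touches ECDIS."""
    name: str
    purpose: str
    ecdis_inputs: list
    decision_scope: str          # underwriting, pricing, or claims
    deployment_date: date
    owner: str
    version_history: list = field(default_factory=list)

    def record_change(self, version, changed_on, summary):
        """Append a version-control entry so the report can show when
        the model was updated and what changed."""
        self.version_history.append((version, changed_on, summary))

# Hypothetical entry for a claims triage model.
triage = EcdisModelRecord(
    name="claims_triage_gbm",
    purpose="Prioritize auto claims for adjuster review",
    ecdis_inputs=["telematics_speed", "telematics_location"],
    decision_scope="claims",
    deployment_date=date(2025, 11, 3),
    owner="Claims Analytics",
)
triage.record_change("1.1", date(2026, 3, 14), "Removed ZIP-level territory feature")
```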

Weeks 3-5 (May 26 to June 15): Execute bias testing. Run the four-part methodology across all nine protected classes for each ECDIS-using system. Where BIFSG is used for demographic estimation, document the methodology, its limitations, and how results were interpreted. Flag any model output where the four-fifths rule threshold is breached. For each flagged result, document whether actuarial justification exists or whether remediation is needed.

Weeks 6-7 (June 16 to June 29): Assemble documentation. Connect bias testing results to governance sign-offs, compile consumer notification logs, document any remediation actions taken and their post-remediation test results, and prepare the compliance report narrative. Ensure the board-level or senior management oversight documentation is current and that the cross-functional governance group membership list reflects actual participants.

Week 8 (June 30 to July 1): Final review and filing. Have legal and compliance teams review the assembled package for completeness against the Regulation 10-1-1 checklist. File with the DOI by the July 1 deadline.

Why This Matters for Actuaries

Colorado's insurance bias audit framework represents the most prescriptive set of fairness testing requirements any U.S. state has imposed on actuarial models. Every pricing GLM, every underwriting score, every claims triage algorithm that consumes external consumer data now sits inside a compliance regime that demands not just predictive accuracy but documented fairness across protected classes.

For pricing actuaries, the proxy variable audit component has the most direct impact. Traditional rating variables like territory, credit score, and occupation have long been defended on the basis of actuarial predictive power. Regulation 10-1-1 does not prohibit their use, but it does require carriers to document the extent to which each variable's predictive value flows through correlation with protected attributes and to provide actuarial justification for retaining variables with significant proxy load. This shifts the documentation burden from "is this variable predictive?" to "is this variable predictive for reasons that are separable from protected class membership?"

For reserving and model validation actuaries, the intersection of ASOP No. 56 with state-level bias audit requirements creates a dual compliance surface. Model risk management under ASOP No. 56 focuses on fitness for purpose, data quality, and appropriate limitations documentation. Colorado's framework adds a fairness dimension that ASOP No. 56 does not address. Actuaries signing off on model validation reports for Colorado-deployed systems need to understand that the ASOP-compliant validation and the Regulation 10-1-1 bias audit are complementary, not substitutes.

Colorado is the first mover, but it will not be the last. The NAIC's progression from bulletin to model law, the EU AI Act's enforcement beginning August 2026, and the 24 states that have already adopted the NAIC bulletin all point in the same direction: bias testing documentation is becoming a standard compliance obligation for actuarial models that consume external data. Building the documentation infrastructure now, even in states without Colorado-level requirements, is a hedge against regulatory convergence.
