Among the top-20 carriers' responses to the NAIC's AI governance framework, Hartford's voluntary Algorithmic Impact Assessment (AIA) stands out as the first concrete commitment to bias transparency in production models. The February 2026 disclosure arrived nine months before the NAIC AI Systems Evaluation Tool pilot is scheduled to produce its final recommendations and four months before Colorado's SB 205 enforcement deadline. It sets an operational benchmark that will shape how regulators evaluate every other carrier's AI governance maturity.

Hartford's Q1 2026 earnings call on April 23 confirmed the organizational commitment behind the AIA: CEO Christopher Swift described a "multiyear journey" with an "AI-first mindset," under which the company has already trained and licensed more than 6,000 employees on Microsoft Copilot and Google Notebook for daily workflows. The company has migrated to the cloud, organized its data infrastructure, and now processes over 75% of small business quotes digitally with no human touch. The AIA is the governance layer on top of this operational transformation, and it addresses the question that regulators and litigants increasingly ask: how do you know your models are not discriminating?

What the Hartford AIA Contains

Hartford published its first Algorithmic Impact Assessment in February 2026. The NAIC referenced the document at the Spring 2026 National Meeting in San Diego (March 22-25) as a benchmark for responsible AI disclosure by a commercial carrier. While the full document is not publicly filed as a regulatory exhibit, Hartford's investor communications and regulatory presentations describe its core components.

Bias audit scope. The AIA covers three demographic dimensions that have historically driven fair lending and fair insurance scrutiny: ZIP code (a well-documented proxy for race and income), policyholder age (relevant for both age discrimination concerns and actuarial accuracy), and property type (where rural versus urban classification can correlate with protected class membership). For each dimension, the assessment documents the statistical methodology used to test for disparate impact, the threshold at which a model triggers remediation, and the escalation path when a threshold is breached.
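A minimal sketch of the kind of threshold check this paragraph describes, assuming a disparity-ratio test with a four-fifths (0.8) cutoff. The 0.8 threshold is a common convention borrowed from employment law, not a figure Hartford has published, and the function names are illustrative:

```python
# Hypothetical disparate-impact check. "Favorable" outcomes are coded 1,
# adverse outcomes 0; the 0.8 threshold is an assumption for illustration.
def selection_rate(outcomes: list[int]) -> float:
    """Share of favorable outcomes for a group."""
    return sum(outcomes) / len(outcomes)

def disparity_ratio(protected: list[int], reference: list[int]) -> float:
    """Ratio of the protected group's favorable-outcome rate to the reference group's."""
    return selection_rate(protected) / selection_rate(reference)

def needs_remediation(protected: list[int], reference: list[int],
                      threshold: float = 0.8) -> bool:
    """True when the ratio breaches the documented threshold, triggering the escalation path."""
    return disparity_ratio(protected, reference) < threshold
```

For example, a protected group with a 50% favorable rate against a reference group at 75% yields a ratio of about 0.67, below the 0.8 cutoff, and would be flagged for the documented escalation path.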

Model validation procedures. Hartford's assessment describes a three-tier validation framework. First-line validation sits within the business unit that owns the model, typically the pricing or underwriting team. Second-line validation is conducted by an independent model risk management function reporting to the Chief Risk Officer. Third-line validation involves internal audit, which tests whether the first two lines are following documented procedures. This three-lines-of-defense structure aligns with the NAIC Model Bulletin's recommendation that insurers maintain governance "commensurate with an assessment of the risk" and mirrors the model risk management framework that banking regulators have required since OCC Bulletin 2011-12 and the Federal Reserve's SR 11-7.

Human escalation protocols. The AIA specifies that any model output flagged during bias testing must be reviewed by a credentialed actuary or a senior underwriter before it can influence a policyholder-facing decision. This is not a blanket human-in-the-loop requirement for every transaction; rather, it creates a structured exception path for cases where automated systems produce results that fall outside documented tolerance bands. The distinction matters because the NAIC's Spring 2026 agentic AI discussion highlighted the impracticality of requiring human review for every AI output while acknowledging the necessity of human oversight for high-risk decisions.
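A minimal sketch of such a structured exception path, assuming a documented tolerance band; the band limits and routing labels are illustrative, not Hartford's published values:

```python
# Illustrative exception path: only outputs outside a documented tolerance
# band are routed to a human reviewer (a credentialed actuary or senior
# underwriter in Hartford's AIA). Band limits here are assumed for the sketch.
from dataclasses import dataclass

@dataclass
class ToleranceBand:
    lower: float
    upper: float

    def contains(self, value: float) -> bool:
        return self.lower <= value <= self.upper

def route_decision(model_score: float, band: ToleranceBand) -> str:
    """Return the handling path for a single model output."""
    if band.contains(model_score):
        return "automated"          # within tolerance: no human touch required
    return "escalate_to_reviewer"   # outside tolerance: structured human review
```

The design point is that human review is conditional on the output, not on the transaction: most scores flow straight through, and only out-of-band results consume reviewer capacity.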

Vendor audit coverage. Hartford's AIA extends governance requirements to third-party AI vendors. The company's deployment of mea Platform's generative AI solution for underwriting document processing, announced in mid-2025, is subject to the same bias testing and validation procedures as internally developed models. This vendor coverage directly addresses a gap that the NAIC's Third-Party Data and Models Working Group identified at the Spring 2026 meeting, where discussions centered on whether a proposed vendor registry should be mandatory or voluntary and whether guidance should eventually become model law.

How the AIA Maps to the NAIC AI Systems Evaluation Tool

The NAIC's Big Data and Artificial Intelligence Working Group launched its AI Systems Evaluation Tool pilot on March 2, 2026, with 12 participating states: California, Colorado, Connecticut, Florida, Iowa, Louisiana, Maryland, Pennsylvania, Rhode Island, Vermont, Virginia, and Wisconsin. The pilot runs through September 2026, with tool refinement expected in September through October and formal adoption targeted at the NAIC fall meeting in November 2026.

The Evaluation Tool is built around four exhibits that collectively define what regulators expect to see in a carrier's AI governance documentation:

Exhibit A (quantify AI usage across insurance operations): Hartford's enterprise AI inventory covers claims, underwriting, operations, and contact center functions.
Exhibit B (governance risk assessment framework): the three-lines-of-defense validation structure, with CRO-level risk management reporting.
Exhibit C (details on high-risk AI systems): the AIA's bias audit scope (ZIP code, age, property type) directly addresses high-risk use cases in pricing and underwriting.
Exhibit D (AI data specifics, including reasonable accommodations): data source documentation and vendor audit coverage for third-party inputs.

The structural overlap is significant. A carrier that has already built AIA-level documentation can respond to an Evaluation Tool data request with minimal incremental effort. A carrier that has not will face a compressed timeline to produce documentation that demonstrates governance maturity.

At the Spring 2026 meeting, NAIC staff presented a four-tier risk taxonomy for AI systems: unacceptable risk (methods like subliminal manipulation), high risk (systems with potential for significant consumer harm), medium risk (chatbots requiring user transparency), and low risk (systems deployable without restrictions). The Evaluation Tool's focus on Exhibit C, which covers high-risk systems, means that pricing models, underwriting algorithms, and claims triage tools will receive the most regulatory scrutiny. Hartford's AIA anticipated this by concentrating its bias audit scope on the same functions.
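A small sketch of how a carrier might apply the four-tier taxonomy to scope its Exhibit C reporting; the mapping of specific use cases to tiers is an illustrative assumption, not NAIC guidance:

```python
# Hypothetical mapping of AI use cases to the NAIC staff's four-tier taxonomy.
# Tier assignments below are assumptions for illustration only.
RISK_TIERS = {
    "subliminal_manipulation": "unacceptable",
    "pricing_model": "high",
    "underwriting_algorithm": "high",
    "claims_triage": "high",
    "customer_chatbot": "medium",       # requires user transparency
    "internal_document_search": "low",  # deployable without restrictions
}

def exhibit_c_scope(systems: list[str]) -> list[str]:
    """Select the systems that would fall under Exhibit C's high-risk reporting."""
    return [s for s in systems if RISK_TIERS.get(s) == "high"]
```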

The State Farm Litigation as a Counterfactual

While Hartford moved voluntarily toward transparency, State Farm has been compelled into a different kind of disclosure through litigation. In October 2025, a class action filed in the Northern District of Illinois alleged that State Farm's claims-processing algorithms disproportionately flagged Black policyholders for heightened scrutiny, causing delays in urgent repairs and benefit payments. A second lawsuit in Alabama alleged that AI-driven practices discriminated against elderly, disabled, and Black homeowners through systematic claim denial and payment delay patterns.

In early 2026, a federal judge denied State Farm's motion to dismiss the Illinois class action, allowing the case to proceed to discovery. The ruling is significant because discovery will likely force State Farm to produce internal documentation of its algorithmic governance practices, or the lack thereof. Attorney Ben Crump, representing the plaintiffs, has publicly called for court-ordered algorithmic auditing as a remedy.

The contrast between Hartford and State Farm illustrates the strategic calculus for every other carrier. Hartford chose to invest in bias documentation before regulators or plaintiffs required it, creating a defensible record. State Farm may now produce similar documentation under compulsion during discovery, where the framing is adversarial rather than cooperative. The litigation exposure is not limited to State Farm; any carrier deploying AI in claims, underwriting, or pricing without documented bias testing faces analogous risk.

Patterns we have seen in prior regulatory cycles suggest that voluntary early adopters of transparency standards receive more favorable treatment when those standards become mandatory. The carriers that filed risk-based capital reports before RBC became binding in the mid-1990s, and the insurers that adopted enterprise risk management frameworks before the NAIC's Own Risk and Solvency Assessment (ORSA) requirement took effect in 2015, both experienced smoother compliance transitions. Hartford appears to be making the same calculation with its AIA.

The Regulatory Arbitrage Risk

Carriers operating without published AI governance documentation face a widening regulatory gap across multiple jurisdictions. The compliance landscape is layered and increasingly interconnected:

NAIC Model Bulletin (adopted December 2023). As of late 2025, 23 states and Washington, D.C., had adopted the bulletin, which requires insurers to maintain a written AIS (artificial intelligence systems) Program governing the use of AI systems in consumer-facing decisions. The bulletin does not mandate public disclosure of bias audit results, but it establishes the expectation that insurers can produce such documentation upon regulatory request. NAIC surveys found that 88% of responding auto insurers and 92% of responding health insurers reported using, planning to use, or exploring AI and machine learning models in their operations.

New York DFS Circular Letter No. 7 (July 2024). New York went further than the Model Bulletin by requiring insurers to demonstrate that external consumer data and information sources (ECDIS) do not serve as proxies for protected classes resulting in unfair or unlawful discrimination. Insurers must evaluate the extent to which ECDIS are correlated with protected class status, maintain explanatory documentation, allow DFS to review vendor tools, and conduct vendor audits. The Circular Letter applies quantitative proxy assessment requirements where data on protected classes are available or can be reasonably imputed.
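When protected class data are not collected directly, one established approach is to impute class membership probabilities (for example, a BISG-style race proxy from surname and geography) and estimate group outcome rates as probability-weighted averages. The sketch below illustrates that weighted-rate idea with fabricated inputs; it is not a procedure DFS prescribes:

```python
# Hedged sketch of proxy testing with imputed class probabilities. Each record
# carries the probability that the policyholder belongs to the protected class
# (e.g., from a BISG-style imputation); group rates are probability-weighted.
def weighted_rate(outcomes: list[int], probs: list[float]) -> float:
    """Probability-weighted favorable-outcome rate for the imputed group."""
    total = sum(probs)
    return sum(o * p for o, p in zip(outcomes, probs)) / total

def imputed_disparity(outcomes: list[int], p_protected: list[float]) -> float:
    """Ratio of imputed protected-group rate to imputed non-protected rate."""
    protected_rate = weighted_rate(outcomes, p_protected)
    reference_rate = weighted_rate(outcomes, [1 - p for p in p_protected])
    return protected_rate / reference_rate
```

This is why the "reasonably imputed" language matters: the weights make the test computable even when no record carries a declared protected class value.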

Colorado SB 205 (enforcement deadline: June 30, 2026). Colorado's Consumer Protections for Artificial Intelligence Act creates a duty of reasonable care to prevent algorithmic discrimination. Insurers subject to state insurance laws governing AI use have a compliance carve-out, but only if they meet the commissioner's standards for external data governance. Violations carry penalties of up to $20,000 per violation under the Consumer Protection Act, with enforcement authority held exclusively by the Attorney General.

Connecticut SB 2 (stalled but persistent). Although Governor Lamont's veto threat stalled the bill in the 2025 session, the legislation would have required deployers of high-risk AI systems to complete algorithmic impact assessments before deployment and to publish summary statements on their websites. The bill's continued reintroduction signals the direction of regulatory travel in Hartford's home state.

The arbitrage risk is straightforward: a carrier that has not built bias documentation today will face simultaneous compliance demands across multiple states, each with distinct requirements and timelines. Hartford's AIA serves as a template for building that documentation once and adapting it for jurisdiction-specific requirements.

Building a Regulator-Ready AI Governance Package

For actuaries and model risk managers responsible for AI governance, Hartford's AIA and the NAIC Evaluation Tool together define the minimum viable documentation set. Based on the common requirements across these frameworks, a regulator-ready package should contain five components:

1. Enterprise AI inventory. A complete catalog of every AI and machine learning model in production, including model purpose (pricing, underwriting, claims, marketing), input data sources, output type (score, classification, recommendation), and deployment date. This directly maps to Exhibit A of the NAIC Evaluation Tool. The inventory should distinguish between internally developed models and vendor-supplied systems, because the NAIC's Third-Party Data and Models Working Group is moving toward separate governance expectations for external versus internal AI.

2. Bias testing documentation. For each model classified as high-risk (typically any model that influences policyholder-facing decisions on price, coverage, or claims), documentation of the statistical methods used to test for disparate impact across protected classes. This includes the specific variables tested (at minimum, race proxy via ZIP code or census tract, age, sex, and disability status where applicable), the disparity thresholds applied, the frequency of testing (at minimum annually, and after any material model update), and the results of each test cycle. New York's DFS Circular Letter requires that proxy testing use data that are available or "reasonably imputed using statistical methodologies," which means actuaries cannot avoid the requirement by claiming they do not collect protected class data directly.

3. Model validation reports. Separate from bias testing, model validation documents the accuracy, stability, and predictive performance of each AI system. Validation reports should cover backtesting results, out-of-sample performance, sensitivity analysis, drift detection methodology, and the governance process for model retirement or replacement when performance degrades. The three-lines-of-defense structure used by Hartford aligns with both banking regulatory expectations (OCC Bulletin 2011-12, Federal Reserve SR 11-7) and the NAIC's emerging AI governance framework.

4. Vendor audit trail. For any AI system sourced from a third-party vendor, documentation of due diligence conducted before deployment and ongoing monitoring after deployment. This should include the vendor's own model documentation (commonly called "model cards"), contractual provisions for regulatory access to the vendor's methodology, evidence that the vendor's models have been independently tested for bias, and provisions for model replacement if the vendor cannot satisfy governance requirements. The NAIC's proposed vendor registry, if adopted, will formalize these expectations, but carriers should not wait for adoption to begin building the audit trail.

5. Human escalation framework. Documentation of the conditions under which AI outputs are referred to a human decision-maker, the qualifications required of the human reviewer (Hartford's AIA specifies a credentialed actuary or senior underwriter), the time frame for human review, and tracking of escalation frequency and outcomes. The NAIC's Spring 2026 agentic AI discussion, led by PwC, emphasized that "current implementations of AI generally preserve human-in-the-loop controls for higher-risk decisions, particularly in underwriting and claims handling." A documented escalation framework provides evidence that the carrier meets this expectation.
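The enterprise AI inventory in component 1 can be sketched as a simple record type; the field names below are assumptions based on the components the text lists, not the NAIC's actual Exhibit A schema:

```python
# Hypothetical Exhibit A-style inventory record. Field names and the sample
# entry are illustrative assumptions, not an NAIC or Hartford schema.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class ModelInventoryEntry:
    name: str
    purpose: str            # pricing, underwriting, claims, marketing
    output_type: str        # score, classification, recommendation
    deployed: date
    data_sources: list[str] = field(default_factory=list)
    vendor_supplied: bool = False   # separates external from internal AI

inventory = [
    ModelInventoryEntry("sb_quote_triage", "underwriting", "classification",
                        date(2024, 6, 1), ["submission_data"], vendor_supplied=True),
]
# The vendor flag supports the separate governance expectations the
# Third-Party Data and Models Working Group is moving toward.
vendor_models = [m.name for m in inventory if m.vendor_supplied]
```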
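One common drift-detection method that the validation reports in component 3 might document is the population stability index (PSI), which compares a model's current score distribution against its development baseline. The 0.2 alert threshold below is a widely used rule of thumb, not a regulatory requirement:

```python
# PSI drift check over pre-binned score distributions (each list of bin
# shares sums to 1; assumes no empty bins). The 0.2 cutoff is a convention.
import math

def psi(baseline_shares: list[float], current_shares: list[float]) -> float:
    """Population stability index between two binned distributions."""
    return sum((c - b) * math.log(c / b)
               for b, c in zip(baseline_shares, current_shares))

def drift_alert(baseline: list[float], current: list[float],
                threshold: float = 0.2) -> bool:
    """True when distribution shift exceeds the documented tolerance."""
    return psi(baseline, current) > threshold
```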

The Disclosure Timeline Is Compressing

The convergence of multiple regulatory and legal pressures means the window for voluntary disclosure is narrowing. Consider the timeline that carriers face over the next 18 months:

February 2026: Hartford publishes the AIA, setting the voluntary disclosure benchmark.
March 2026: The NAIC AI Evaluation Tool pilot begins in 12 states; participating carriers begin producing governance documentation for regulators.
June 30, 2026: Colorado SB 205 enforcement begins; carriers writing business in Colorado must demonstrate reasonable care against algorithmic discrimination.
September 2026: The Evaluation Tool pilot concludes; pilot results inform the tool's final structure and recommendations for broader adoption.
November 2026: The NAIC fall meeting, where Evaluation Tool adoption is expected; the tool becomes available for all state regulators to use in market conduct examinations.
2027: State Farm discovery proceeds and additional state legislation is expected, bringing litigation-driven disclosure and further expansion of AI governance requirements.

Each milestone narrows the difference between voluntary and compulsory disclosure. After November 2026, when the NAIC Evaluation Tool is expected to be available for general use, any state regulator can request the documentation package described above as part of a market conduct examination. Carriers that have already built the documentation will respond in days. Carriers that have not will face weeks or months of retroactive reconstruction under regulatory scrutiny.

What Hartford's Financial Position Reveals About the Investment

Hartford's Q1 2026 results provide context for the scale of investment required to support an AI governance program at the carrier level. The company reported operating earnings of $612 million (up 5% year over year), total investments of $22.4 billion, and statutory surplus of $18.9 billion. The middle market commercial segment posted a combined ratio of 92.4%, improved 1.8 points from the prior year. These are not the financials of a carrier that built its AI governance program as a defensive measure; they reflect a company investing from a position of strength.

CEO Swift's characterization of the AI strategy as a "multiyear journey" with "allocated investment spend" suggests that Hartford views AI governance not as a compliance cost center but as an operational investment that supports its broader digital transformation. The company's 75% straight-through processing rate for small business quotes, enabled by its cloud migration and data infrastructure investment, generates both underwriting efficiency and the data pipeline necessary for systematic model monitoring. The AIA sits on top of this infrastructure.

The 6,000+ employees trained and licensed on Microsoft Copilot and Google Notebook represent the organizational capacity layer. AI governance requires more than documentation; it requires people who understand what the models do, can identify when outputs look anomalous, and can execute escalation protocols when bias testing surfaces a concern. A carrier that deploys AI broadly without corresponding training creates the conditions for both regulatory violations and litigation exposure.

Why This Matters for Actuaries

Hartford's AIA reshapes the professional landscape for actuaries in three concrete ways.

Bias testing is becoming a core actuarial skill. The AIA's requirement for credentialed actuaries in the human escalation framework signals that carriers expect actuaries, not just data scientists, to own the fairness testing process. Actuaries bring credentialing accountability (ASOP compliance, Code of Professional Conduct), established relationships with state regulators, and domain expertise in the risk characteristics that AI models are attempting to capture. The CAS released its Primer on Artificial Intelligence for Property-Casualty Actuaries in early 2026, and Section 6 on model governance and bias testing directly addresses the skills that an AIA-level program requires.

Model validation is expanding beyond predictive accuracy. Traditional actuarial model validation asks whether a model produces accurate predictions. AI governance adds a second question: does the model produce fair predictions? These are distinct inquiries, and a model can be accurate at the aggregate level while producing systematically biased outcomes for specific demographic subgroups. The NAIC Evaluation Tool's Exhibit C, focusing on high-risk AI systems, will require carriers to demonstrate both accuracy and fairness, meaning actuaries who can design and execute disparity testing will be in high demand.
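A toy numeric example (fabricated data) makes the distinction concrete: the model below is 75% accurate in aggregate while one subgroup absorbs every error:

```python
# Fabricated illustration of aggregate accuracy masking subgroup disparity.
def accuracy(pred: list[int], actual: list[int]) -> float:
    """Share of predictions matching actual outcomes."""
    return sum(p == a for p, a in zip(pred, actual)) / len(pred)

# Group A: 8 records, all predicted correctly. Group B: 8 records, half wrong.
pred_a, act_a = [1] * 8,                  [1] * 8
pred_b, act_b = [1, 1, 1, 1, 0, 0, 0, 0], [1] * 8

overall = accuracy(pred_a + pred_b, act_a + act_b)              # 0.75 in aggregate
by_group = (accuracy(pred_a, act_a), accuracy(pred_b, act_b))   # (1.0, 0.5)
```

An aggregate-only validation would report the 0.75 and stop; a fairness-aware validation surfaces the 1.0 versus 0.5 split, which is exactly what Exhibit C-style disparity testing is designed to catch.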

Voluntary disclosure creates competitive advantage in regulatory relationships. Carriers that proactively publish AI governance documentation build credibility with state regulators that pays dividends across all regulatory interactions, not just AI-specific ones. Rate filing reviews, market conduct examinations, and solvency assessments all benefit from a carrier's reputation for transparency. Hartford's AIA positions the company favorably across these touchpoints in ways that extend well beyond the AI governance context.

The actuarial profession has navigated prior transparency transitions successfully. ASOP No. 56 (Modeling), adopted in 2019, established documentation and disclosure requirements for actuarial models that laid groundwork for the current AI governance expectations. Actuaries who already comply with ASOP 56's requirements for model documentation, assumption disclosure, and reliance documentation have a head start on the bias testing and governance documentation that AI-era regulations demand.
