The regulatory landscape for artificial intelligence in insurance has shifted from theoretical framework to operational reality in 2026. Over half of U.S. states have now adopted the NAIC’s Model Bulletin on AI governance. A federal executive order is actively challenging state authority over AI regulation. Colorado’s landmark AI Act faces both litigation and legislative delays. And the NAIC is piloting its first structured examination tool for assessing insurers’ AI systems.

For actuaries, data scientists, compliance officers, and insurer leadership, the message is clear: AI governance in insurance now requires the same institutional maturity as financial risk management, cybersecurity, or solvency monitoring. Here is where the regulatory framework stands and what it means in practice.

At a glance:
- 24+ states plus D.C. have adopted the NAIC Model AI Bulletin
- 10 insurers are participating in the NAIC AI evaluation tool pilot
- June 30, 2026: Colorado AI Act's delayed effective date (was February 1, 2026)

The NAIC Model AI Bulletin: From Guidance to Baseline Standard

The foundation of U.S. insurance AI regulation is the NAIC’s Model Bulletin on the Use of Artificial Intelligence Systems by Insurers, adopted in December 2023. While technically a bulletin rather than a model law or regulation, it has rapidly become the de facto compliance baseline through widespread state adoption.

The Model Bulletin requires insurers to develop and maintain a written AI governance program - termed an “AIS Program” - covering the responsible use of AI across the entire insurance lifecycle: underwriting, rating, claims processing, fraud detection, marketing, and customer service. The program must address several core areas.

Governance structure: Insurers must establish a cross-functional oversight framework with representatives from actuarial, data science, underwriting, claims, compliance, and legal functions. Each representative must have clearly defined responsibilities, authority, and decision-making power. Senior management or a board committee must be accountable for the program.

Risk management: The AIS Program must include validation, testing, and retesting protocols to assess AI system outputs. This includes evaluating whether AI systems produce inaccurate, arbitrary, capricious, or unfairly discriminatory outcomes - the key regulatory standards that apply regardless of the tools used to make decisions.

Third-party vendor oversight: This is a critical provision. Insurers remain responsible for AI systems developed or provided by third-party vendors. The bulletin requires contractual protections including audit rights and cooperation with regulatory inquiries. Regulators have signaled they will “look through” vendor relationships during examinations - meaning an insurer cannot delegate compliance responsibility by outsourcing AI to a vendor.

Consumer transparency: Insurers must notify consumers when AI systems are in use and provide appropriate information about how AI may affect decisions impacting them. The level of disclosure may vary by the phase of the insurance lifecycle involved.

Documentation: Comprehensive documentation of AI systems - including development processes, data sources, validation results, and risk mitigation measures - must be maintained and available for regulatory review.

State Adoption: A Growing Majority

The pace of state adoption has been notable. As of early 2026, at least 24 states and the District of Columbia have adopted the NAIC Model Bulletin or substantially similar guidance. The NAIC itself stated in December 2025 that over half of all states have now adopted the bulletin or similar measures. Adopting jurisdictions include Alaska, Arkansas, Connecticut, Delaware, the District of Columbia, Hawaii, Illinois, Iowa, Kentucky, Maryland, Massachusetts, Michigan, Nebraska, Nevada, New Hampshire, New Jersey, North Carolina, Oklahoma, Pennsylvania, Rhode Island, Vermont, Virginia, Washington, West Virginia, and Wisconsin.

This adoption rate is remarkable for an NAIC model product. The speed reflects both the urgency regulators feel about AI risks and the bulletin’s design as a flexible framework that fits within existing state regulatory authority - it doesn’t require new legislation, just a bulletin from the state insurance commissioner.

State-Specific Variations

Colorado has gone further than any other state with its comprehensive AI Act (SB 24-205), which requires developers and deployers of “high-risk” AI systems to use reasonable care to protect consumers from algorithmic discrimination. Originally set to take effect February 1, 2026, the effective date was pushed to June 30, 2026, following intense tech industry lobbying during an August 2025 special legislative session. Colorado’s insurance-specific regulation, which predates the AI Act, separately requires governance and risk management frameworks for insurers using external consumer data, algorithms, and predictive models. Insurers are exempt from the broader Colorado AI Act if they’re already complying with the insurance-specific regulation.

New York has issued circular letters focused on the use of external consumer data in underwriting, with proposed additional guidance on underwriting and pricing fairness. New York’s approach emphasizes supervisory oversight rather than prescriptive rules.

California’s AI Transparency Act takes effect January 1, 2026, requiring transparency disclosures for AI systems, though its scope is broader than insurance-specific regulation.

The Federal Preemption Battle: Trump’s Executive Order

The most significant development in AI regulation entering 2026 is the December 11, 2025, executive order titled “Ensuring a National Policy Framework for Artificial Intelligence.” This order represents the federal government’s most aggressive attempt to centralize AI governance and challenge state-level regulation.

The executive order’s key provisions affecting insurance include: creation of a DOJ AI Litigation Task Force (launched January 10, 2026) authorized to challenge state AI laws deemed unconstitutional or unduly burdensome; a directive for the Secretary of Commerce to issue a report within 90 days identifying “onerous” state AI measures; instructions for the FCC to consider preemptive federal AI disclosure standards; and conditional federal funding mechanisms designed to discourage states from adopting AI regulations that conflict with federal policy.

The order explicitly criticizes Colorado’s AI Act, arguing it could compel AI systems to produce inaccurate results to avoid differential treatment of protected groups. This framing - positioning consumer protection as a threat to AI accuracy - has become a central tension in the regulatory debate.

The NAIC’s Response

The NAIC’s response was swift and sharp. In a December 2025 statement, the NAIC expressed “deep concern,” noting that for over 150 years state insurance regulators have overseen a stable, fair, and consumer-focused marketplace.

The NAIC argued the order “could disrupt well-established processes that ensure fairness and transparency in insurance markets” and “introduces legal uncertainty, which may weaken the insurance market by delaying business decisions, deterring investment, and postponing essential consumer protections.”

The McCarran-Ferguson Shield

Legal analysts broadly agree that the executive order alone cannot preempt state insurance regulation. Under the McCarran-Ferguson Act, state laws regulating the business of insurance take precedence over federal directives absent explicit congressional action. Executive orders are not laws and typically cannot independently displace state regulatory authority.

Congress has thus far declined to enact federal AI preemption - a proposed 10-year moratorium on state AI laws was defeated in the Senate in 2025 by a nearly unanimous vote.

The practical impact: Despite limited legal force, the executive order creates meaningful uncertainty. Insurers and their compliance teams face the question of whether to continue building state-specific AI governance programs - investing time and resources - when those requirements could theoretically be challenged or superseded. The prudent approach, as multiple law firms have advised, is to continue complying with existing state AI laws while monitoring federal developments closely.

The NAIC AI Systems Evaluation Tool: Examination Gets Real

Perhaps the most practically significant development for insurers in 2026 is the NAIC’s AI Systems Evaluation Tool - a structured framework that state regulators will use to assess insurer AI governance during market conduct examinations.

The Big Data and Artificial Intelligence (H) Working Group developed the tool through 2025, exposing it for public comment and refining it through multiple drafts. In February 2026, the NAIC published a summary of its pilot project, with 10 insurance companies participating. Iowa Insurance Commissioner Doug Ommen has been leading the effort, emphasizing that the tool is optional for regulators but represents a significant new examination capability.

The evaluation tool is a risk-based framework with questionnaires and checklists covering AI governance structure and board oversight, model development and validation processes, data quality and bias testing, third-party vendor management, consumer impact assessment, and documentation and audit trail adequacy.
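For compliance teams preparing for examination, the six domains above can be turned into a simple internal gap analysis. The sketch below is a hypothetical self-assessment, not the NAIC's actual tool: the domain names follow the list in this article, and the pass/fail scoring is an illustrative assumption.

```python
# Hypothetical readiness check against the six evaluation-tool domains
# described above. Domain names follow the article; the boolean scoring
# is an illustrative simplification, not the NAIC's methodology.

EVALUATION_DOMAINS = [
    "governance_structure_and_board_oversight",
    "model_development_and_validation",
    "data_quality_and_bias_testing",
    "third_party_vendor_management",
    "consumer_impact_assessment",
    "documentation_and_audit_trail",
]

def readiness_gap(assessment: dict) -> list:
    """Return the domains where supporting evidence is missing."""
    return [d for d in EVALUATION_DOMAINS if not assessment.get(d, False)]

# Example: an insurer with governance and documentation in place but no
# formalized bias testing or vendor oversight yet.
status = {
    "governance_structure_and_board_oversight": True,
    "model_development_and_validation": True,
    "data_quality_and_bias_testing": False,
    "third_party_vendor_management": False,
    "consumer_impact_assessment": True,
    "documentation_and_audit_trail": True,
}
print(readiness_gap(status))
```

Even a rough exercise like this helps prioritize remediation before a regulator runs the real questionnaire.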

Why This Matters for Actuaries

The evaluation tool explicitly includes actuarial functions in its governance expectations. Regulators will be looking at how actuarial teams interact with AI systems used in rating, reserving, and underwriting.

For actuaries who develop or validate predictive models, this means their work product - model documentation, validation reports, assumption justifications - may become subject to regulatory examination under an AI governance framework, not just traditional actuarial standards of practice.

The NAIC’s Casualty Actuarial and Statistical (C) Task Force has also been developing regulatory training on the review of generalized linear models (GLMs), with a February 2026 session titled “For the Love of LLMs” exploring how large language models intersect with actuarial science. The message is clear: regulators are building expertise to evaluate the actuarial tools insurers are using.

Toward an NAIC Model Law?

The NAIC Big Data and AI Working Group issued a Request for Information in 2025 regarding a potential NAIC Model Law on the Use of Artificial Intelligence in the Insurance Industry. This would represent a significant escalation from the current bulletin framework to a codified model law that states could adopt as legislation.

A model law on third-party data and models is anticipated later in 2026, potentially including licensing requirements for AI vendors serving the insurance industry. This would extend regulatory scrutiny beyond insurers themselves to the vendors that provide the data, algorithms, and predictive models underlying insurance decisions.

The move toward a model law reflects a recognition that while the bulletin has been effective at establishing expectations, a model law would provide stronger enforcement mechanisms and greater uniformity across states. However, the federal preemption debate complicates this trajectory - regulators must balance the desire for stronger state-level frameworks with the risk of triggering federal pushback.

What This Means by Line of Business

P&C pricing and underwriting: AI is most deeply embedded in personal lines rating (auto, homeowners) and commercial lines underwriting, where predictive models influence risk selection and premium determination. Regulators have been particularly focused on whether AI pricing models produce disparate impacts on protected classes. The NAIC evaluation tool will likely scrutinize the actuarial justification for rating variables derived from AI systems, the statistical testing for unfair discrimination (disparate impact analysis), documentation of model development and ongoing monitoring, and the governance process for approving AI-derived rating factors.
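To make the disparate impact analysis mentioned above concrete, one common screening approach compares favorable-outcome rates across groups using an adverse impact ratio (the "80% rule" heuristic from employment law, often borrowed as a first-pass screen). The sketch below is illustrative: the 0.8 threshold, group labels, and counts are assumptions, not regulatory standards, and a low ratio triggers further actuarial review rather than an automatic finding of unfair discrimination.

```python
# Illustrative adverse impact ratio screen for model outcomes by group.
# The 0.8 threshold is a common heuristic, not a regulatory standard.

def adverse_impact_ratio(outcomes: dict) -> dict:
    """outcomes maps group -> (favorable_count, total_count).
    Returns each group's favorable rate divided by the highest rate."""
    rates = {g: fav / total for g, (fav, total) in outcomes.items()}
    reference = max(rates.values())
    return {g: rate / reference for g, rate in rates.items()}

# Hypothetical approval counts by group
data = {"group_a": (800, 1000), "group_b": (620, 1000)}
ratios = adverse_impact_ratio(data)
flagged = [g for g, r in ratios.items() if r < 0.8]  # screening threshold
print(ratios)   # {'group_a': 1.0, 'group_b': 0.775}
print(flagged)  # ['group_b']
```

A flagged group is a prompt to examine whether an actuarially justified rating variable, rather than an unfairly discriminatory one, explains the gap.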

Claims: AI-driven claims triage, fraud detection, and settlement recommendations are all within scope. The bulletin requires that claims decisions supported by AI comply with unfair claims settlement practice standards. Automated denial or settlement processes are likely to receive intense scrutiny.

Life and health underwriting: Accelerated underwriting programs that use AI to bypass traditional medical examination represent a high-visibility use case. Regulators are watching for both unfair discrimination and accuracy issues in these programs.

Marketing and distribution: AI-powered customer segmentation, lead scoring, and marketing targeting must comply with unfair trade practices standards. This area receives less attention than underwriting and claims but is explicitly within the bulletin’s scope.

Building an AI Governance Program: Practical Steps

For actuaries and compliance professionals tasked with building or enhancing their organization’s AI governance program in 2026, these are the priority areas.

Inventory your AI systems. Many insurers lack a comprehensive inventory of all AI systems in use across the organization. This includes not just sophisticated machine learning models but also simpler predictive algorithms, automated decision rules, and third-party tools that may qualify as “AI systems” under the broad NAIC definition. The first step in any governance program is knowing what you’re governing.
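A minimal inventory can start as a structured record per system, broad enough to sweep in simple predictive algorithms and vendor tools as the NAIC definition does. The field names below are hypothetical, a starting-point sketch rather than a prescribed schema.

```python
# Minimal sketch of an AI system inventory record. Field names are
# hypothetical; the broad scope (including simple algorithms and
# third-party tools) mirrors the NAIC-style definition of "AI system".

from dataclasses import dataclass
from typing import Optional, List

@dataclass
class AISystemRecord:
    name: str
    lifecycle_phase: str            # e.g. "rating", "claims", "marketing"
    owner: str                      # accountable business function
    is_third_party: bool = False
    vendor: Optional[str] = None
    documentation_location: Optional[str] = None

def third_party_systems(inventory: List[AISystemRecord]) -> List[str]:
    """Systems whose compliance evidence depends on vendor cooperation."""
    return [s.name for s in inventory if s.is_third_party]

inventory = [
    AISystemRecord("auto_rating_glm", "rating", "actuarial"),
    AISystemRecord("claims_fraud_score", "claims", "SIU",
                   is_third_party=True, vendor="ExampleVendor"),
]
print(third_party_systems(inventory))  # ['claims_fraud_score']
```

Filtering the inventory for third-party systems, as above, also feeds directly into the vendor oversight work the bulletin requires.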

Establish cross-functional oversight. The NAIC explicitly expects governance committees with actuarial, data science, underwriting, claims, compliance, and legal representation. If your organization hasn’t established this structure, 2026 is the year to do it - before the examination tool is deployed in your state.

Document everything. Regulatory examinations will focus on documentation: model development methodology, data sources, validation results, bias testing, consumer impact assessment, and ongoing monitoring. Actuaries who have been developing models informally should formalize their documentation practices now.

Address third-party vendor risk. Audit your vendor contracts for AI-related provisions. Do you have audit rights? Can you access model documentation? Will vendors cooperate with regulatory inquiries? If not, negotiate these provisions before your next examination.

Implement bias testing. Develop and document a process for testing AI systems for unfair discrimination. This doesn’t necessarily require perfect outcomes - regulators understand that actuarially justified risk differentiation is appropriate - but it does require demonstrating that you’ve looked for and addressed potential disparate impacts.
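"Develop and document" means the test and its paper trail travel together. One way to sketch that, under assumptions of my own (a two-proportion z-test as the statistic, a 0.05 materiality threshold, and hypothetical record fields), is to have every bias test emit an audit record suitable for examination files:

```python
# Sketch of a bias-testing step that produces an audit record alongside
# the statistic. The two-proportion z-test, the 0.05 threshold, and the
# record fields are illustrative choices, not regulatory requirements.

import math
from datetime import date

def two_proportion_z(fav_a, n_a, fav_b, n_b):
    """Two-sided z-test for a difference in favorable-outcome rates."""
    p_a, p_b = fav_a / n_a, fav_b / n_b
    p_pool = (fav_a + fav_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))  # two-sided
    return z, p_value

def bias_test_record(model, z, p_value):
    """Assemble the documentation an examiner would ask to see."""
    return {
        "model": model,
        "test": "two_proportion_z",
        "z_statistic": round(z, 3),
        "p_value": round(p_value, 4),
        "material_difference": p_value < 0.05,  # illustrative threshold
        "run_date": date.today().isoformat(),
    }

# Hypothetical favorable-outcome counts by group
z, p = two_proportion_z(800, 1000, 620, 1000)
record = bias_test_record("auto_rating_glm", z, p)
print(record)
```

A statistically material difference is not itself a violation; the record is what lets the insurer demonstrate it looked, investigated, and either justified or remediated the gap.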

Monitor the federal-state dynamic. The executive order, Commerce Department review, and potential DOJ litigation will unfold throughout 2026. Assign someone to track these developments and assess their implications for your compliance program. Don’t freeze your governance efforts pending federal clarity - the state-based framework is the current reality and is likely to endure.