From reviewing all 33 RFI comment letters submitted to the NAIC's Big Data and Artificial Intelligence Working Group and tracking the Working Group's meeting minutes over the past year, one pattern becomes unmistakable: the gap between industry rhetoric ("we support responsible AI") and the specific carve-outs requested in formal regulatory comments reveals exactly where the real compliance friction lies. Trade associations that publicly champion AI governance simultaneously argue in their comment letters that the existing Model Bulletin framework is sufficient, that company-size thresholds should exempt smaller carriers, and that third-party vendor obligations should not extend beyond existing contractual arrangements.

This tension sits at the center of one of the most consequential regulatory decisions facing the insurance industry in 2026. The NAIC adopted its Model Bulletin on the Use of Artificial Intelligence Systems by Insurers in December 2023, establishing voluntary guidance on AI governance, transparency, and consumer protection. Twenty-five states have since adopted or referenced the bulletin. But voluntary guidance has inherent limitations: enforcement mechanisms vary widely across adopting states, there is no standardized compliance framework, and non-adopting states create regulatory gaps that carriers operating nationally must navigate.

On May 12, 2025, the Big Data and Artificial Intelligence (H) Working Group took a significant step by releasing a Request for Information asking stakeholders whether the NAIC should pursue a binding model law governing insurers' use of AI. The 45-day comment period, which closed on June 30, 2025, drew 33 written submissions from state departments of insurance, consumer advocacy groups, trade associations, consultants, and technology vendors. The responses illuminate five fault lines that will shape the actual model law text: the scope question (all lines vs. line-of-business adoption), third-party vendor liability, company-size thresholds, the definition of "AI system" itself, and the relationship between a new model law and existing regulatory frameworks like Colorado's SB 21-169.

From Principles to Bulletin to Model Law: The Regulatory Timeline

Understanding the current inflection point requires tracing the NAIC's progression on AI oversight. The journey began with the NAIC Principles on Artificial Intelligence, adopted in August 2020, which established five high-level principles: fairness, accountability, compliance, transparency, and security. These principles were deliberately aspirational, providing a philosophical foundation without prescriptive requirements.

The Model Bulletin, adopted on December 4, 2023, moved substantially further. It directs insurers to develop and maintain written AI governance programs, establish board-level or senior management oversight of AI systems, conduct ongoing testing for unfair discrimination, and maintain documentation sufficient to explain AI-driven decisions to regulators. Critically, as a bulletin rather than a model law or model regulation, it carries no independent statutory authority. Each adopting state must integrate the bulletin's guidance into its existing regulatory framework, leading to significant variation in how the guidance is interpreted and enforced.

As of March 2026, 25 states have adopted the Model Bulletin or issued comparable guidance. Adopting states include Connecticut, Delaware, Hawaii, Kentucky, Maryland, Massachusetts, Nebraska, New Jersey, North Carolina, Oklahoma, and Pennsylvania, among others. The adoption map reveals a geographic and political pattern: states with active consumer protection agendas and larger insurance markets moved earliest, while states with smaller insurance departments and fewer resources have been slower to act. This uneven adoption is itself one of the primary arguments in favor of a model law; proponents argue that binding legislation would create the uniform national framework that voluntary guidance cannot achieve.

The RFI, released on May 12, 2025, asked stakeholders to address several foundational questions: whether uniform statutory requirements are necessary, how a model law should address governance, transparency, and accountability, whether obligations should differ by company size, how third-party AI vendors should be regulated, and whether existing state laws or industry frameworks could serve as templates for national regulation.

Inside the 33 Comment Letters: Where Industry and Regulators Diverge

The 33 comment letters submitted in response to the RFI represent a comprehensive cross-section of insurance industry stakeholders, and the fault lines they expose are instructive. Patterns we have tracked across these submissions reveal that the debate is not simply "for" or "against" a model law. Instead, the disagreements cluster around specific operational questions that will determine whether any resulting legislation meaningfully changes industry practice or merely codifies the status quo.

The Trade Association Position: Bulletin Is Sufficient

The most organized opposition to a model law came from a joint letter submitted by the American Property Casualty Insurance Association (APCIA), the American Council of Life Insurers (ACLI), and the National Association of Mutual Insurance Companies (NAMIC). This joint submission, representing carriers writing the vast majority of U.S. premiums, argued that the Model Bulletin framework remains the appropriate regulatory vehicle for AI oversight. The letter contended that insurers are already subject to extensive regulation through existing laws, including the Unfair Claims Settlement Practices Act, market conduct examination authority, and state-specific unfair trade practices statutes, and that these frameworks provide sufficient legal authority to address AI-related consumer harm without additional legislation.

The APCIA's individual comment letter went further, explicitly stating that "moving forward with a model law is unnecessary at this time" and encouraging the Working Group to "continue focusing on the development of additional guidance for insurers' use of AI within the existing standards, including the model bulletin." This position reflects a clear industry preference for regulatory flexibility: bulletins can be updated quickly, allow for interpretation that accounts for carrier-specific circumstances, and do not carry the enforcement mechanisms (including penalties and compliance mandates) that accompany statutory requirements.

The Academy of Actuaries: Cautious Support for Statutory Framework

The American Academy of Actuaries submitted a comment letter, signed by casualty, health, life, and risk management vice presidents, that took a notably different tone. The Academy agreed that the three-pillar framework presented in the RFI (governance, transparency, and accountability) "is appropriate" and represents "the key overarching themes that should be considered when developing model legislation." This represents a significant actuarial professional body endorsing the conceptual shift from voluntary guidance to statutory requirements.

The Academy's letter also raised a critical definitional concern: the need for precise definitions of "unfair discrimination" and "unethical practices" as applied to AI systems. This is not an abstract semantic exercise. Actuaries rely on these definitions when building and validating predictive models, and ambiguity in the statutory text would create uncertainty about which modeling practices are compliant. The distinction between actuarially justified risk differentiation and unfair discrimination based on protected characteristics is foundational to pricing and underwriting, and a model law that does not draw this line clearly would generate significant compliance friction.

Consumer Advocacy Groups: Stronger Enforcement, Broader Scope

Consumer advocacy organizations, including the Center for Economic Justice and the National Consumer Law Center, pushed for a model law with substantially broader scope and stronger enforcement mechanisms than what the trade associations supported. These comment letters argued that voluntary guidance has not produced meaningful improvements in transparency or accountability, pointing to documented cases where algorithmic decision-making in claims, underwriting, and pricing has produced disparate outcomes for protected groups without adequate regulatory response.

Consumer groups specifically advocated for mandatory pre-deployment testing of AI systems, public disclosure of AI use in consumer-facing decisions, a private right of action for consumers harmed by AI-driven decisions, and requirements that extend to all lines of business rather than a phased, line-by-line approach. These positions represent the most expansive vision for a model law and would, if adopted, create compliance obligations significantly beyond what the current Model Bulletin contemplates.

State Regulators: The Internal Divide

Perhaps the most revealing fault line runs through the NAIC membership itself. Wisconsin Insurance Commissioner Nathan Houdek, who chairs the Big Data and AI Working Group, has acknowledged the division publicly, noting that the Working Group is examining whether to move forward with "some type of model law or regulation" and that "the jury is still out; you see a split in the NAIC membership. Some states think we need to be doing more, and then other states have a very different opinion."

This internal divide reflects genuine disagreements about regulatory philosophy and practical capacity. States with well-resourced insurance departments (California, New York, Colorado, Connecticut) have been at the forefront of AI regulation and tend to favor stronger oversight mechanisms. States with smaller departments and fewer technical staff express concern about the compliance monitoring burden that a model law would impose on regulators themselves, not just on the carriers they oversee. The NAIC's consensus-based decision-making process means that a deeply divided membership may struggle to produce model law text that commands the supermajority support needed for effective national adoption.

The Three Pillars: What a Model Law Would Actually Require

The RFI structured its inquiry around three proposed regulatory pillars: governance, transparency, and accountability. Each pillar, if translated into statutory language, would impose specific obligations on insurers that go beyond the current Model Bulletin's guidance.

Pillar One: Governance

The governance pillar would require insurers to establish board-level or senior management oversight of AI systems, maintain written policies governing AI development, procurement, and deployment, and designate responsible individuals or committees accountable for AI outcomes. This continues a trend we have observed across financial services regulation, where firms are expected to demonstrate that technology risk management has the same organizational prominence as underwriting or investment risk management.

For actuarial teams, the governance pillar has direct operational implications. Predictive models used in pricing, reserving, and underwriting would need documented governance frameworks that specify approval workflows, performance monitoring protocols, and escalation procedures for model failures or unexpected outputs. This aligns closely with existing requirements under ASOP No. 56 (Modeling) and the broader model risk management frameworks that many carriers have already adopted. The incremental compliance burden depends heavily on how prescriptive the statutory language becomes. A principles-based governance requirement may add little to what well-run carriers already do; a prescriptive requirement specifying minimum committee structures, reporting frequencies, and documentation formats would impose significant new costs.

Pillar Two: Transparency

The transparency pillar addresses documentation of decision-making processes, model logic, and consumer communications. In practice, this would require insurers to maintain sufficient documentation to explain AI-driven decisions to regulators, and potentially to consumers, in comprehensible terms.

This pillar raises the most technical challenges for actuarial practice. Complex machine learning models, particularly deep neural networks and ensemble methods like gradient-boosted decision trees, do not produce the kind of intuitive coefficient-based explanations that regulators are accustomed to seeing in GLM-based rate filings. Satisfying a statutory transparency requirement could push insurers toward either (a) restricting their use of complex models to applications where transparency is not required, or (b) investing heavily in explainability tools and techniques such as SHAP values, partial dependence plots, and model-agnostic interpretability methods. Both approaches have costs: the first limits the actuarial toolkit unnecessarily, while the second requires substantial investment in technical infrastructure and actuarial training.
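To make the model-agnostic techniques mentioned above concrete, a one-dimensional partial dependence calculation can be sketched in a few lines of pure Python. The scoring function, variable names, and portfolio below are hypothetical illustrations, not any carrier's actual model; in practice the same loop would wrap a fitted GBM or neural network.

```python
def partial_dependence(model, data, feature, grid):
    """Average the model's output over the data with one feature pinned
    to each grid value -- a model-agnostic view of that feature's
    marginal effect (the idea behind partial dependence plots)."""
    curve = []
    for value in grid:
        total = 0.0
        for row in data:
            pinned = dict(row)
            pinned[feature] = value  # hold the feature fixed for every record
            total += model(pinned)
        curve.append(total / len(data))
    return curve

# Hypothetical frequency-style score combining driver age and territory.
def toy_model(row):
    base = 0.10
    age_effect = 0.002 * max(0, 45 - row["driver_age"])
    terr_effect = 0.03 if row["territory"] == "urban" else 0.0
    return base + age_effect + terr_effect

portfolio = [
    {"driver_age": 25, "territory": "urban"},
    {"driver_age": 40, "territory": "rural"},
    {"driver_age": 60, "territory": "urban"},
]

curve = partial_dependence(toy_model, portfolio, "driver_age", [25, 45, 65])
```

The appeal of this class of technique for regulatory explanation is that it treats the model as a black box: the same code runs unchanged whether the underlying model is a GLM, a gradient-boosted ensemble, or a neural network.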

The transparency pillar also intersects with trade secret concerns. Several comment letters, particularly from technology vendors and consulting firms, argued that AI transparency requirements must be balanced against the legitimate need to protect proprietary model architectures, training data compositions, and algorithmic innovations. The challenge of requiring meaningful transparency while preserving intellectual property protections is not unique to insurance, but the regulated nature of the industry and the direct consumer impact of AI-driven decisions make this balance particularly consequential.

Pillar Three: Accountability

The accountability pillar would establish measurable performance standards, testing protocols, and regulatory cooperation requirements. It would require insurers to demonstrate that their AI systems produce outcomes consistent with applicable laws, including fair lending and anti-discrimination statutes, and to maintain remediation processes for consumer harm caused by AI-driven decisions.

For actuaries, the accountability pillar has the most direct connection to existing professional standards. ASOP No. 12 (Risk Classification) already requires actuaries to consider whether classification systems produce results that are actuarially sound and do not unfairly discriminate. ASOP No. 56 requires documentation and validation of models used in actuarial work. A model law accountability requirement would elevate these professional standards into statutory obligations, creating enforcement mechanisms that go beyond the Actuarial Board for Counseling and Discipline (ABCD) and extend to state regulatory sanctions, including fines, consent orders, and market conduct actions.

The Five Fault Lines That Will Shape the Model Law Text

Reading across all 33 comment letters, five recurring disagreements emerge as the issues most likely to determine the final content and effectiveness of any model law.

1. Scope: All Lines vs. Line-of-Business Adoption

The RFI explicitly asked whether a model law should apply to all insurance lines simultaneously or be adopted on a line-of-business basis. Consumer groups and several state regulators favor universal application, arguing that AI risks are not confined to any single line and that a patchwork approach would create compliance gaps. Trade associations and many carriers favor a phased approach, noting that AI applications in personal auto underwriting raise different concerns than AI use in commercial property risk assessment or life insurance accelerated underwriting. The line-of-business approach would allow regulators to develop line-specific guidance that accounts for the differing data environments, consumer exposure profiles, and existing regulatory frameworks across P&C, life, and health lines.

2. Third-Party Vendor Liability

The question of how a model law should address AI systems developed and maintained by third-party vendors was among the most contentious topics in the comment letters. Insurers using vendor-supplied models, scoring algorithms, and data enrichment tools often lack visibility into the underlying training data, model architecture, and validation processes. The Model Bulletin already states that insurers are responsible to regulators and consumers for the results of AI models they use, regardless of whether a third party developed the system. But translating this principle into statutory language raises practical questions about contractual audit rights, vendor cooperation mandates, and the allocation of liability when a vendor-supplied model produces discriminatory outcomes.

The NAIC advanced this discussion at its 2026 Spring National Meeting by proposing a third-party AI vendor registry, a framework that would give regulators visibility into the vendors supplying AI models and datasets to insurers. The registry is explicitly "not intended to relieve insurers of their existing vendor diligence and management obligations," but it signals that the regulatory infrastructure for vendor oversight is being developed in parallel with the model law deliberation.

3. Company-Size Thresholds and Compliance Costs

Trade associations argued forcefully in their comment letters that compliance obligations should be scaled to company size. The logic is straightforward: a top-20 national carrier with a dedicated data science team, a chief AI officer, and an existing model risk management function can absorb the compliance costs of board-level governance, comprehensive documentation, and regular model testing far more easily than a regional mutual with $50 million in surplus and no in-house data scientists. Without some form of proportional treatment, trade groups argued, a model law would effectively disadvantage smaller carriers and accelerate market concentration.

The proportionality question is genuinely difficult. AI risk does not scale linearly with company size; a small carrier using a single vendor-supplied credit scoring model may have less AI risk exposure than a large carrier deploying dozens of custom models, but a small carrier's limited governance resources also mean that any problems are less likely to be detected internally. The NAIC's AI Systems Evaluation Tool pilot (discussed below) applies "proportionality principles," prioritizing examination of high-risk AI systems while de-emphasizing low-risk back-office applications; this suggests one framework for implementing scaled requirements, but translating it into statutory language that is both fair and enforceable remains a challenge.

4. Defining "AI System"

Multiple comment letters, including the Academy of Actuaries submission, highlighted the fundamental challenge of defining what constitutes an "AI system" for regulatory purposes. The Model Bulletin uses a broad definition that encompasses machine learning, natural language processing, computer vision, and other techniques. But the definition's breadth creates uncertainty about whether traditional actuarial tools, including GLMs, experience rating plans, and credibility-weighted rate adjustments, fall within scope. A model law that inadvertently subjects conventional actuarial pricing methods to AI governance requirements would impose enormous compliance costs without corresponding consumer protection benefits.

This definitional question has real operational stakes. If a Poisson GLM with twelve rating variables qualifies as an "AI system," then every personal auto rate filing in the country triggers the model law's governance, transparency, and accountability requirements. If the definition is limited to machine learning techniques that learn from data without explicit programming, then a carrier using XGBoost for risk scoring is subject to the law while a carrier using a manually specified GLM with the same rating variables is not, even though both models produce identical pricing outcomes. The comment letters reveal broad consensus that the definition must be clarified, but little agreement on where the boundary should be drawn.
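The definitional stakes are easier to see with a concrete rating calculation. The sketch below (with a hypothetical base rate and relativities) shows the multiplicative structure of a log-link Poisson GLM: a transparent table of factors that an XGBoost model could replicate numerically while remaining far harder to summarize. Whether both sit inside or outside the "AI system" boundary is exactly the open question.

```python
# Hypothetical multiplicative rating plan of the kind a Poisson GLM
# with a log link produces: exp(intercept + sum of coefficients) is
# equivalent to a base rate times per-variable relativities.
BASE_RATE = 500.0
RELATIVITIES = {
    "driver_age": {"16-24": 1.60, "25-64": 1.00, "65+": 1.15},
    "territory": {"urban": 1.25, "rural": 0.90},
    "prior_claims": {"0": 1.00, "1+": 1.40},
}

def glm_style_premium(risk):
    """Premium = base rate x product of relativities, i.e. the
    exponentiated linear predictor of a log-link GLM."""
    premium = BASE_RATE
    for variable, level in risk.items():
        premium *= RELATIVITIES[variable][level]
    return round(premium, 2)

premium = glm_style_premium(
    {"driver_age": "16-24", "territory": "urban", "prior_claims": "0"}
)
# 500 x 1.60 x 1.25 x 1.00 = 1000.0
```

Every factor in this calculation can be printed on one page of a rate filing, which is precisely the transparency property that a boosted ensemble with the same inputs lacks.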

5. Relationship to Existing State Laws

Colorado's SB 21-169, signed into law in July 2021, requires insurers using external consumer data and information sources (ECDIS), algorithms, and predictive models to establish governance and risk management frameworks, conduct ongoing monitoring for unfair discrimination, and (starting July 1, 2026 for auto and health lines) submit annual compliance reports. Colorado's framework represents the most prescriptive state-level AI regulation affecting insurance in the United States, and it serves as both a precedent and a point of contention in the model law debate.

Several comment letters argued that Colorado's approach demonstrates that state-level action can be effective without a national model law. Others argued the opposite: that Colorado's unilateral action creates a compliance patchwork that national carriers must navigate alongside the Model Bulletin, the NAIC evaluation tool, and any future model law, and that a uniform national standard would reduce this complexity. The relationship between a potential NAIC model law and existing state-specific legislation is not merely a policy question; it is a legal drafting challenge that will determine whether states adopt the model law as a floor (allowing more restrictive state requirements) or as a ceiling (preempting state-specific legislation).

The Evaluation Tool Pilot: Building the Enforcement Infrastructure

Running in parallel with the model law deliberation is the 12-state pilot of the AI Systems Evaluation Tool, launched on March 2, 2026 and scheduled to run through September 2026. The participating states are California, Colorado, Connecticut, Florida, Iowa, Louisiana, Maryland, Pennsylvania, Rhode Island, Vermont, Virginia, and Wisconsin. This pilot represents the most concrete regulatory action on AI oversight to date and may prove more consequential than the model law debate itself.

The evaluation tool comprises four exhibits. Exhibit A asks insurers to quantify their AI usage across the organization. Exhibit B assesses the insurer's governance and risk assessment framework. Exhibit C requests detailed information on high-risk AI systems, including those used in underwriting, claims, and pricing. Exhibit D focuses on AI data inputs and sources, with Version 4.0 adding a section on reasonable accommodations and policy modifications.

The pilot's operational approach reflects the proportionality principles that may ultimately inform a model law. Regulators are directed to prioritize examination of high-risk AI systems that pose serious consumer or financial threats while de-emphasizing low-risk, back-office applications. Participating states selected insurers based on market share, business lines, and anticipated AI reliance, with pilots primarily focused on property and casualty and life insurance providers.

Industry reaction to the pilot has been sharp. A December 5, 2025 joint letter from trade groups representing life, health, P&C, mutual, and reinsurance insurers raised five objections: that participation is voluntary for regulators but compulsory for selected companies; that the pilot duration was initially undefined; that the tool's applicability to financial versus market conduct exams was unclear; that companies face potential penalties for "negative" findings during the pilot phase; and that the pilot launched before the final tool version underwent full public comment. The letter recommended that "company participation be voluntary and that information gathered be for development of the tool only and not for compliance purposes."

Iowa Insurance Commissioner Doug Ommen responded to these concerns by noting that "at the conclusion of the pilot period, we'll then hear from the pilot group and consider lessons learned from this tool and consider refinements." The tool is expected to be updated based on pilot feedback in September and October 2026, re-exposed for public review, and adopted at the NAIC Fall National Meeting in November 2026.

The connection between the evaluation tool and a potential model law is direct. If the NAIC ultimately drafts a model law, the evaluation tool would likely serve as the primary enforcement mechanism, giving regulators a standardized framework for examining carriers' AI practices during routine market conduct examinations. The pilot, in effect, is building the enforcement infrastructure for a law that has not yet been written.

The NCOIL Parallel Track

Adding complexity to the landscape, the National Council of Insurance Legislators (NCOIL) has introduced its own Model Act Regarding Insurers' Use of Artificial Intelligence. NCOIL's effort proceeds on a separate timeline from the NAIC's model law deliberation, and the two organizations have different constituencies and legislative processes. Where the NAIC is an association of state insurance commissioners (the regulators), NCOIL represents state legislators who actually draft and pass insurance legislation. A model law from either body has no independent legal force; it serves as a template that individual states may choose to adopt, in whole or in part, through their own legislative processes.

The existence of parallel model law efforts from NAIC and NCOIL creates a risk of conflicting frameworks. If both organizations produce model legislation on AI governance and the texts differ substantially, states will face a choice between competing templates, potentially exacerbating the patchwork problem that a model law is supposed to solve. This is a pattern we have seen in other areas of insurance regulation, including cybersecurity and data privacy, where multiple model frameworks have produced state-level variation rather than uniformity.

What This Means for Actuarial Practice

Whether the NAIC produces a model law in 2026 or continues to refine the bulletin-plus-evaluation-tool approach, the practical implications for actuarial teams are substantial. Several areas require attention now, regardless of which regulatory path prevails.

Model Documentation and Governance

The Model Bulletin already directs insurers to maintain written AI governance programs. The trend toward statutory requirements means that actuarial teams should treat this guidance as a de facto requirement rather than an aspirational suggestion. Documentation should cover the full model lifecycle: development rationale, data inputs and sources, validation methodology, performance monitoring metrics, and model decommissioning criteria. ASOP No. 56 provides a professional framework for this documentation, and actuaries should ensure that their modeling practice aligns with both the ASOP and the emerging regulatory expectations.
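One lightweight way to make the lifecycle documentation described above auditable is to capture it as structured data rather than free-form prose. The record layout below is a hypothetical sketch of the elements named in this section, not an official ASOP No. 56 or NAIC template; the field names and statuses are illustrative.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class ModelRecord:
    """Hypothetical lifecycle record for a pricing or underwriting model,
    covering the documentation elements discussed above."""
    name: str
    purpose: str
    data_sources: list
    validation_method: str
    monitoring_metrics: list
    approved_by: str
    approval_date: date
    decommission_criteria: str
    status: str = "in_development"

    def approve(self, approver, when):
        # Record who signed off and when, and advance the lifecycle state.
        self.approved_by = approver
        self.approval_date = when
        self.status = "approved"

record = ModelRecord(
    name="pa_freq_gbm_v2",
    purpose="Personal auto frequency scoring",
    data_sources=["policy admin extract", "claims extract"],
    validation_method="Holdout lift and Gini vs. incumbent GLM",
    monitoring_metrics=["PSI by quarter", "calibration by decile"],
    approved_by="",
    approval_date=None,
    decommission_criteria="Replaced, or sustained drift for two quarters",
)
record.approve("Model Risk Committee", date(2026, 3, 15))
```

Structured records of this kind can be queried during a market conduct examination ("list every production model approved before date X with data source Y"), which free-text documentation cannot support.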

Vendor Oversight and Due Diligence

The comment letters and the vendor registry proposal make clear that third-party model risk is a central concern for regulators. Actuaries who use vendor-supplied models for pricing, underwriting, or claims decisions should strengthen their vendor due diligence processes. This includes contractual provisions for audit rights and data access, independent validation of vendor model outputs, ongoing monitoring for model drift and performance degradation, and documentation of the rationale for vendor selection and the scope of vendor model use.
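One piece of the ongoing-monitoring item above can be made concrete. The sketch below computes a population stability index (PSI), a common drift metric, between a vendor score's distribution at onboarding and the current book. The decile proportions are invented for illustration, and the thresholds in the comment are industry rules of thumb, not regulatory standards.

```python
import math

def population_stability_index(expected, actual):
    """PSI between two score distributions given as bucket proportions.
    A common rule of thumb (an industry convention, not a regulatory
    standard): < 0.10 stable, 0.10-0.25 watch, > 0.25 significant drift."""
    psi = 0.0
    for e, a in zip(expected, actual):
        e = max(e, 1e-6)  # guard against empty buckets
        a = max(a, 1e-6)
        psi += (a - e) * math.log(a / e)
    return psi

# Hypothetical deciles of a vendor-supplied score: uniform at onboarding
# vs. a mildly shifted current book.
baseline = [0.10] * 10
current = [0.06, 0.07, 0.08, 0.09, 0.10, 0.10, 0.11, 0.12, 0.13, 0.14]

psi = population_stability_index(baseline, current)
```

A scheduled check like this, with documented thresholds and escalation steps, is the kind of artifact that turns "ongoing monitoring for model drift" from a policy statement into evidence an examiner can review.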

Explainability Investment

The transparency pillar signals that the era of deploying complex models without meaningful explanation is ending. Actuarial teams using machine learning techniques should invest in interpretability tools, including SHAP, LIME, and partial dependence analysis, and develop standard protocols for explaining model outputs to regulators, underwriters, and (potentially) consumers. This investment is prudent regardless of the model law outcome; even under the current bulletin framework, regulators can and do ask questions about how AI-driven decisions are made during market conduct examinations.

Compliance Budget Planning

The comment letters from trade associations consistently highlighted compliance costs as a concern, particularly for smaller carriers. Actuarial departments should be involved in compliance cost estimation, since they are best positioned to assess the operational impact of governance, documentation, and testing requirements on the modeling function. For carriers operating nationally, the cost analysis should account for the possibility of multiple overlapping frameworks: the NAIC model law (or revised bulletin), Colorado's SB 21-169 compliance reporting (effective July 2026 for auto and health), state-specific amendments to any adopted model law, and the NCOIL model act if adopted by different states.

The AI Definition Question and Actuarial Methods

Actuaries should actively engage with the definitional question of what constitutes an "AI system." The Academy of Actuaries has already signaled the importance of this issue in its comment letter. If conventional actuarial methods, such as GLMs and experience rating, are swept into the definition, the compliance burden on pricing and reserving actuaries would increase dramatically. Professional organizations, including the Academy, the CAS, and the SOA, should work with the NAIC to develop definitions that distinguish between traditional actuarial tools and the machine learning and artificial intelligence systems that the regulatory framework is intended to address.

Timeline and Outlook

The regulatory trajectory has several visible milestones. The 12-state evaluation tool pilot runs through September 2026, with tool refinements expected in September and October. The revised tool is slated for re-exposure and public comment, with formal adoption targeted at the NAIC Fall National Meeting in November 2026. The model law deliberation does not have a fixed timeline, but the Working Group's progress depends on whether the membership can bridge the internal divide that Commissioner Houdek has described.

From tracking this process over the past year, several outcomes appear likely. First, the evaluation tool will be adopted in some form at the Fall National Meeting; the pilot infrastructure and multi-state participation create institutional momentum that would be difficult to reverse. Second, the model law debate will continue into 2027, because the fault lines exposed by the 33 comment letters are deep enough to resist quick resolution. Third, regardless of what happens at the NAIC level, state-level activity will continue: Colorado's compliance reporting deadlines are set, additional states will adopt the Model Bulletin, and some states may pursue independent AI legislation without waiting for a national framework.

For actuaries and insurance executives, the strategic implication is clear: treat the Model Bulletin's requirements as the minimum compliance baseline, invest in governance and documentation infrastructure that can accommodate future statutory requirements, and engage actively in the comment and deliberation process through professional organizations. The question is no longer whether AI in insurance will be regulated, but how prescriptive, uniform, and enforceable that regulation will be.

Further Reading