From tracking NAIC working group agendas across seven consecutive meetings, the vendor registry proposal stands out as the first structural move beyond principles-based guidance into supply-chain-level oversight. The NAIC's Third-Party Data and Models (H) Working Group advanced a draft regulatory framework at the Spring 2026 National Meeting in San Diego (March 22-25) that would require AI model vendors to register with state insurance departments before carriers can use their products in consumer-facing functions. This is not a refinement of the 2023 Model Bulletin, which addressed insurer conduct. It is a new regulatory surface that reaches upstream into the vendor ecosystem powering underwriting, pricing, claims automation, and fraud detection across the insurance industry.

The proposal arrives at a moment when the third-party AI vendor market for insurance has grown well beyond the handful of analytics providers that dominated a decade ago. Companies like Verisk, EXL, Quantiphi, and Cape Analytics (now part of Moody's) supply models that directly influence rate filings, underwriting decisions, and claims outcomes at dozens of carriers simultaneously. When a single vendor's pricing model is embedded in five or ten carriers' rating plans, the regulatory question shifts from "Is this insurer using AI responsibly?" to "Who built the model, what governance standards did they follow, and can regulators see the documentation?"

That is exactly the question the vendor registry is designed to answer. This article traces the framework's structure, how it differs from existing regulatory tools, the substantial industry opposition it has generated, and what actuaries working in model validation, pricing, and governance roles should be preparing for now.

Timeline: How We Got Here

The vendor registry did not materialize overnight. The NAIC has been building toward third-party oversight through a series of deliberate steps over the past two years.

In December 2023, the NAIC adopted the Model Bulletin on the Use of Artificial Intelligence Systems by Insurers, which established governance expectations for carriers deploying AI. The Model Bulletin required insurers to maintain documented AI programs, perform ongoing monitoring, and manage third-party vendor relationships. However, it placed the compliance burden entirely on insurers, with no mechanism for regulators to engage vendors directly.

By mid-2025, the NAIC had formed the Third-Party Data and Models (H) Working Group as a dedicated body under the Innovation, Cybersecurity, and Technology (H) Committee. The Working Group's charge was specific: develop a framework that would give state regulators direct visibility into the vendors supplying models and data to insurers.

On December 9, 2025, at the Fall National Meeting, the Working Group exposed the first draft of the Third-Party Data and Model Regulatory Framework, opening a 60-day public comment period through February 6, 2026. The response was substantial: dozens of comment letters arrived from carriers, vendors, trade associations, and consumer groups.

On February 26, 2026, the Working Group held a virtual meeting to review submitted comments. Three weeks later, at the Spring 2026 National Meeting, the Working Group continued refining the framework, signaling that the registration requirement would remain in the proposal despite vocal industry opposition.

What the Registry Framework Actually Requires

The draft Third-Party Data and Model Regulatory Framework has three core components: registration, governance attestation, and ongoing regulator access. Each operates differently from existing regulatory tools.

Mandatory Vendor Registration

Under the framework, third-party data and model vendors whose products are used in insurer functions with "direct consumer impact" must register with state insurance departments before their products can be deployed by carriers. Registration requires vendors to submit entity information and model documentation. The Working Group has been explicit that this is not a licensure regime, but as Carlton Fields noted, the Working Group "does foresee the registration requirement remaining in the framework as a way to identify, track, and ensure minimum governance requirements related to third-party providers."

The distinction between registration and licensure matters. Licensure would create a formal regulatory regime with ongoing examinations, capital requirements, and potential revocation proceedings. Registration, as currently proposed, creates a transparency mechanism: regulators get to see who is providing models and data, what governance standards those vendors follow, and whether specific models have been flagged or disapproved.

Scope of Coverage

The framework applies when third-party data or models are used in insurer functions that have direct consumer impact. The draft explicitly lists six covered functions:

  • Pricing: Rate development, rating algorithms, and loss cost models
  • Underwriting: Risk selection, classification, and automated decision systems
  • Claims: Claims triage, settlement models, and subrogation analytics
  • Utilization review: Health insurance treatment authorization models
  • Marketing: Targeted marketing and lead scoring using AI-driven segmentation
  • Fraud detection: Pattern recognition and anomaly detection models

This scope covers nearly every function where AI models from third-party vendors are currently deployed in production at insurance carriers. The breadth is intentional: the Working Group is trying to map the full vendor supply chain, not just high-profile use cases.

Governance Program Requirements and Annual Attestation

Registered vendors must maintain a documented governance program and provide an annual attestation confirming adherence to the program's requirements. The framework does not prescribe the specific structure of the governance program, but it establishes minimum expectations for documentation, model testing, and data governance practices.

Third-party model filings receive the same confidentiality treatment as insurer proprietary information and trade secrets. This provision was added in response to early vendor concerns about intellectual property exposure. The framework also gives insurance regulators direct access to third-party data when needed, strengthening oversight beyond what is available through insurer-mediated document requests.

How the Registry Differs from the Model Bulletin and the AI Evaluation Pilot

Insurance AI regulation in 2026 involves three distinct NAIC initiatives operating in parallel. Understanding how they relate to each other is essential for compliance planning.

Model Bulletin (Dec 2023)
  • Primary target: Insurers
  • Regulatory mechanism: Guidance bulletin adopted by states
  • Adoption status: ~24 states plus D.C. have adopted
  • Third-party visibility: Indirect (through insurer vendor management)
  • Enforcement basis: State regulatory authority over insurers

AI Evaluation Tool Pilot (Mar-Sep 2026)
  • Primary target: Insurers
  • Regulatory mechanism: Examination tool used within existing exam authority
  • Adoption status: 12-state pilot running through September 2026
  • Third-party visibility: Indirect (Exhibit D asks about data sources)
  • Enforcement basis: Existing financial and market conduct exam authority

Vendor Registry Framework (Proposed)
  • Primary target: Third-party vendors
  • Regulatory mechanism: Registration-based framework with governance requirements
  • Adoption status: Exposure draft; comments reviewed; refinement ongoing
  • Third-party visibility: Direct (vendors register and attest to governance)
  • Enforcement basis: New registration requirement (authority still debated)

The critical difference is the regulatory surface. The Model Bulletin and the AI Evaluation Tool pilot both operate through the insurer. If a regulator wants to understand a third-party model, the insurer is the intermediary, producing whatever documentation it obtained from the vendor during its own due diligence process. The vendor registry creates a direct channel between regulators and vendors, bypassing the insurer intermediary for the first time.

This matters because the current system has a well-known limitation: insurers can only produce what vendors choose to share. If a vendor treats its model architecture, training data composition, or validation methodology as proprietary, the insurer's documentation may consist of marketing materials and high-level model summaries rather than the kind of technical detail a regulator (or an actuarial model validation team) would need for a meaningful assessment.

The SR 11-7 Parallel: Lessons from Banking

The insurance industry is not the first sector to grapple with third-party model oversight. The Federal Reserve's Supervisory Letter SR 11-7, issued in April 2011, established model risk management expectations for banking organizations that explicitly cover vendor-supplied models.

SR 11-7 requires banks to validate all models used in material decision-making, including those purchased from or developed by third parties. The guidance states that "vendor models pose unique challenges for validation and other model risk management activities because the modeling expertise may be external to the bank and because some components of the third-party model may be proprietary." Banks are expected to validate vendor models with the same rigor applied to internally developed models, even when proprietary constraints limit access to model internals.

The insurance vendor registry borrows the conceptual framework from SR 11-7 but adds a structural element that banking regulation lacks: a centralized registration mechanism. Banks are expected to maintain their own vendor model inventories and validation programs. The NAIC's proposal would create a regulator-maintained registry that functions as a centralized map of the vendor ecosystem across the entire insurance industry.

This distinction reflects the different market structures. Banking regulators oversee a relatively concentrated set of institutions with large, well-resourced model risk management teams. Insurance regulators oversee thousands of carriers across 56 jurisdictions, many of which lack dedicated model validation functions. A centralized registry provides smaller state departments with a shared resource they could not build independently.

Several features of SR 11-7 are conspicuously absent from the NAIC proposal in its current form:

  • No quantitative validation requirements: SR 11-7 expects banks to perform outcomes testing, sensitivity analysis, and benchmarking on vendor models. The NAIC framework focuses on governance documentation without specifying validation methodologies.
  • No model tiering: SR 11-7 expects validation intensity to scale with model materiality and complexity. The NAIC framework does not yet define risk tiers for vendor models.
  • No ongoing monitoring mandate for vendors: SR 11-7 requires banks to monitor vendor model performance continuously. The NAIC framework requires an annual attestation from vendors, with no ongoing monitoring requirement between attestation periods.

From tracking both the banking and insurance regulatory trajectories, the NAIC's approach appears to be deliberately starting with transparency (registration and attestation) before layering on prescriptive validation requirements. If the banking analogy holds, those requirements will come, but the Working Group is avoiding the political cost of proposing them before the registry itself is established.

Industry Opposition: Five Key Objections

The comment period generated substantial opposition from both carriers and vendors. Based on the Alston & Bird summary of the Spring 2026 meeting and the Mondaq coverage of the Working Group session, the opposition clusters around five arguments.

1. Legal Authority to Regulate Vendors Directly

Multiple commenters questioned whether state insurance departments have the legal authority to impose registration requirements on entities that are not licensed insurers, producers, or other traditional regulated entities. Insurance regulatory authority derives from state insurance codes, which generally authorize oversight of insurers and their contractual counterparts. Whether that authority extends to technology vendors that supply analytical tools, rather than insurance products, is an open legal question that no state has tested in court.

The Working Group has responded by framing the registration as a transparency mechanism rather than a regulatory regime, a distinction that may reduce the legal surface area but will likely be tested if the framework is adopted and a vendor declines to register.

2. Scope Is Too Broad

Industry commenters argued that key terms in the framework, including "data," "model," "third-party vendor," and "direct consumer impact," are defined too broadly. Under the current draft, a vendor that provides a simple data feed used as one input into a carrier's internally developed pricing model could be subject to the same registration requirements as a vendor that supplies a fully packaged underwriting decision engine. Several commenters suggested narrowing the scope to specific high-risk use cases (such as pricing and underwriting) rather than covering all six listed functions.

3. Operational Burden and Cost

Vendors raised concerns about the administrative burden of registering in multiple states, particularly if each state implements the framework with different filing requirements, timelines, and governance standards. The lack of reciprocity or portable registration across states means a vendor operating nationally could face 50+ separate registration processes. For smaller vendors and startups, the compliance cost could be a barrier to market entry, potentially concentrating the vendor market among larger established players who can absorb the overhead.

4. Trade Secret and Confidentiality Exposure

Despite the confidentiality provisions in the draft framework, vendors expressed concern that submitting model documentation to state regulators could expose proprietary methodologies. The insurance regulatory environment has a mixed track record on protecting filed information from public records requests, and vendors whose competitive advantage depends on proprietary model architectures view any filing requirement as a risk.

5. Outcome-Based vs. Process-Based Regulation

A recurring theme in the comment letters was a preference for outcome-based regulation, where oversight focuses on whether AI-driven decisions produce fair and accurate results, rather than process-based regulation, where oversight focuses on how models are built and governed. This philosophical divide echoes the debate around the Model Bulletin and the AI Evaluation Tool. The vendor registry is firmly on the process side: it asks "what governance do you have?" rather than "do your models produce fair outcomes?"

The Working Group indicated willingness to discuss narrowing the registration requirements' applicability based on stakeholder feedback, but it also signaled clearly that the registration concept would remain in the framework. The conversation has moved past "should there be a registry?" and into "how should the registry be scoped?"

Implications for the Vendor Ecosystem

If the framework is adopted in its current form, the impact on the third-party vendor market will vary significantly by company size and business model.

Large Established Vendors

Companies like Verisk, EXL, and Moody's (which acquired Cape Analytics in early 2025) already maintain substantial compliance and governance infrastructure. Verisk launched its Commercial GenAI Underwriting Assistant with explicit human-in-the-loop frameworks, and EXL's patent portfolio includes a Governance Hub with 40+ specialized models designed for enterprise compliance. For these vendors, the registry is an incremental administrative requirement, not a structural challenge. Registration may even serve as a competitive advantage by creating a formal, regulator-recognized credential that smaller competitors cannot easily replicate.

Mid-Size Vendors and Specialists

Companies like Quantiphi, which supplies AI models to carriers through platforms such as Dociphi, occupy a middle ground. They have governance capabilities but may not have the dedicated regulatory affairs teams needed to manage multi-state registration. The operational burden of the registry could push these vendors toward consolidation, either being acquired by larger platforms or forming compliance partnerships.

Startups and Emerging Vendors

The registration requirement poses the most significant challenge for early-stage companies. An insurtech startup with a novel claims triage model would need to build governance documentation, register across multiple states, and maintain annual attestation compliance before a single carrier could deploy the model in production. This raises the innovation-chilling concern that multiple commenters flagged: the registry could inadvertently create a barrier to entry that protects incumbents.

What This Means for Actuaries

The vendor registry has direct implications for actuaries in several roles, and the preparation work should start now rather than when a final framework is adopted.

Model Validation and Governance Roles

Actuaries responsible for model validation at carriers will see their scope expand. The current practice at most carriers involves reviewing vendor model documentation during procurement, performing initial validation testing, and conducting periodic reviews. If the vendor registry is adopted, regulators will have independent access to vendor governance documentation, which means they can compare what the vendor tells the regulator against what the vendor told the carrier. Any inconsistencies will surface during examinations.

This creates an imperative for actuarial teams to maintain their own independent validation of vendor models, rather than relying on vendor-supplied documentation as a proxy for validation. ASOP No. 56 already requires actuaries to understand the models they use, including models developed by others, but the practical standard at many carriers has been to accept vendor documentation at face value. The vendor registry raises the bar because regulators will now have a second source of information to compare against.

Pricing Actuaries

Pricing actuaries who incorporate third-party models into rate filings should anticipate additional regulatory questions about model provenance. If a rate filing references a vendor-supplied loss cost model or classification system, the regulator may now have registry information about that vendor's governance program. Questions about model testing methodology, data composition, and validation results could become standard components of rate review, not just when a filing is contested, but as part of routine review.

Chief Actuaries and CROs

At the enterprise level, the vendor registry signals that the NAIC views the vendor supply chain as a systemic risk factor. Chief actuaries and chief risk officers should be evaluating their organization's vendor AI inventory now: how many third-party models are deployed, which vendors supplied them, which functions they support, and what governance documentation exists for each. This inventory is precisely what the AI Evaluation Tool's Exhibit A and Exhibit D are designed to surface, and the vendor registry would give regulators a way to verify the inventory against an independent source.

Practical Steps for Actuarial Teams

Based on the trajectory of the framework and the parallel AI Evaluation Tool pilot, actuarial teams should consider several concrete actions:

  1. Build a vendor AI model inventory. List every third-party model in production, the vendor that supplied it, the business function it supports, and the last date it was independently validated. If this inventory does not exist, that is the most urgent gap to close.
  2. Request vendor governance documentation proactively. Do not wait for the registry to be adopted. Ask vendors for their model development documentation, testing methodology, data governance practices, and any internal model risk management framework they maintain. Vendors that cannot provide this documentation will struggle to register when the time comes.
  3. Review ASOP No. 56 compliance for vendor models. Confirm that actuarial sign-off on vendor models meets the standard's requirements for understanding model limitations, appropriate use, and reliance on others' work. The governance gap between AI deployment speed and standards development is particularly acute for vendor-supplied models that may be updated more frequently than internal review cycles can accommodate.
  4. Map vendor models to the six covered functions. Identify which vendor models would fall within the registry's scope (pricing, underwriting, claims, utilization review, marketing, fraud detection) and prioritize governance review for those models.
  5. Engage with the NAIC comment process. The framework is still being refined. Actuaries and actuarial organizations have an opportunity to shape the final version by submitting comments that address practical implementation concerns, validation methodology expectations, and the appropriate role of actuarial judgment in vendor model oversight.
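Steps 1 and 4 above amount to building a structured inventory and filtering it against the framework's six covered functions. The sketch below shows one minimal way to do that; the field names, vendor names, and the `registry_scope_gaps` helper are illustrative assumptions, not terms drawn from the draft framework itself.

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

# The six covered functions listed in the draft framework.
COVERED_FUNCTIONS = {
    "pricing", "underwriting", "claims",
    "utilization_review", "marketing", "fraud_detection",
}

@dataclass
class VendorModel:
    """One row of a vendor AI model inventory (illustrative fields)."""
    model_name: str
    vendor: str
    business_function: str                       # e.g. "pricing"
    last_independent_validation: Optional[date]  # None = never validated

def registry_scope_gaps(inventory):
    """Return in-scope models with no independent validation on record."""
    return [
        m for m in inventory
        if m.business_function in COVERED_FUNCTIONS
        and m.last_independent_validation is None
    ]

# Hypothetical inventory entries for illustration only.
inventory = [
    VendorModel("LossCost v3", "VendorA", "pricing", date(2025, 6, 1)),
    VendorModel("TriageNet", "VendorB", "claims", None),
    VendorModel("LeadRank", "VendorC", "hr_analytics", None),  # out of scope
]

gaps = registry_scope_gaps(inventory)
# "TriageNet" is flagged: it supports a covered function (claims) but has
# never been independently validated.
```

Even a spreadsheet version of this structure answers the first questions an examiner armed with registry data is likely to ask: which vendor models touch covered functions, and when each was last validated independently of the vendor's own documentation.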

The Bigger Picture: From Principles to Supply-Chain Mapping

The vendor registry represents a fundamental evolution in how insurance regulators approach AI oversight. The 2023 Model Bulletin established principles. The AI Evaluation Tool pilot is building an examination methodology. The vendor registry maps the supply chain.

These three initiatives form a coherent, if still evolving, regulatory architecture. The Model Bulletin tells insurers what is expected. The Evaluation Tool gives regulators a way to verify compliance. The vendor registry ensures that the upstream providers of AI models and data are visible to regulators, rather than hidden behind insurer-vendor contracts that regulators cannot access.

The Model Bulletin has been adopted by approximately 24 states plus the District of Columbia, according to the NAIC's adoption tracker. The AI Evaluation Tool pilot is running across 12 states through September 2026. The vendor registry framework is still in development, with refinement continuing through the Working Group's interim meetings before a potential adoption vote at the Fall 2026 National Meeting.

From a broader regulatory perspective, the vendor registry also reflects a trend visible across financial regulation: the recognition that systemic risk can concentrate in third-party service providers. When the same vendor's catastrophe model is embedded in dozens of carriers' reinsurance purchasing decisions, or when a single vendor's underwriting algorithm drives risk selection at multiple companies writing the same lines of business, the vendor becomes a source of correlated risk that no single-carrier examination can detect. The registry is the NAIC's first attempt to build visibility into that concentration.

Whether the final framework strikes the right balance between transparency and operational burden remains to be seen. The Working Group has shown willingness to narrow the scope based on feedback, but it has also been consistent in maintaining the core registration requirement. For actuaries, the message is clear: the days of treating vendor-supplied models as black boxes that fall outside the scope of actuarial governance are ending. The question is no longer whether regulators will look upstream, but how far upstream they intend to go.

Sources

  1. Alston & Bird, "Key AI, Cybersecurity, and Privacy Takeaways from the NAIC 2026 Spring Meeting" (April 2026)
  2. Mondaq / Eversheds Sutherland, "NAIC Spring 2026 Meeting: Third-Party Data and Models (H) Working Group" (April 2026)
  3. Mondaq / Eversheds Sutherland, "NAIC Fall Meeting Update: Third-Party Data and Models (H) Working Group Exposes Risk-Based Regulatory Framework" (January 2026)
  4. NAIC, Third-Party Data and Models (H) Working Group
  5. NAIC, Big Data and Artificial Intelligence (H) Working Group
  6. NAIC, Third-Party Data and Models Working Group Materials, Spring 2026 National Meeting (March 23, 2026)
  7. NAIC, Big Data and AI Working Group Materials, Spring 2026 National Meeting (March 24, 2026)
  8. NAIC, Model Bulletin: Use of Artificial Intelligence Systems by Insurers (December 2023)
  9. NAIC, Model Bulletin State Adoption Tracker (updated April 2025)
  10. NAIC, Insurance Topics: Artificial Intelligence
  11. Carlton Fields, "NAIC Working Group Begins Sculpting a Framework to Assess Third-Party Data and Models" (2025)
  12. Holland & Knight, "The Implications and Scope of the NAIC Model Bulletin on the Use of AI by Insurers" (May 2025)
  13. McDermott Will & Emery, "State Regulators Address Insurers' Use of AI: State Adoption Tracker"
  14. Federal Reserve, Supervisory Letter SR 11-7: Guidance on Model Risk Management (April 2011)
  15. Fenwick, "NAIC Expands AI Systems Evaluation Tool Pilot Program to 12 States" (2026)
  16. Actuarial Standards Board, ASOP No. 56: Modeling
  17. Fenwick, "Tracking the Evolution of AI Insurance Regulation"
