Having mapped LLM vendor relationships across the top 20 P&C carriers since GPT-4’s launch, we are now seeing the first deliberate multi-vendor architectures take shape in production environments. An IA Capital survey published in May 2026 found OpenAI inside roughly nine out of every ten carrier AI stacks, while Google Gemini was absent from carrier deployments entirely. That level of concentration would be unremarkable for office productivity software, but for models making underwriting recommendations, pricing claims, and generating customer-facing communications, it represents a form of operational risk that the largest carriers have started to address explicitly.

Travelers runs OpenAI for its customer-facing AI Claim Assistant while deploying Anthropic Claude to 10,000 internal engineers and data scientists. AIG orchestrates Palantir Foundry alongside Anthropic Claude for multi-agent underwriting workflows. Verisk just built Model Context Protocol connectors that route regulatory-grade analytics through Claude. These are not coincidental procurement decisions. They reflect a deliberate effort to avoid single-model concentration risk, and they raise practical questions about model governance, data sovereignty, and vendor management complexity that most carriers have not yet answered.

This article examines how the dual-vendor pattern works in practice at Travelers and AIG, why model concentration qualifies as an operational risk category, what the regulatory environment looks like for multi-vendor AI governance, and what this means for mid-market carriers choosing their first or second foundation model partner.

Travelers: OpenAI for Customers, Anthropic for Engineers

Travelers has disclosed the clearest dual-vendor AI architecture in the P&C industry. In January 2026, the company announced a partnership with Anthropic to deploy personalized Claude and Claude Code AI assistants to nearly 10,000 engineers, data scientists, analysts, and product owners. One month later, Travelers launched its AI Claim Assistant, an agentic voice tool built on OpenAI’s models and Realtime API for handling auto damage claim calls.

The split is intentional. Travelers CTO Mojgan Lefebvre explained the rationale in a Fortune interview in April 2026: “It’s too early in the AI journey to do everything with one, so from the very beginning, we wanted to partner with the leaders in the area. There are certainly other players, but you also don’t want to have ten different partners.” Lefebvre characterized OpenAI as being at the forefront for conversational AI capabilities, while Anthropic’s analytical, coding, and engineering capabilities are “absolutely ahead of others.”

This is a considered vendor allocation strategy, not a pilot that sprawled into two relationships. Travelers matched each vendor’s comparative strength to a specific workload:

| Dimension | OpenAI (Customer-Facing) | Anthropic (Internal Engineering) |
| Primary use case | AI Claim Assistant: agentic voice for FNOL | Personalized coding and analytics assistants |
| User population | External customers calling in claims | 10,000 engineers, data scientists, analysts |
| Key capability | Real-time voice, conversational fluency | Code generation, model development, documentation |
| Deployment model | Agentic (autonomous call handling) | Assistive (augmenting human workflows) |
| Data exposure | Customer PII, claim details | Internal code, proprietary models, institutional knowledge |

Both vendors operate within TravAI, Travelers’ in-house agentic AI platform that integrates multiple generative AI tools with internal systems through the company’s own data governance, security, and access controls. As of Q4 2025, over 20,000 Travelers professionals used AI tools regularly, with CEO Alan Schnitzer noting that “millions of transactions are now automated” across the organization. The company’s $1.5 billion annual technology budget, with nearly half directed toward strategic initiatives, provides the infrastructure that makes a dual-vendor approach operationally feasible.

The Workload Segmentation Logic

The customer-facing versus internal split is not arbitrary. It reflects different risk profiles. The AI Claim Assistant handles policyholder interactions where tone, regulatory compliance, and customer satisfaction are paramount. A model failure in this context means a bad customer experience, potential E&O exposure, and reputational damage. The internal Anthropic deployment accelerates software development and analytics where errors are caught in code review and model validation before they affect external operations.

This separation also simplifies compliance. If a regulator asks Travelers to demonstrate how its customer-facing AI makes decisions, the answer involves a single vendor (OpenAI) and a well-defined scope. The internal engineering tools, while broadly deployed, do not directly make underwriting or claims decisions visible to policyholders, which places them in a different regulatory category in most jurisdictions.

AIG: Palantir Foundry Plus Anthropic Claude for Multi-Agent Underwriting

AIG’s architecture takes a different approach to multi-vendor AI. Rather than splitting by audience (customer-facing vs. internal), AIG layers vendors by function within a unified underwriting workflow.

On the Q1 2026 earnings call on May 1, CEO Peter Zaffino described what he called “the next phase of agentic AI,” using Palantir’s Foundry platform to expand AIG’s ontology and add orchestration capabilities for multiple teams of AI agents. The multi-agent architecture features dedicated agents for submission ingestion, data extraction, risk evaluation against underwriting guidelines, pricing benchmarks against portfolio targets, and a collaboration agent that synthesizes input from the specialized agents.

Palantir provides the data integration, ontology management, and orchestration layer. Anthropic’s Claude powers the language understanding and reasoning capabilities within the agents themselves. The distinction matters: Palantir is the infrastructure and workflow backbone; Claude is the cognitive engine that reads submissions, interprets risk characteristics, and generates underwriting observations.
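The separation of the workflow backbone from the cognitive engine can be sketched in a few lines. This is an illustrative Python sketch, not AIG's implementation: the agent roles follow the description above, but every function name is hypothetical, and the reasoning engine is reduced to an injected callable so the model vendor can be swapped without touching the orchestration.

```python
from typing import Callable, List

# The "cognitive engine" is just a callable here; in production it would wrap
# a foundation model API. Injecting it keeps the workflow vendor-agnostic.
Reasoner = Callable[[str], str]

def make_agent(role: str, reason: Reasoner) -> Callable[[str], str]:
    """Each specialized agent applies the shared reasoning engine to its role."""
    def agent(submission: str) -> str:
        return reason(f"[{role}] {submission}")
    return agent

def underwriting_pipeline(submission: str, reason: Reasoner) -> List[str]:
    """Orchestration backbone: run the specialized agents, then synthesize."""
    roles = ["ingestion", "extraction", "risk_evaluation", "pricing"]
    findings = [make_agent(role, reason)(submission) for role in roles]
    # A collaboration step synthesizes input from the specialized agents.
    synthesis = reason("[collaboration] " + " | ".join(findings))
    return findings + [synthesis]
```

The design point is that `underwriting_pipeline` never names a vendor: replacing the `reason` callable replaces the cognitive engine while the ontology and agent workflow stay intact.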

The production metrics from AIG Assist, deployed across eight business lines, demonstrate what this layered architecture produces. In Lexington middle-market property alone, AIG reported a 30% increase in submissions quoted, a 55% reduction in time to quote, and an approximately 40% increase in submissions bound. These are not pilot results. They come from production workflows processing a portion of AIG’s targeted 500,000 annual E&S submissions.

Claude in Claims Evaluation

Zaffino also disclosed a claims evaluation benchmark conducted by Anthropic: Claude aligned with a professional claims adjuster 88% of the time on a 100-claim review. The model flagged timeline inconsistencies, geolocation mismatches, prior claim patterns, document tampering signals, and coverage gaps. That 88% alignment figure, while based on a limited sample, suggests Claude is already operating at a level where it can meaningfully triage claims for human review rather than simply summarizing them.

The speed trajectory is also significant. Zaffino noted that when AIG began working with Claude, AI agents could operate autonomously for less than an hour. Today, they can run for 30 hours. That expansion in autonomous runtime changes the economics of multi-agent workflows: agents that can sustain complex reasoning across a full underwriting file without human intervention enable fundamentally different process architectures than those requiring frequent check-ins.

Model Concentration as an Operational Risk Category

The dual-vendor strategies at Travelers and AIG are not primarily about getting the best model for each task, though that is a benefit. They are about managing a new category of operational risk: dependence on a single AI vendor for critical business functions.

This framing is familiar to anyone who has managed IT vendor relationships. Carriers learned decades ago that running critical systems on a single database vendor, a single cloud provider, or a single network carrier creates fragility. The same logic applies to foundation models, but with additional dimensions of risk that are specific to LLMs.

Four Dimensions of Model Concentration Risk

Availability risk. If a carrier’s sole LLM provider experiences an outage, every AI-dependent workflow stops simultaneously. Travelers processes roughly 50% of initial loss reports digitally, with AI agents handling approximately 35% of low-complexity claims. A prolonged OpenAI outage would not disable Travelers entirely, since the Anthropic-powered engineering tools operate independently, but it would degrade customer-facing claims operations. A carrier with a single vendor would face simultaneous disruption across all AI-dependent functions.

Model regression risk. Foundation models are updated frequently, and updates do not always improve performance on every task. A model update that improves general reasoning might degrade performance on domain-specific insurance tasks. With two vendors, a carrier can benchmark updates from both providers and route workloads to whichever performs better on a given task at a given time. With one vendor, regression in a single update affects everything.
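The benchmark-and-route idea above reduces to a small amount of code. This is a minimal sketch under stated assumptions: the vendor names, task names, and scores are all hypothetical placeholders for an evaluation harness a carrier would rerun after every model update.

```python
# Hypothetical benchmark scores on domain-specific insurance tasks.
# In practice these would be refreshed by an evaluation harness after
# each vendor ships a model update; the numbers are illustrative only.
BENCHMARKS = {
    "fnol_summarization": {"vendor_a": 0.91, "vendor_b": 0.87},
    "policy_qa":          {"vendor_a": 0.82, "vendor_b": 0.88},
}

def route_workload(task: str, scores: dict) -> str:
    """Route each task to whichever vendor currently benchmarks best on it."""
    by_vendor = scores[task]
    return max(by_vendor, key=by_vendor.get)
```

With a single vendor there is nothing to route between; with two, a regression on one task simply shifts that task's traffic until the next update.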

Pricing and licensing risk. A carrier that deploys a single vendor across 20,000 users has limited negotiating leverage when licensing terms change. A carrier running two vendors can credibly threaten to shift workloads, creating competitive tension that constrains pricing. Lefebvre’s comment about not wanting “ten different partners” suggests Travelers optimized for exactly this: enough vendor diversity to maintain leverage, not so much that management overhead consumes the benefit.

Regulatory and data sovereignty risk. Different jurisdictions may impose different rules on which AI providers can process certain types of data. A carrier operating across multiple states or countries may need the flexibility to route data through different providers depending on regulatory requirements. The NAIC’s Third-Party Data and Models Working Group is drafting a vendor registration framework that will require carriers to disclose AI vendor dependencies, creating additional incentive to avoid single-vendor concentration.

The Systemic Dimension

The IA Capital survey finding that OpenAI appears in 90% of carrier stacks also raises a systemic question. If the majority of P&C carriers rely on the same foundation model provider, a single model failure, a data breach at the provider, or a change in the provider’s terms of service could affect a significant portion of the market simultaneously. This is the AI equivalent of catastrophe correlation: an event that hits many carriers at once rather than one carrier in isolation.

Cyberwrite CEO Perry Carpenter warned in a February 2026 interview with The Insurer that AI vendor concentration creates accumulation risk for the industry. From an actuarial perspective, this is analogous to geographic concentration in catastrophe modeling: the correlation structure matters as much as the individual exposure. A carrier’s own model risk is manageable; the systemic risk of industry-wide dependence on a single provider is harder to price.

The Vendor Ecosystem Is Expanding, Not Consolidating

The dual-vendor pattern at large carriers is emerging alongside infrastructure changes that make multi-vendor strategies more practical for carriers of all sizes.

In May 2026, Verisk announced Model Context Protocol (MCP) connectors that bring its regulatory-grade analytics directly into Anthropic’s Claude. The initial connectors provide conversational access to Verisk Underwriting Intelligence (ISO Indications) and Verisk XactRestore, enabling underwriting and claims professionals to query Verisk data through natural language within Claude. Estimated time savings range from 30 minutes to two hours per estimate for restoration workflows.

The MCP connector approach is significant because it decouples the data layer from the model layer. A carrier using Verisk data does not need to choose between Verisk’s analytics and a preferred foundation model; it can access Verisk through whichever LLM best fits its workflow. This is exactly the kind of interoperability that makes dual-vendor strategies feasible for carriers that lack the engineering resources to build custom integrations for each vendor.

In the same week, Anthropic launched ten pre-built AI agents for financial services, including dedicated insurance claims and underwriting agents, at a New York briefing attended by senior executives from major banks and insurers. Claude scored 88% accuracy against a human expert on insurance claims evaluation out of the box. The pre-built agent approach lowers the barrier for carriers to add a second vendor: instead of building custom integrations from scratch, a carrier already running OpenAI can deploy Anthropic’s insurance-specific agents alongside existing workflows.

Governance Complexity: The Cost of Multi-Vendor AI

The dual-vendor approach is not free. It introduces governance, procurement, and operational complexity that single-vendor strategies avoid. Carriers considering a multi-vendor architecture need to weigh three categories of cost.

Model Governance Across Vendors

ASOP No. 56 requires actuaries to understand the models they rely on, regardless of how those models were built or who operates them. A carrier running two foundation model providers doubles the model governance surface area. Each vendor has different update cadences, different documentation practices, different approaches to safety and alignment, and different transparency about training data and capability changes.

Travelers addressed this by building TravAI as an internal orchestration layer that sits between the foundation models and the business applications. TravAI provides a unified governance interface: the same data access controls, the same audit logging, and the same compliance checks apply regardless of which underlying model processes a given request. This centralized governance approach works, but it requires the engineering capability and budget to build and maintain the orchestration layer itself.
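The shape of such an orchestration layer can be sketched briefly. This is not TravAI's actual design, which Travelers has not published; it is an illustrative sketch in the spirit described above, where the same access control and audit logging apply regardless of which registered model serves a request. All class and rule names are hypothetical.

```python
import datetime

class GovernanceGateway:
    """Illustrative orchestration layer: uniform governance over N vendors."""

    def __init__(self, providers):
        # providers maps a vendor name to a callable that takes a prompt,
        # e.g. {"vendor_a": call_vendor_a}. Vendors must be registered here.
        self.providers = providers
        self.audit_log = []

    def complete(self, user_role: str, provider: str, prompt: str) -> str:
        if provider not in self.providers:
            raise ValueError(f"unregistered provider: {provider}")
        # Uniform access control (toy rule): only claims roles may send
        # claim data, no matter which model would process it.
        if "claim" in prompt.lower() and user_role != "claims":
            raise PermissionError("role not cleared for claims data")
        response = self.providers[provider](prompt)
        # Uniform audit trail, independent of the underlying vendor.
        self.audit_log.append({
            "ts": datetime.datetime.now(datetime.timezone.utc).isoformat(),
            "role": user_role,
            "provider": provider,
        })
        return response
```

Because governance lives in the gateway rather than in per-vendor integrations, adding a second provider means registering one more callable, not duplicating the compliance logic.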

AIG took a different path by using Palantir Foundry as its orchestration layer. Foundry provides the ontology, the agent coordination, and the workflow management, with Claude operating within that managed environment. The governance complexity still exists, but it is partially absorbed by the platform vendor rather than built entirely in-house.

Procurement and Vendor Management

Each foundation model partnership involves contract negotiation, security review, data processing agreements, incident response protocols, and ongoing relationship management. These are not trivial for enterprise AI deployments where the vendor processes sensitive policyholder data, proprietary pricing models, and claims information.

The NAIC’s evolving third-party AI vendor framework adds a regulatory layer. The draft registration regime would require vendors to file descriptions of their models, training data sources, testing methodology, known limitations, and change management practices. Carriers using multiple registered vendors will need to track disclosures from each and ensure their own compliance documentation reflects the full vendor landscape.

Integration and Testing Overhead

Running two foundation models means maintaining two sets of API integrations, two sets of performance benchmarks, two sets of regression tests when models update, and potentially two sets of fine-tuning or prompt engineering approaches. For carriers like Travelers with $1.5 billion technology budgets and thousands of engineers, this overhead is manageable. For a regional carrier with a 20-person IT team, the operational burden of a second vendor may outweigh the risk reduction benefit.

What This Means for Mid-Market Carriers

The dual-vendor pattern at Travelers and AIG does not necessarily prescribe the same approach for all carriers. The calculus depends on scale, AI maturity, and risk tolerance.

Carriers processing fewer than 100,000 claims annually may not have enough AI-dependent workflow volume to justify the governance overhead of two vendors. For these carriers, choosing a single vendor with a strong insurance vertical and building in contractual flexibility to switch is likely more practical than maintaining parallel integrations.

Carriers with $500 million or more in net written premiums that are deploying AI across multiple functions (underwriting, claims, customer service, engineering) should consider whether the risk of single-vendor concentration justifies adding a second provider. The key question is not whether both vendors are better at everything, but whether a failure at one vendor would create unacceptable operational disruption.

Carriers in the early stages of AI adoption should focus on one vendor, build the internal governance framework, and design that framework to be vendor-agnostic from the start. Travelers built TravAI before choosing its foundation model partners. That sequencing, building the orchestration and governance layer first, makes adding a second vendor later far less disruptive than trying to retrofit governance across two vendors simultaneously.

The Build-vs-Buy Decision Evolves

The traditional build-versus-buy framework assumed carriers chose one approach. The emerging reality is that the largest carriers are doing both: building internal orchestration layers while buying foundation model capabilities from external providers. The decision is no longer binary. It is about where in the stack to build (data governance, workflow orchestration, domain-specific logic) and where to buy (foundation model inference, pre-built agents, general reasoning capability).

A Carrier Management analysis from April 2026 argued that the winning AI strategy requires a “plug-and-play operating model” where carriers can swap foundation model providers without rebuilding their entire AI infrastructure. Travelers’ TravAI and AIG’s Palantir Foundry integration are early implementations of this principle. Carriers that hard-code dependencies on a single model provider into their core systems will find it progressively more expensive to diversify later.

Challenges to the “Winner-Take-All” Narrative

The dual-vendor pattern at top carriers challenges a common assumption in enterprise AI: that one foundation model provider will eventually dominate each industry vertical. The IA Capital survey showing 90% OpenAI penetration supports the winner-take-all narrative on its surface, but the behavior of the largest, most sophisticated buyers tells a different story.

Travelers explicitly decided that the AI market is too early to consolidate around one provider. Lefebvre’s framing, partnering with “the leaders in the area” rather than picking a winner, suggests she expects the competitive landscape to continue shifting. AIG’s architecture, with Palantir as an intermediary, further insulates AIG from foundation model competition: if a better model emerges, AIG can swap the reasoning engine within Foundry without rebuilding its ontology or agent workflows.

The Anthropic side of the market is also evolving rapidly. In May 2026, Anthropic launched a $1.5 billion joint venture with Blackstone, Goldman Sachs, and other financial services firms, and rolled out ten pre-built agents purpose-built for banking and insurance workflows. This direct investment in insurance-specific capability erodes OpenAI’s first-mover advantage in carrier stacks and gives CIOs a credible second option with domain-tuned functionality.

For actuaries modeling the competitive dynamics of carrier AI adoption, the dual-vendor pattern suggests that foundation model selection is becoming less of a winner-take-all market and more of a best-tool-for-each-task market. That fragmentation increases the importance of vendor-agnostic governance frameworks, which is precisely where the actuarial profession can add value through ASOP No. 56 compliance and model validation standards.

Why This Matters for Actuaries

The emergence of dual-vendor AI stacks creates specific implications for actuarial work at three levels.

Model validation scope expands. Actuaries responsible for validating AI-assisted models now need to understand how multiple foundation models contribute to a single pricing, reserving, or underwriting decision. At AIG, a submission processed by multi-agent AI involves Palantir’s data orchestration and Claude’s reasoning in an integrated workflow. Validating the output requires understanding both components and how they interact. ASOP No. 56 does not distinguish between single-model and multi-model systems; the actuary’s responsibility is the same regardless of the number of vendors involved.

Expense ratio analysis requires vendor-level granularity. As carriers like Travelers embed AI from multiple vendors into claims processing, engineering, and underwriting, the expense savings attributable to AI become harder to decompose. Travelers’ expense ratio improved from 31.5% to 28.5% over nine years with rising technology spend. For actuaries benchmarking carrier efficiency or building rate indications, understanding which portion of the AI expense reduction comes from customer-facing tools (OpenAI) versus engineering productivity (Anthropic) matters for assessing sustainability and replicability.

Vendor concentration becomes a risk factor in carrier assessments. Rating agencies and regulators are beginning to consider technology dependencies in carrier risk profiles. A carrier with a single AI vendor across all functions has a different risk profile than one with deliberate vendor diversification. For actuaries involved in ERM or ORSA submissions, AI vendor concentration may need to appear alongside traditional operational risk factors like IT system dependencies, outsourcing arrangements, and key person risk.

The dual-vendor pattern also signals where the actuarial profession needs to invest in its own capabilities. Actuaries who understand the differences between foundation models, who can evaluate model governance across vendors, and who can assess the operational risk implications of AI vendor concentration will be better positioned as carriers scale these architectures from current levels to full enterprise deployment.
