A review of carrier technology disclosures in earnings calls and 10-K filings across the top 20 P&C insurers over the past two years points to Travelers’ Anthropic partnership as a step change in how top carriers source AI capability. In January 2026, Travelers announced a partnership with Anthropic to deploy personalized AI assistants to nearly 10,000 engineers, data scientists, analysts, and product owners. Each assistant is customized to the individual employee’s role, drawing on Travelers’ proprietary data and institutional knowledge in real time.
This is not a pilot program. Travelers, the second-largest U.S. commercial lines insurer by net written premiums, is embedding foundation-model AI across its core engineering and analytics workforce at a scale no other carrier has publicly disclosed. The deployment sits within a $1.5 billion annual technology budget and arrives alongside record financial results: a Q1 2026 combined ratio of 88.6%, core income of $1.7 billion, and a 19.7% core return on equity.
The partnership also marks a clear position in the central strategic question facing every major insurer: should carriers build proprietary AI systems, partner with foundation model providers, or pursue some hybrid? Travelers chose to partner with one of the leading foundation model companies. AIG, by contrast, announced a collaboration with Palantir to deploy LLM agents through Palantir’s Foundry platform at Lloyd’s. Progressive has built much of its data science infrastructure internally over two decades. Each approach carries different implications for cost structure, speed to capability, and long-term IP ownership.
This article examines the Travelers-Anthropic deployment in detail, places it alongside the AIG-Palantir and Progressive models, builds a framework for the carrier AI build-vs-buy decision, and considers what this shift means for actuarial workflows specifically.
The Anthropic Deployment: Scope and Architecture
According to Travelers’ January 2026 press release and subsequent disclosures during the Q4 2025 earnings call, the Anthropic deployment equips nearly 10,000 employees with personalized Claude and Claude Code AI assistants. These assistants are not generic chatbots. Each one is configured to understand the specific employee’s role and the tools and systems they use daily, and it draws on Travelers’ internal data and institutional knowledge.
The target population is significant: engineers, data scientists, analysts, and product owners. These are the employees who build, maintain, and operate the models, pipelines, and analytics platforms that drive pricing, underwriting, claims, and customer service. By putting AI assistants in the hands of the builders rather than just the end users, Travelers is making a bet that accelerating the development layer will compound across every downstream function.
Travelers’ CTO and Chief Operations Officer Mojgan Lefebvre explained the philosophy behind the approach in a Fortune interview published April 15, 2026: “I don’t think a thousand little things will add up.” Rather than spreading AI investment across dozens of small pilots, Lefebvre has directed Travelers toward fewer bets with greater ability to scale. The Anthropic partnership is the clearest expression of that strategy.
Lefebvre also revealed that Travelers partnered with both Anthropic and OpenAI for its foundation model needs. “It’s too early in the AI journey to do everything with one, so from the very beginning, we wanted to partner with the leaders in the area,” she told Fortune. “There are certainly other players, but you also don’t want to have ten different partners.” One month after the Anthropic deployment, Travelers launched its AI Claim Assistant, an agentic tool built on OpenAI that handles customer claim submission questions in conversational natural language.
TravAI: The Internal Platform Layer
The Anthropic partnership sits within a broader internal platform called TravAI, which Travelers built after ChatGPT’s launch in November 2022. TravAI is an in-house agentic AI platform that integrates multiple generative AI tools with Travelers’ internal systems, providing a controlled environment for all 30,000-plus employees to access AI capabilities after completing required training.
The distinction matters architecturally. Travelers did not simply hand Anthropic API keys to 10,000 engineers. Instead, it built an internal orchestration layer (TravAI) that routes requests through its own data governance, security, and access controls, then connects to external foundation models where appropriate. This hybrid approach lets Travelers leverage state-of-the-art foundation models while maintaining control over data flows, model selection, and compliance requirements.
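Travelers has not published TravAI’s internals, so any concrete rendering is speculative. Still, the routing pattern described above (governance and access checks first, external foundation models second) can be sketched in a few lines. All class names, task labels, and routing rules below are hypothetical illustrations of the pattern, not TravAI’s actual design:

```python
# Conceptual sketch of an internal AI orchestration layer.
# All names and policies here are hypothetical; Travelers has not
# disclosed TravAI's architecture.
from dataclasses import dataclass


@dataclass
class Request:
    user_role: str      # e.g. "data_scientist"
    contains_pii: bool  # flagged by an upstream classifier (assumed)
    task: str           # e.g. "code_generation"


# Governance policy: which external provider a task may reach.
# Mapping mirrors the article's description (Claude for developer
# assistants, OpenAI for the claims intake tool), purely as illustration.
ROUTING_POLICY = {
    "code_generation": "anthropic",
    "claims_intake": "openai",
}


def route(request: Request) -> str:
    """Apply data governance and access control before any external call."""
    if request.contains_pii:
        # In this sketch, PII never leaves the internal boundary.
        return "internal_redaction_queue"
    provider = ROUTING_POLICY.get(request.task)
    if provider is None:
        return "rejected_unknown_task"
    return provider
```

The design choice the sketch illustrates is that model selection and compliance live in the carrier’s own layer, so swapping or adding an external provider is a one-line policy change rather than a re-architecture.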
As of the Q4 2025 earnings call in January 2026, over 20,000 professionals at Travelers used AI tools regularly, with dozens of generative AI solutions already in production. CEO Alan Schnitzer characterized the scale by noting that “millions of transactions are now automated” across the organization.
Financial Context: What $1.5 Billion in Technology Spend Produces
Travelers’ technology budget provides essential context for understanding the Anthropic deployment. The company invested $1.5 billion in technology in 2025, with nearly half directed toward strategic initiatives including cloud migration, analytics modernization, data infrastructure, and AI. That strategic technology investment has more than doubled over the past eight years.
The financial returns from this investment are visible in the expense ratio. Despite significantly increasing technology spending, Travelers’ expense ratio improved from 31.5% in 2016 to 28.5% in 2025, a three-point improvement that represents hundreds of millions in annual savings on Travelers’ premium base. The company maintained its full-year 2026 expense ratio guidance at 28.5%.
Q1 2026 results reinforce the point. Travelers reported core income of $1.7 billion ($7.71 per diluted share), a core return on equity of 19.7%, and a trailing 12-month core ROE of 22.7%. The underlying combined ratio came in at 85.3%, and the all-in combined ratio was 88.6%. This was the seventh consecutive quarter with more than $1 billion in underlying underwriting income, according to TIKR’s analysis of the Q1 results.
Segment Performance
The technology investment shows up differently across segments:
Business Insurance posted segment income of $839 million, a first-quarter record, with an underlying combined ratio below 90% for the 14th consecutive quarter. Net written premiums reached $5.8 billion, with renewal premium change of 5.8%.
Personal Insurance posted segment income of $704 million with a combined ratio of 82.9% and an underlying combined ratio of 78.3%, the lowest first-quarter figure for that segment in a decade. This is where technology-driven claims automation is having the most visible impact.
Bond & Specialty Insurance generated $1.1 billion in premiums with 7% growth in the Surety line.
Net investment income reached $833 million after tax, up 9% year over year. Travelers returned $2.2 billion to shareholders in the quarter, including approximately $2 billion in share repurchases, and the board declared a 14% dividend increase to $1.25 per share.
Claims Automation: Where AI Meets the Loss Ratio
While the Anthropic deployment targets the engineering and analytics workforce, Travelers’ most visible AI application is in claims processing. The numbers here are striking and directly relevant to actuarial analysis of loss adjustment expense trends.
Over 50% of all claims to Travelers are now eligible for straight-through processing, and customers adopt it approximately two-thirds of the time. An additional 15% are processed using advanced digital tools. Travelers’ claim call center population is down by a third, and the company is consolidating from four claim call centers to two in 2026.
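The eligibility and adoption figures above combine into an overall automation share worth making explicit. A back-of-envelope calculation, using only the percentages in the paragraph:

```python
# Back-of-envelope from the disclosed figures; shares of all claims.
eligible = 0.50     # claims eligible for straight-through processing
adoption = 2 / 3    # customer adoption rate when eligible
digital = 0.15      # additional claims handled with advanced digital tools

stp_share = eligible * adoption          # ~33% of all claims fully automated
digitally_touched = stp_share + digital  # ~48% automated or digitally assisted
```

In other words, roughly a third of all Travelers claims now pass through with no human adjuster touch, and nearly half are automated or digitally assisted in some form.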
In February 2026, Travelers launched a natural language generative AI voice agent for first notice of loss. The tool processes phone intakes using conversational AI, and early customer adoption has exceeded expectations according to management commentary. Roughly 50% of initial loss reports are now submitted digitally via the Travelers app; for the remaining calls, customers are routed to the AI Claim Assistant by default.
AI agents now handle approximately 35% of all low-complexity claims, including windshield glass and minor property damage. For catastrophe events, the combination of digital tools and AI processing has contributed to 90% of catastrophe claims closing within 30 days. In 2025, Travelers handled 1.5 million claims, roughly one every 20 seconds.
From a reserving perspective, these efficiency gains flow through loss adjustment expense, improving the loss-and-LAE ratio. The personal lines combined ratio of 82.9% in Q1 2026, with personal lines underwriting profits more than doubling in 2025, reflects both pricing adequacy and technology-driven expense reduction. For actuaries modeling Travelers or benchmarking personal lines operations, isolating the LAE impact of AI-driven claims automation is becoming an increasingly material exercise.
Innovation 2.0: Schnitzer’s Long-Term Technology Vision
CEO Alan Schnitzer framed the company’s technology trajectory during the Q4 2025 earnings call with a concept he called “Innovation 2.0.” Over the prior decade, Travelers had developed what Schnitzer described as a competitive advantage built on an “innovation skill set.” Innovation 2.0 represents the next phase: applying that organizational capability to AI and, eventually, quantum computing.
“Over the decade, we developed the competitive advantage of an innovation skill set,” Schnitzer said on the Q4 2025 call. “Now we’re bringing all that Part 1 know-how to Innovation 2.0 at Travelers, powered by AI.”
The framing matters because it positions AI not as a standalone initiative but as the next application of an existing institutional competency. Travelers spent a decade building cloud infrastructure, data lakes, analytics platforms, and a culture of engineering-led innovation. The Anthropic deployment plugs foundation-model capability into that existing infrastructure rather than bolting AI onto legacy systems.
Lefebvre’s measurement framework reinforces this operational mindset. Travelers tracks three categories of AI return: reduction in claim closure time, efficiency gains and cost avoidance from automation, and employee adoption and empowerment metrics. “Anything that you don’t measure can evaporate,” Lefebvre told Fortune.
The Build-vs-Buy Landscape: Travelers, AIG, and Progressive
The Travelers-Anthropic partnership gains strategic significance when placed alongside how other top carriers are sourcing AI capability. Three distinct models have emerged among the largest P&C insurers, each with different cost structures, speed profiles, and IP implications.
Travelers + Anthropic: The Foundation Model Partnership
Travelers’ approach is to build an internal orchestration layer (TravAI) while partnering with leading foundation model providers (Anthropic and OpenAI) for the AI models themselves. The advantages: immediate access to state-of-the-art models, rapid deployment at scale, and no need to recruit or retain the specialized ML research talent required to train large models. The trade-offs: dependency on external vendors for core capability, limited ability to differentiate through model architecture, and ongoing licensing costs rather than owned IP.
The partnership model works particularly well for productivity and developer acceleration use cases, which is exactly where Travelers deployed first. When the goal is “make 10,000 engineers more productive,” a commercial foundation model is faster to deploy and likely more capable than anything a carrier could build internally.
AIG + Palantir: The Platform-Mediated AI Stack
AIG took a different path. In December 2025, AIG announced a collaboration with Palantir to deploy LLM agents through Palantir’s Foundry platform for underwriting at its new Lloyd’s Syndicate 2479, formed with Amwins and Blackstone. Syndicate 2479 began writing on January 1, 2026, managing $300 million in premium from a diversified cross-section of Amwins’ approximately $6 billion in delegated authority premiums.
The Palantir collaboration enables multiple LLM agents to retrieve data rapidly and evaluate defined risk characteristics against the Syndicate’s risk appetite. AIG has built an ontology enabling large language models to access more than four million industry data points. As we covered in our analysis of AIG’s agentic AI underwriting system, AIG has compressed underwriting review times fivefold while improving data accuracy to above 90%.
Where Travelers partnered directly with a foundation model provider, AIG chose an intermediary platform (Palantir Foundry) that orchestrates multiple AI models. This adds a layer of abstraction and cost, but it also provides the data integration, ontology management, and workflow orchestration capabilities that are critical for complex underwriting workflows. For AIG’s use case (processing E&S submissions against nuanced risk appetite criteria), the platform-mediated approach may be a better fit than raw foundation model access.
Progressive: The In-House Data Science Tradition
Progressive represents the third model: building proprietary data science capability over decades. Progressive has been a data-driven insurer since the 1990s, pioneering usage-based insurance through Snapshot and accumulating tens of billions of driving miles of behavioral data. Its pricing models, claims algorithms, and risk selection tools were largely built internally by a data science team that predates the current AI wave by 20 years.
A Morgan Stanley analysis from early 2026 estimated Progressive’s agentic AI automation rate at 20.7%, with the lowest average salary among 16 carriers studied but the largest workforce and highest pre-AI earnings. Progressive’s projected earnings uplift from AI by 2030 was 8%, below the 11% industry average, in part because its operations are already highly optimized.
The in-house model’s advantage is IP ownership and deep integration with proprietary data. The disadvantage in the current moment is speed: foundation models are improving so rapidly that internal development can fall behind the capability frontier. Progressive’s investor day presentations in early 2026 emphasized AI strategies and successes, but the company has not disclosed a foundation model partnership comparable to Travelers-Anthropic.
A Framework for the Build-vs-Buy Decision
Patterns across these three approaches suggest a framework for how carriers should think about AI sourcing:
| Factor | Partner (Travelers model) | Platform (AIG model) | Build (Progressive model) |
|---|---|---|---|
| Speed to deployment | Fastest: weeks to months | Moderate: months to quarters | Slowest: quarters to years |
| IP ownership | Low: model IP stays with vendor | Medium: workflow/ontology IP owned | High: full model IP ownership |
| Talent requirements | Engineers who use AI tools | Platform integration specialists | ML researchers and infrastructure |
| Ongoing cost structure | Licensing/API fees; scales with usage | Platform license + usage fees | Fixed team cost; compute scales |
| Best fit | Productivity, dev tools, general AI | Complex workflows, data orchestration | Proprietary data moats, mature teams |
| Vendor lock-in risk | Moderate: can swap models | High: deep platform dependency | None: fully internal |
The emerging pattern among top-10 carriers appears to be hybrid: build an internal orchestration layer, partner with one or two foundation model providers for general capability, and reserve proprietary development for use cases where unique data provides a lasting competitive advantage. Travelers’ TravAI platform plus Anthropic/OpenAI partnerships exemplifies this hybrid approach.
Implications for Actuarial Workflows
Travelers did not announce the Anthropic deployment as an actuarial initiative, but the implications for actuarial work are substantial when you consider who received the assistants and what those people build.
Pricing Model Development
Engineers and data scientists who build and maintain pricing models are among the 10,000 receiving AI assistants. Code generation, model documentation, exploratory data analysis, and feature engineering are all tasks where foundation model assistants can materially accelerate output. If a pricing actuary’s request for a new model feature previously required three weeks of data engineering queue time, an AI-assisted engineer might deliver it in days. This compresses the pricing model development cycle in ways that affect how quickly rate indications can incorporate new variables.
Reserve Analysis and Reporting
Reserve analyses involve substantial data manipulation, validation, and documentation. AI assistants that can write SQL queries, generate Python scripts for triangle development, draft actuarial memos, and automate report formatting could reduce the hours-per-reserve-study metric significantly. For a company handling 1.5 million claims annually across multiple lines, the productivity multiplier compounds across every quarterly reserve review.
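The triangle-development scripting mentioned above is exactly the kind of task an AI assistant can draft in seconds. A minimal chain-ladder sketch, with invented figures (not Travelers data), shows how little code the core calculation requires:

```python
# Minimal chain-ladder sketch: the kind of triangle-development script
# an AI assistant might draft for a reserve review.
import numpy as np

# Cumulative paid-loss triangle (rows: accident years, cols: development ages).
# Figures are illustrative, not Travelers data.
tri = np.array([
    [1000.0, 1800.0, 2160.0, 2268.0],
    [1100.0, 1980.0, 2376.0, np.nan],
    [1200.0, 2160.0, np.nan, np.nan],
    [1300.0, np.nan, np.nan, np.nan],
])


def age_to_age(tri):
    """Volume-weighted age-to-age development factors."""
    factors = []
    for j in range(tri.shape[1] - 1):
        mask = ~np.isnan(tri[:, j + 1])
        factors.append(tri[mask, j + 1].sum() / tri[mask, j].sum())
    return np.array(factors)


ldfs = age_to_age(tri)  # -> [1.8, 1.2, 1.05] for this triangle

# Chain each accident year's latest observed value to ultimate.
ultimates = []
for row in tri:
    last = np.where(~np.isnan(row))[0][-1]
    ultimates.append(row[last] * np.prod(ldfs[last:]))

diagonal = tri[np.arange(4), [3, 2, 1, 0]]  # latest observed values
unpaid = sum(ultimates) - diagonal.sum()    # indicated unpaid reserve
```

The productivity gain is not in the arithmetic, which actuaries have long automated, but in the surrounding data validation, reconciliation, and memo drafting that consumes most of a reserve study’s hours.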
Claims Analytics
The claims automation already visible at Travelers (50%+ straight-through processing eligibility, AI voice agents for FNOL) generates data that feeds back into actuarial models. As AI-processed claims generate different data patterns than human-processed claims, actuaries need to account for potential shifts in reporting patterns, settlement timing, and severity distributions. The 35% of low-complexity claims handled by AI agents may develop differently than their human-processed predecessors, requiring adjustments to development factors and IBNR estimates.
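The segmentation concern above can be made concrete with a toy example. The figures are invented purely for illustration; the point is that once AI-handled claims are a material share of the book, a single blended development factor can misstate both cohorts:

```python
# Illustrative only: invented figures showing why development factors may
# need segmenting by handling channel once AI processes a material share.
# Cumulative reported losses ($000s) at 12 and 24 months by channel.
cohorts = {
    "human": {"at_12": 5000.0, "at_24": 7500.0},  # 12-24 LDF = 1.50
    "ai":    {"at_12": 5200.0, "at_24": 6760.0},  # 12-24 LDF = 1.30
}

ldf = {k: v["at_24"] / v["at_12"] for k, v in cohorts.items()}

# A single blended factor sits between the two, understating human-handled
# development and overstating AI-handled development.
blended = (sum(v["at_24"] for v in cohorts.values())
           / sum(v["at_12"] for v in cohorts.values()))  # ~1.398
```

If the AI-handled share of the mix keeps growing, the blended factor also drifts over time even when neither cohort’s behavior changes, which is a classic mix-shift trap for development-factor selection.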
Model Validation and Governance
There is an important tension here. ASOP No. 56 holds actuaries responsible for the models they use, regardless of how those models were built. When AI assistants help engineers build pricing models faster, the actuarial validation function needs to keep pace. Faster model development without correspondingly faster validation creates a governance gap. Travelers’ actuaries will need frameworks for validating models that were co-developed with AI assistants, including assessing whether AI-generated code introduces biases or errors that traditional code review might not catch.
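One practical guardrail for that governance gap is a parallel run: reconcile AI-co-developed code against an independent reference implementation before promotion. The sketch below is a generic pattern, not a Travelers practice; function names, the pure-premium example, and the tolerance are all illustrative:

```python
# Generic parallel-run guardrail for AI-assisted model code.
# Names, formulas, and tolerance are illustrative, not any carrier's practice.
def reference_pure_premium(frequency: float, severity: float) -> float:
    """Independently maintained reference calculation."""
    return frequency * severity


def candidate_pure_premium(frequency: float, severity: float) -> float:
    """Stand-in for the AI-co-developed implementation under review."""
    return frequency * severity


def reconcile(cases, tolerance=1e-6):
    """Return the test cases where candidate and reference disagree."""
    return [
        c for c in cases
        if abs(candidate_pure_premium(*c) - reference_pure_premium(*c)) > tolerance
    ]


# Promotion gate: zero disagreements across the validation case set.
mismatches = reconcile([(0.05, 12000.0), (0.08, 9500.0)])
```

The pattern scales the validation function alongside AI-accelerated development: the reference implementation changes slowly under actuarial control, while the candidate can be regenerated as often as the assistants allow.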
What the Combined Ratio Tells Us (and What It Doesn’t)
It is tempting to draw a direct line from Travelers’ AI investments to its 88.6% combined ratio. The reality is more nuanced.
Travelers’ Q1 2026 results reflect pricing adequacy built over several years of hard-market rate increases, favorable prior-year reserve development of $413 million, and strong investment income ($833 million after tax). Technology and AI contribute through expense ratio improvements and claims efficiency, but the underwriting result is primarily a function of pricing discipline and loss selection.
That said, the expense ratio trajectory is genuinely attributable to technology investment. A three-point improvement from 31.5% to 28.5% over nine years, sustained despite rising technology spending, indicates that the return on technology investment exceeds its cost. On Travelers’ 2025 net written premium base, each expense ratio point represents roughly $400 million in annual expense, suggesting the three-point improvement yields roughly $1.2 billion annually in expense savings relative to the 2016 baseline.
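The arithmetic behind those figures is worth laying out, since the per-point value also implies the premium base the estimate rests on:

```python
# The expense-savings arithmetic made explicit. The $400M-per-point figure
# is the article's approximation; everything else follows from it.
point_value = 400e6                  # approx. annual expense per ratio point
implied_nwp = point_value / 0.01     # ~$40B implied net written premium base
improvement_points = 31.5 - 28.5     # expense ratio, 2016 -> 2025
annual_savings = improvement_points * point_value  # ~$1.2B per year
```

Note the estimate is static: it applies today’s per-point value to the full three-point improvement, so it approximates current-year savings versus the 2016 cost structure rather than cumulative savings over the period.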
For actuaries benchmarking carrier efficiency, Travelers presents an interesting case study. The company is spending more on technology each year ($1.5 billion in 2025) while simultaneously reducing its expense ratio. This is possible because technology spending displaces other costs (manual processing, call center staff, duplicative workflows) at a ratio greater than one-to-one. The question for ratemaking actuaries: how much of this expense improvement is sustainable, and how much should flow through to rate indications?
Competitive Intelligence: How Other Carriers Compare
Travelers is not the only carrier scaling AI aggressively. A brief survey of major carrier AI strategies provides competitive context:
AIG has deployed LLM agents through Palantir Foundry for underwriting at Lloyd’s Syndicate 2479, built an ontology with four million data points, and is on track to process 500,000 E&S submissions. AIG’s approach emphasizes underwriting precision over workforce productivity.
Chubb has announced plans to reduce headcount by 20% through AI automation, taking a more direct cost-cutting approach than Travelers’ productivity-enhancement framing.
Aviva has moved toward agentic AI applications including automated quote generation and claims processing in the UK market.
Allstate has disclosed AI applications across claims and customer service, with a Morgan Stanley analysis estimating its agentic automation rate at approximately 20%, comparable to Travelers and Progressive.
The Morgan Stanley study projected that AI could cut P&C expense ratios by 200 basis points across the industry and generate $9.3 billion in operating income by 2030. As we analyzed in our breakdown of that forecast, the projections assume implementation costs that may be understated for smaller carriers without Travelers’ engineering infrastructure.
What to Watch: Three Signals for the Next 12 Months
Travelers’ Anthropic deployment is still in its first year. Several signals will indicate whether this model is delivering the returns that justify its scale:
Expense ratio trajectory through 2026. If Travelers can push the expense ratio below 28.5% while maintaining or accelerating technology spend, it would validate the AI-for-productivity thesis. Watch the Q2 and Q3 2026 earnings calls for updated guidance.
Claims automation expansion. Travelers has said its AI Claim Assistant is being extended to additional lines. The pace of that expansion, and whether customer satisfaction metrics hold, will test how far AI-first claims processing can go before diminishing returns set in.
Competitive follow-on partnerships. If the Travelers-Anthropic model proves out, expect to see similar foundation model partnerships announced by other top-10 carriers. The build-vs-buy debate may shift decisively toward “partner” for productivity use cases while reserving “build” for proprietary data applications. Early signs of this convergence are already visible in how Lefebvre described the partnership philosophy: work with the leaders, limit to a few deep relationships, avoid scattering investment across ten vendors.
Why This Matters for Actuaries
The Travelers-Anthropic deployment matters for actuaries at three levels.
First, for actuaries at Travelers, the deployment will change how fast models get built, tested, and deployed. The actuarial control cycle (data collection, modeling, validation, reporting) will compress, requiring actuaries to adapt their review cadences and validation frameworks accordingly.
Second, for actuaries at competing carriers, Travelers is setting a benchmark. If AI-assisted engineering produces materially better pricing models faster, carriers without similar capabilities face a competitive disadvantage in rate adequacy and speed to market. The 88.6% combined ratio is not solely a technology story, but the technology infrastructure that supports it is increasingly difficult to replicate without comparable investment.
Third, for the profession broadly, the Travelers deployment illustrates a pattern that will reshape actuarial roles over the next several years. When engineers have AI assistants that can write code, run analyses, and generate documentation, the actuarial value proposition shifts from technical execution toward judgment, governance, and strategic interpretation. Actuaries who can evaluate AI-generated outputs, design validation frameworks for AI-assisted models, and translate model results into business strategy will be more valuable than those who compete with AI on speed of calculation.
The question is no longer whether carriers will deploy foundation model AI at scale. Travelers has answered that. The question is how the actuarial profession adapts its standards, training, and workflows to maintain its role as the trusted interpreter between complex models and the business decisions they inform.
Sources
- Travelers Investor Relations, “Travelers Partners with Anthropic to Expand AI-Enabled Engineering and Analytics Capabilities” (January 2026)
- Carrier Management, “10,000 Travelers Employees Get AI Assistants via Anthropic Partnership” (January 2026)
- Fortune, “Why Insurance Giant Travelers’ CTO Is Placing Fewer, Bigger Bets on AI” (April 2026)
- Carrier Management, “20,000 AI Users at Travelers Prep for Innovation 2.0; Claims Call Centers Cut” (January 2026)
- Claims Journal, “20,000 AI Users at Travelers Prep for Innovation 2.0; Claims Call Centers Cut” (January 2026)
- TIKR, “Travelers Q1 2026: Core Income Hits $1.7B for a Seven-Quarter Streak” (April 2026)
- BusinessWire, “Travelers Reports Excellent First Quarter Results” (April 2026)
- Business Insurance, “Travelers Announces AI Commitment, 20% Profit Hike in Q4” (January 2026)
- CIO Dive, “Travelers’ Modernization Push Yields Efficiency, Productivity Gains”
- BusinessWire, “AIG to Form Special Purpose Vehicle with Amwins and Blackstone, Launches Collaboration with Palantir on GenAI Capabilities” (December 2025)
- Carrier Management, “AI Claim Assistant Now Taking Auto Damage Claims Calls at Travelers” (February 2026)
- Carrier Management, “Expense Ratio Analysis: AI, Remote Work Drive Better P/C Insurer Results” (January 2026)
- Coverager, “Travelers Leans Into AI with $1.5 Billion Annual Tech Spend”
- Reinsurance News, “For Travelers, the AI Opportunity Is Profound: CEO Alan Schnitzer”
- Investing.com, “Travelers Q1 2026 Slides: Core ROE Hits 19.7% on Strong Underwriting” (April 2026)
Further Reading on actuary.info
- Inside AIG’s Agentic AI Underwriting Machine - How Palantir, Claude, and 4 million data points are reshaping commercial insurance underwriting at AIG.
- Morgan Stanley Projects $9.3B in AI-Driven P&C Savings by 2030 - Carrier-by-carrier breakdown of projected AI earnings uplift, with actuarial stress tests of the cost assumptions.
- Travelers Q1 2026: $325M Prior Year Release and the AY 2025 Uncertainty IBNR - A reserving framework walkthrough connecting Travelers’ financial results to actuarial methodology.
- The AI Governance Gap in Actuarial Practice - When management moves faster than standards: navigating ASOP No. 56 in the age of LLMs.
- Chubb Plans 20% Workforce Cut via AI Automation - How Chubb’s headcount reduction approach compares with Travelers’ productivity-enhancement strategy.
- The AI Patent Race in Insurance: Complete Guide - IP strategy context for how carriers are building vs. buying their AI capabilities.
- Progressive Q1 2026 Results - Financial comparison with the carrier that built its data science capability in-house over two decades.