From reviewing claims AI vendor pitches and carrier deployment timelines over the past two years, the pattern is consistent: adoption is broad but shallow. A March 2026 Sedgwick report, "Future-ready property claims: Leveraging technology and AI for a strategic advantage," puts numbers on what the industry already suspects. Between 58% and 82% of insurers now use AI tools somewhere in their claims operations. Yet only 12% report mature AI capabilities, and just 7% have achieved what Sedgwick defines as scalable AI success.
Those numbers deserve more scrutiny than they typically receive in trade press coverage, which tends to frame the 82% figure as evidence that AI is "transforming insurance." The 75-percentage-point gap between adoption and scalability tells a fundamentally different story. Carriers have layered AI onto individual steps in the claims process without building the architectural foundations needed to connect those tools into coherent workflows. The bottleneck is not whether AI works. It is whether carriers can make it work across an entire operation rather than in isolated pockets.
This article unpacks the maturity gap, identifies the structural barriers preventing scale, and examines the regulatory overlay that adds another dimension of complexity to an already fragmented landscape.
The Sedgwick Data: Adoption vs. Maturity in Three Tiers
Sedgwick's report segments insurer AI maturity into three tiers that reveal where the industry actually stands.
Tier 1: Broad adoption (58-82%). The widest measure captures any carrier using AI in at least one claims function. At this level, AI typically means data extraction from incoming documents, automated customer interactions through chatbots, or basic triage routing. These are point solutions, often vendor-provided, that handle a single step in a multi-step workflow. An 82% adoption rate at this tier is consistent with what we see in carrier technology presentations: nearly everyone has something deployed, but "deployed" often means a single tool handling a narrow task.
Tier 2: Mature capabilities (12%). These carriers report AI systems operating reliably across multiple claims functions with consistent data flows between tools. Maturity at this level implies that AI outputs from one step feed into the next without manual re-entry, that data quality is sufficient to support automated decisions, and that governance frameworks exist to monitor AI performance over time. The drop from 82% to 12% quantifies the gap between buying an AI tool and integrating it into operations.
Tier 3: Scalable success (7%). The smallest group has achieved what the rest are pursuing: AI that operates at enterprise scale across lines of business and claim types, with measurable improvements in processing speed, accuracy, and cost. The further drop from 12% to 7% suggests that even carriers with mature individual AI deployments struggle to replicate that maturity across their full portfolio of business lines.
David Guaragna, Sedgwick's managing director of property operations, frames it directly: "Strategy isn't optional; it's the new competitive advantage." The implication is that the 93% of carriers below the scalable tier are not failing at technology. They are failing at strategy, specifically the architectural and organizational decisions that determine whether individual AI tools add up to something greater than the sum of their parts.
Where the Gains Are Real: FNOL and Low-Severity Claims
Within the narrow band of carriers achieving measurable AI results, the improvements are significant. The Sedgwick data identifies two areas where AI has produced clear, quantifiable gains in claims operations.
FNOL intake acceleration. Intake automation has compressed average claim processing initiation from 10 days to 36 hours, a reduction of roughly 85%. This reflects AI's strength in structured data capture: extracting policy numbers, loss descriptions, claimant information, and coverage triggers from incoming submissions (phone, web, email, photos) and populating claims systems without manual keying. For property carriers handling high-frequency, low-complexity claims like hail damage or water losses, the speed improvement is material. Faster FNOL means faster inspection scheduling, faster reserve posting, and faster communication with policyholders, all of which reduce cycle time and improve customer retention.
Low-severity claims processing. Carriers deploying AI for low-severity claims report processing speeds up to 80% faster than manual workflows, with 50% productivity gains in documentation tasks. AI-powered photo analysis has boosted claim handling efficiency by up to 54%, according to the Sedgwick data. These figures align with what we track across carrier earnings disclosures: the strongest AI ROI consistently appears in high-volume, low-complexity claim segments where the decision logic is relatively standardized and the data inputs (photos, estimates, weather reports) are structured enough for automated processing.
The straight-through processing horizon. Sedgwick projects that 80-85% of simple claims could eventually reach straight-through processing with minimal human involvement. That projection implies a fundamental restructuring of claims operations where adjusters handle only complex, high-severity, or litigated claims while AI manages the volume work. Currently, claims handlers spend roughly 30% of their time on low-value administrative tasks. Eliminating that drag through automation would free capacity for the judgment-intensive work that drives loss outcomes.
| Metric | Before AI | After AI | Improvement |
|---|---|---|---|
| FNOL intake processing | 10 days | 36 hours | ~85% faster |
| Low-severity claims cycle time | Baseline | AI-assisted | 80% faster |
| Photo-based claim handling | Manual review | AI-powered analysis | Up to 54% efficiency gain |
| Documentation productivity | Baseline | AI-assisted | 50% productivity gain |
| Adjuster time on low-value tasks | ~30% of workday | Targeted for automation | Capacity reallocation |
These gains are real, but they share a common limitation: they occur within individual steps of the claims lifecycle. The FNOL improvement does not automatically translate into faster settlement if the downstream damage assessment, coverage verification, and payment authorization steps still run on legacy systems with manual handoffs. This is the fragmentation problem.
The Fragmentation Problem: Why Vendor Sprawl Blocks End-to-End Automation
Nearly two-thirds of carriers in the Sedgwick survey acknowledge a gap between their AI vision and their current reality. The root cause is architectural: carriers have assembled AI capabilities from multiple vendors, each handling a different piece of the claims workflow, without building the integration layer that would connect those tools into a continuous process.
The typical carrier claims technology stack in 2026 looks something like this: one vendor handles FNOL intake and triage, another provides photo-based damage estimation, a third manages document extraction for coverage verification, a fourth runs fraud detection models, and a fifth generates settlement recommendations. Each tool may perform well within its scope. But the data flowing between them is inconsistent, formatted differently, and often requires manual reconciliation at each handoff point.
The data consistency problem. With multiple AI tools operating across the claims process, carriers' data is often inconsistent, incomplete, or siloed across systems, which weakens AI outputs and decisions. This is not a hypothetical concern. When a damage estimate generated by Vendor A's photo analysis tool feeds into Vendor B's settlement recommendation engine, the two systems may use different field definitions, different damage categories, or different severity scales. The result is that downstream AI tools receive inputs they were not trained on, producing outputs that require human review and correction, which negates the efficiency gains the automation was supposed to deliver.
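What a minimal fix for this handoff problem looks like can be sketched as a normalization layer that maps each vendor's severity vocabulary onto a single carrier-internal scale. The vendor names, scales, and mappings below are invented for illustration; real mappings would come from each vendor's documented field definitions.

```python
# Hypothetical canonical severity scale shared by all downstream tools.
CANONICAL = ("minor", "moderate", "major", "total_loss")

# Vendor A scores severity 1-5; Vendor B uses its own labels.
VENDOR_A_MAP = {1: "minor", 2: "minor", 3: "moderate", 4: "major", 5: "total_loss"}
VENDOR_B_MAP = {"light": "minor", "medium": "moderate",
                "heavy": "major", "destroyed": "total_loss"}

def normalize_severity(vendor: str, value) -> str:
    """Translate a vendor-specific severity code to the canonical scale."""
    if vendor == "vendor_a":
        mapped = VENDOR_A_MAP.get(value)
    elif vendor == "vendor_b":
        mapped = VENDOR_B_MAP.get(value)
    else:
        raise ValueError(f"unknown vendor: {vendor}")
    if mapped is None:
        # Unmappable inputs are exactly the cases that force human review.
        raise ValueError(f"unmappable severity {value!r} from {vendor}")
    return mapped

assert normalize_severity("vendor_a", 4) == normalize_severity("vendor_b", "heavy")
```

Without a shared contract like this, Vendor B's settlement engine receives Vendor A's raw codes as inputs it was never trained on, and every handoff regenerates the manual reconciliation the automation was meant to remove.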
Line-of-business complexity. Insurance is not a single product. A carrier writing personal auto, homeowners, commercial property, general liability, and workers' compensation handles five fundamentally different claim types with different data requirements, different regulatory frameworks, and different settlement patterns. An AI tool trained on personal auto photo damage estimation does not generalize to commercial property roof damage assessment. This line-of-business specificity means that scaling AI across a carrier's full portfolio requires either building or buying separate models for each line, each with its own training data, validation framework, and governance structure.
Legacy system constraints. The Sedgwick report identifies a structural barrier that compounds vendor fragmentation: legacy claims systems lack the API connectivity required for modern AI integration. Rather than being embedded into core workflows, AI tools are "layered on top of existing platforms," creating an additional system layer that increases complexity rather than reducing it. From tracking carrier technology modernization timelines, this pattern is common. The carriers with the oldest policy administration and claims systems face the highest integration costs, which creates a paradox: the carriers that could benefit most from AI automation are often the ones least equipped to implement it at scale.
The Plug-and-Play Argument: Modular AI vs. Fixed Investment
A Carrier Management executive viewpoint published in late April 2026 offers a strategic framework for addressing the fragmentation problem. Kurt Diederich, CEO of insurance software provider Finys and a 25-year veteran of insurance technology, argues that carriers should treat AI as a modular capability within a "plug-and-play operating model" rather than as a fixed, monolithic investment.
Diederich's core argument is that the insurance industry has shifted from an innovation constraint to a selection and lifecycle management challenge. Most core AI use cases, including submission intake, underwriting support, claims triage, document processing, and customer service augmentation, have already been developed and are available from multiple vendors. The question carriers face is not whether AI can be applied to a given workflow but which solution to implement, when, and how to manage the inevitable replacement cycle as better tools emerge.
The historical parallel Diederich draws is instructive. He compares the current AI vendor landscape to the early 2000s internet proliferation: high entrant volume, uneven differentiation, and rapid capability evolution. Many of the internet-era insurance technology vendors that attracted significant carrier investment in 2001-2003 no longer exist. The same consolidation pattern will likely play out in the AI vendor market, which means carriers making large, fixed investments in specific AI platforms risk accumulating technical debt when those platforms are acquired, deprecated, or surpassed.
The plug-and-play alternative structures AI deployment so that individual components can be evaluated, implemented, and replaced with minimal disruption to the broader technology ecosystem. This approach requires standardized integration interfaces (APIs and data contracts), clear performance benchmarks for each AI component, and governance processes that allow for systematic vendor evaluation on a rolling basis rather than multi-year procurement cycles.
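One way to picture the standardized interface this approach requires is a structural contract that every interchangeable component must satisfy. The sketch below uses a Python `Protocol`; the class names, fields, and thresholds are assumptions standing in for real vendor integrations.

```python
from typing import Protocol
from dataclasses import dataclass

@dataclass
class DamageEstimate:
    claim_id: str
    amount: float       # estimated repair cost, USD
    confidence: float   # model confidence, 0-1

class DamageEstimator(Protocol):
    """Data contract every interchangeable estimator must satisfy."""
    def estimate(self, claim_id: str, photos: list[str]) -> DamageEstimate: ...

class VendorAEstimator:
    def estimate(self, claim_id, photos):
        # Placeholder logic standing in for a vendor API call.
        return DamageEstimate(claim_id, amount=4200.0, confidence=0.91)

class VendorBEstimator:
    def estimate(self, claim_id, photos):
        return DamageEstimate(claim_id, amount=4350.0, confidence=0.88)

def process_claim(claim_id: str, photos: list[str], estimator: DamageEstimator):
    est = estimator.estimate(claim_id, photos)
    # Performance benchmarks (here, a confidence floor) live in the
    # carrier's workflow, not inside any one vendor's tool.
    return est if est.confidence >= 0.9 else None

# Replacing Vendor A with Vendor B changes one constructor, nothing else.
print(process_claim("CLM-001", ["roof.jpg"], VendorAEstimator()))
```

Because the workflow depends only on the contract, a vendor that is acquired, deprecated, or surpassed can be swapped out at the constructor rather than through a system-wide rebuild.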
From an actuarial perspective, the plug-and-play model has a direct implication for how carriers allocate technology spending. Rather than capitalizing a large AI platform investment and amortizing it over five to seven years, carriers would expense smaller, more frequent AI tool deployments as operating costs. This changes the expense ratio arithmetic: instead of a front-loaded capital investment followed by projected efficiency gains (the J-curve pattern Morgan Stanley describes), carriers would see a steadier expense profile with more immediate, measurable returns from each modular deployment.
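The expense-ratio arithmetic can be made concrete with assumed numbers. Everything below (the $10M platform, the $1.5M modular spend, the $500M premium base) is illustrative, not sourced from the report.

```python
# Scenario 1: a capitalized platform amortized over five years.
platform_cost = 10_000_000
amort_years = 5
annual_amortization = platform_cost / amort_years   # $2.0M/yr

# Scenario 2: one modular deployment expensed each year.
modular_annual = 1_500_000

premium = 500_000_000  # assumed annual earned premium

platform_expense_pts = 100 * annual_amortization / premium
modular_expense_pts = 100 * modular_annual / premium
print(f"platform: {platform_expense_pts:.2f} pts of premium per year")
print(f"modular:  {modular_expense_pts:.2f} pts of premium per year")
```

The annual drag is similar in size, but the shape differs: the platform commits the full 0.40 points for the amortization period regardless of results, while the modular path re-decides 0.30 points each year, which is what flattens the J-curve.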
Data Silos as the Core Architectural Constraint
Beneath the vendor fragmentation and the modular-vs-monolithic strategy debate lies a more fundamental challenge: data architecture. Every source we reviewed for this analysis, from Sedgwick's claims-specific findings to SAS's industry-wide predictions to Grant Thornton's governance survey, identifies data quality and accessibility as the binding constraint on AI scalability.
The problem has two dimensions.
Internal silos. Carrier data is typically organized by line of business, with separate systems, separate data dictionaries, and separate governance for personal lines, commercial lines, specialty, and reinsurance. A carrier's personal auto claims data lives in one system, its commercial property data in another, and its general liability data in a third. Building an AI model that operates across these silos requires either a data lake architecture that normalizes the inputs or separate models for each silo, both of which require significant investment in data engineering before any AI model can be trained.
External inconsistency. AI tools ingest data from external sources: weather services, building databases, medical cost indices, fraud watchlists, telematics providers, and IoT sensors. Each external data source has its own format, update frequency, and quality characteristics. When multiple AI tools pull from different external sources for the same claim, the resulting data inconsistencies propagate through the AI decision chain, producing conflicting outputs that require human adjudication.
SAS's December 2025 industry predictions capture where this data architecture challenge leads. Franklin Manchester of SAS projects that a Fortune 500 insurer will begin phasing out traditional policy administration systems entirely, replacing them with AI copilots that interact directly with unified data layers. If that prediction materializes, it represents the logical endpoint of the data integration challenge: rather than connecting AI tools to legacy systems through middleware, carriers would rebuild their core data architecture around AI-native structures. The investment required for that transition explains why only 7% have reached scale. Building the data foundation is harder, slower, and more expensive than buying AI tools.
The Regulatory Overlay: 23 States and a Vendor Registry
Layered on top of the architectural challenges is a regulatory framework that is evolving rapidly and adding compliance requirements to every AI deployment.
NAIC Model Bulletin adoption. By late 2025, 23 states and Washington, D.C., had adopted the NAIC's 2023 AI Model Bulletin in some form. The bulletin establishes principles-based expectations for insurer AI governance, covering transparency, fairness, accountability, and risk management. While it is not prescriptive in the way a detailed regulation would be, its adoption across 24 jurisdictions creates a baseline expectation that carriers must demonstrate AI governance capabilities during market conduct examinations.
The AI Systems Evaluation Tool. The NAIC's Spring 2026 meeting in San Diego (March 22-25) advanced several AI oversight initiatives. Multiple states have launched or plan to launch pilot programs using an NAIC-designed assessment tool that evaluates insurer AI systems, data sources, governance practices, and high-risk use cases. State regulators selected participating insurers based on market share, lines of business, and anticipated AI reliance, with pilots focusing primarily on P&C and life insurance providers. The tool operationalizes what the model bulletin describes in principle: examiners can now systematically evaluate how carriers develop, deploy, and monitor AI systems during regular examination cycles.
Third-party vendor registry. The NAIC is advancing a proposal to establish a registry for AI model and dataset vendors serving insurers. The initiative would give regulators visibility into the third-party tools carriers rely on, while ensuring vendors maintain appropriate governance standards. The registry is not intended to relieve insurers of their existing vendor diligence obligations, but it signals that regulators view the vendor fragmentation problem through a governance lens. When a carrier uses five different AI vendors across its claims workflow, each vendor's governance practices become relevant to the carrier's own regulatory compliance.
Risk taxonomy prioritization. The Spring 2026 meeting also discussed operationalizing a risk taxonomy that assigns varying risk levels to different AI use cases. This framework would help regulators prioritize examination resources toward high-risk applications, particularly AI systems that make or influence underwriting and pricing decisions. The taxonomy approach acknowledges that an AI chatbot handling first-contact customer inquiries presents different risk characteristics than an AI model setting claim reserves or recommending coverage denials.
Agentic AI governance. Regulators at the Spring 2026 meeting addressed material risks associated with agentic AI systems, including accountability assignment difficulties, cascading errors across multiple AI agents, and performance limitations. Recommended mitigation strategies include agent monitoring, clear accountability frameworks, redesigned governance structures, and human-in-the-loop escalation protocols for high-risk scenarios. This is particularly relevant for carriers pursuing the kind of end-to-end claims automation that the 7% scalable tier implies: agentic systems that coordinate multiple AI tools across the claims lifecycle will face the most intense regulatory scrutiny.
The Governance Readiness Gap
Grant Thornton's 2026 AI Impact Survey, based on 950 business leaders surveyed in February and March 2026 (including 100 insurance-specific respondents), reveals how unprepared most carriers are for the regulatory environment described above.
The survey findings paint a picture of an industry that has adopted AI faster than it has built the governance infrastructure to manage it:
- 62% of insurance leaders rate their AI maturity as "scaling across multiple functions," 13 percentage points above the cross-industry average. This self-assessment is notably higher than the 12% mature rate in the Sedgwick data, suggesting that executives may be overestimating their organizations' operational AI maturity.
- Only 24% are "very confident" they could pass an independent AI governance review within 90 days. That means 76% of insurance leaders cannot demonstrate adequate governance on demand, precisely the capability that the NAIC evaluation tool pilots are designed to test.
- 68% say AI controls exist but evidence is "fragmented across teams and tools." This mirrors the operational fragmentation identified in the claims AI data: governance is siloed just as the technology is siloed.
- Only 7% believe their workforce is fully ready for AI adoption, while 39% say frontline employees need the most support. For claims operations, frontline employees are the adjusters who must interact with AI outputs, override AI recommendations when warranted, and maintain the human oversight that 75% of professionals say AI requires.
- 56% name regulatory or compliance uncertainty as a top barrier to scaling AI. With 23 states plus D.C. adopting the NAIC model bulletin and the evaluation tool pilots underway, this uncertainty is likely to increase before it decreases.
The disconnect between self-assessed maturity (62% say they are scaling) and demonstrated governance readiness (24% confident in passing a review) is the governance version of the 82%-vs.-7% adoption-vs.-scale gap. Carriers believe they are further along than their governance infrastructure supports.
Industry Spending and Market Projections
The investment flowing into insurance AI despite these structural challenges is substantial and accelerating.
AI's value in the insurance sector is projected to grow from approximately $10 billion in 2025 to nearly $80 billion by 2032, an eightfold increase over seven years. Roots Automation projects insurance AI spend will grow by more than 25% in 2026 alone, with more than 35% of insurers deploying AI agents across at least three core functions. Grant Thornton's survey finds that 52% of insurance leaders report AI-enabled revenue growth, 50% report cost reduction, and 62% report improved decision-making insights.
These investment figures create a paradox that actuaries should recognize. The industry is spending aggressively on AI while acknowledging that the structural barriers to scale remain unresolved. If 82% adoption produces only 7% scalability, increasing spending without addressing the architectural, data, and governance foundations may simply widen the maturity gap rather than close it. More AI tools deployed into a fragmented architecture produce more fragmentation, not less.
Roots Automation's own data illustrates this tension. While more than 90% of carriers tested AI in 2025, only 22% reached full production deployment. The prediction that 35% will deploy AI agents across three or more core functions in 2026 implies a significant acceleration, but it also raises the question of whether those deployments will be integrated or simply add another layer to the vendor sprawl.
The Comparison to the Early Internet Boom
Diederich's comparison of the current AI vendor landscape to the early 2000s internet proliferation deserves closer examination because the structural parallels are instructive.
In the early internet era, insurers invested heavily in web-based distribution, online policy administration, and digital claims reporting. Many of those investments went to vendors that no longer exist. The consolidation that followed left carriers with stranded technology investments, costly migrations, and a lasting skepticism of "transformational" technology narratives.
The current AI market exhibits similar dynamics: a large number of entrants with overlapping capabilities, venture-funded companies prioritizing growth over profitability, and rapid capability evolution that renders today's state-of-the-art tomorrow's legacy system. For carriers making AI vendor commitments in 2026, the question is not whether the AI technology works today but whether the vendor will exist, remain competitive, and continue supporting its product in 2030.
The modular, plug-and-play approach Diederich advocates is, in part, a risk management strategy for vendor selection uncertainty. By keeping AI components interchangeable, carriers limit their exposure to any single vendor's business trajectory. This is directly analogous to how carriers manage reinsurance panel risk: diversification across counterparties, standardized contract terms, and the ability to replace capacity at renewal.
Why This Matters for Actuaries
The 82%-vs.-7% gap has specific implications for actuarial work across pricing, reserving, and enterprise risk management.
Loss adjustment expense assumptions. For carriers in the 7% scalable tier, AI-driven claims processing improvements should begin appearing in allocated loss adjustment expense (ALAE) ratios within two to four quarters of deployment. Pricing actuaries working with these carriers can begin incorporating prospective LAE reductions into rate indications, supported by the Sedgwick data showing 80% faster processing for low-severity claims. For the 93% of carriers below the scalable tier, projected AI-driven LAE improvements remain speculative and should be documented carefully under ASOP No. 29 if included in rate filings.
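A back-of-envelope version of this rate-indication adjustment, with every input assumed for illustration, looks like the following. The 15% savings rate and 40% eligible share are hypothetical parameters, not figures from the Sedgwick report.

```python
# Assumed baseline figures, for illustration only.
loss_ratio = 0.62       # losses / earned premium
lae_ratio = 0.11        # ALAE + ULAE / earned premium
lae_reduction = 0.15    # assumed LAE savings on the AI-eligible share
eligible_share = 0.40   # share of LAE tied to low-severity, AI-eligible claims

# Apply the savings only to the eligible portion of LAE.
adjusted_lae = lae_ratio * (1 - lae_reduction * eligible_share)
print(f"LAE ratio: {lae_ratio:.3f} -> {adjusted_lae:.3f}")
print(f"loss+LAE:  {loss_ratio + lae_ratio:.3f} -> {loss_ratio + adjusted_lae:.3f}")
```

Even under generous assumptions the movement is well under a point of premium, which is why documenting the basis for each parameter, per ASOP No. 29, matters more than the arithmetic itself.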
Reserve development patterns. AI-assisted FNOL intake and triage should produce faster initial reserve posting and, if the AI models are well-calibrated, more accurate initial reserves. Reserving actuaries at carriers deploying claims AI should monitor whether the acceleration from 10 days to 36 hours in FNOL processing is changing development patterns in the first reporting period. Earlier, more accurate initial reserves would reduce IBNR volatility and potentially compress the development tail, but this benefit only materializes if the AI models are producing reliable outputs, which brings the data quality and fragmentation issues back into focus.
Model governance under ASOP No. 56. The NAIC's AI Systems Evaluation Tool and the proposed vendor registry directly affect actuaries responsible for model governance. Under ASOP No. 56, actuaries must understand the limitations of models used in their work. When AI tools from multiple vendors feed into claims decisions that ultimately affect loss development, the actuary's governance responsibility extends to the entire chain of AI tools, not just the actuarial models at the end of the workflow. The vendor fragmentation that Sedgwick identifies makes this governance challenge substantially harder: five vendors means five sets of model documentation, five validation frameworks, and five sets of performance metrics to monitor.
Expense ratio projections. The gap between broad adoption and scalable deployment has direct implications for how actuaries project expense ratio trends. The AM Best data showing a 2.4-point P&C expense ratio decline over 11 years (2014-2024) was driven primarily by non-AI factors: remote work and operational consolidation. Morgan Stanley projects an additional 200 basis points from AI by 2030. If only 7% of carriers are currently achieving scalable AI results, that 200-basis-point projection requires a significant acceleration in the scale-up rate over the next four years. Actuaries incorporating prospective AI-driven expense improvements into financial projections or rate filings should stress-test those assumptions against the current 7% scalability rate.
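One crude way to run that stress test is to scale the industry-wide 200-basis-point projection by scenarios for how many carriers actually reach scalable deployment by 2030. The scenario shares below are stress-test inputs of our own, not forecasts, and the proportional-realization model is a deliberate simplification.

```python
# Morgan Stanley's industry-wide projection, per the text above.
projected_bps = 200

# Assumed scenarios for the share of carriers reaching scalable AI
# by 2030, starting from today's 7%.
scenarios = {"status quo": 0.07, "moderate scale-up": 0.30, "aggressive": 0.60}

for name, scaled_share in scenarios.items():
    # Crude model: the industry benefit is realized in proportion to
    # the share of carriers that reach scalable deployment.
    realized = projected_bps * scaled_share
    print(f"{name:>18}: ~{realized:.0f} bps of expense-ratio improvement")
```

At the current 7% scalability rate the realized improvement is roughly 14 basis points, an order of magnitude below the headline projection, which is the gap a stress test should force a filing to confront.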
Human oversight requirements. Sedgwick's finding that 75% of claims professionals believe AI requires human oversight, combined with 90% saying AI must be orchestrated across business processes, creates a workforce planning dimension that actuaries should factor into expense projections. AI does not eliminate claims adjusters. At current maturity levels, it changes their role from processing volume to supervising AI outputs and handling exceptions. The cost structure of a claims operation staffed with fewer, higher-skilled adjusters overseeing AI tools is different from one staffed with more adjusters doing manual processing, and that difference flows into expense and LAE assumptions.
From 82% to Scale: What Has to Change
The path from broad adoption to scalable deployment, based on the evidence reviewed across these sources, requires progress on three fronts simultaneously.
Data architecture investment. Before adding more AI tools, carriers need unified data layers that normalize inputs across lines of business and external sources. This is the most expensive and least visible investment, but it is the foundation that determines whether individual AI tools can interoperate. The carriers in the 7% scalable tier have already made this investment. Everyone else is trying to run AI on top of fragmented data.
Governance infrastructure. The 76% of insurance leaders who cannot demonstrate adequate AI governance on demand are exposed to regulatory risk that will intensify as the NAIC evaluation tool pilots expand and the vendor registry takes shape. Building governance frameworks that span multiple AI vendors, document model performance across deployment environments, and maintain audit trails for regulatory examination is organizational work, not technology work. It requires cross-functional governance committees, enterprise AI inventories, and systematic vendor risk management.
Strategic vendor management. The plug-and-play model Diederich describes requires capabilities that most carrier procurement functions do not currently have: standardized AI performance benchmarks, rolling evaluation cycles, and integration architectures that allow component replacement without system-wide disruption. Building these capabilities is a prerequisite for sustainable AI scaling, not a nice-to-have that can be deferred until after the technology is deployed.
The 82% headline number tells a story about technology readiness. The 7% number tells a story about organizational readiness. Closing the gap between them is not a technology problem. It is a data, governance, and strategy problem that will determine which carriers extract lasting value from their AI investments and which accumulate cost without corresponding improvement in underwriting and claims outcomes.
Sources
- Don Jergler, "Carriers Using AI for Claims But Adoption is Fragmented, Report Shows," Claims Journal, March 4, 2026. claimsjournal.com
- "Carriers Using AI for Claims but Adoption Is Fragmented, Report Shows," Insurance Journal, March 23, 2026. insurancejournal.com
- "AI Adoption in Property Claims Remains Fragmented Despite Rapid Growth," Risk & Insurance, 2026. riskandinsurance.com
- "Carriers Using AI for Claims but Adoption Is Fragmented, Report Shows," Carrier Management, March 11, 2026. carriermanagement.com
- Kurt Diederich, "Executive View: AI Strategy in Insurance Requires Plug-and-Play Operating Model," Carrier Management, April 28, 2026. carriermanagement.com
- Sedgwick, "Future-ready property claims: Leveraging technology and AI for a strategic advantage," March 2026. sedgwick.com
- SAS, "Insurance's new operating system for 2026: AI," December 2, 2025. sas.com
- "10 Insurance AI Predictions for 2026: Forecasting the Shift From Promise to Performance," Roots Automation, 2026. roots.ai
- "Key AI, Cybersecurity, and Privacy Takeaways from the NAIC 2026 Spring Meeting," Alston & Bird, April 2026. alstonprivacy.com
- "How the NAIC AI Model Bulletin Is Evolving and Why Insurers Should Prepare Now," Plante Moran, March 2026. plantemoran.com
- "Insurance Insights: 2026 AI Impact Survey Report," Grant Thornton, 2026. grantthornton.com
- "Sedgwick Report: Only 7% of Insurers Scale AI in Claims," TechEdge AI, 2026. techedgeai.com
Further Reading on actuary.info
- Which Carriers Are Converting AI Spend Into Actuarial Results - Cross-carrier ROI scorecard benchmarking AIG, Chubb, Travelers, and Progressive against Alpha FMC's 2026 measurable performance threshold.
- Travelers Deploys Anthropic AI Assistants to 10,000 Staff - Inside the largest carrier-to-foundation-model partnership and the build-vs-buy framework for enterprise AI deployment.
- NAIC Four-Tier AI Risk Taxonomy Redefines Insurer Compliance - How the NAIC's risk-based prioritization framework changes the compliance landscape for carrier AI deployments.
- AI Fraud Detection in P&C: Testing Deloitte's $160B Savings Claim - Actuarial analysis of AI fraud detection ROI, vendor capabilities from Shift Technology and FRISS, and the NAIC evaluation pilot.
- AI Governance Gap in Actuarial Practice - ASOP 56 compliance and model risk management frameworks for AI systems in actuarial work.