Three years of comparing consulting frameworks from McKinsey, BCG, Deloitte, and Oliver Wyman in insurance AI coverage make the convergence pattern of early 2026 stand out for its specificity. When two firms that normally differentiate their advisory products arrive at structurally the same conclusion within the same quarter, the signal is worth parsing for actuarial leaders whose pricing, reserving, and modeling workflows sit directly in the path of whatever transformation framework their carrier adopts.

BCG’s March 2026 report, “The AI-First Property and Casualty Insurer,” lays out a three-level sequencing model for how P&C carriers should adopt AI. McKinsey’s April 2026 publication, “Can Agentic AI (Finally) Modernize Core Technologies in Insurance?,” introduces a “modernization factory” concept targeting the same carrier audience with a different architectural philosophy. BCG’s separate January 2026 piece on agentic AI in core insurance IT modernization completes a trio of major consulting publications that together define the strategic playbook carriers are now evaluating for board-level AI investment decisions.

BCG’s Deploy-Reshape-Invent Model

BCG categorizes AI use cases into three levels that are intended to be pursued sequentially, though BCG acknowledges that carriers with mature data infrastructure may pursue multiple levels simultaneously across different business functions.

Level 1: Deploy. Embedding generative AI into everyday tasks, enhancing core operations such as underwriting, claims processing, and customer support through automation and improved decision-making. This level targets data-intensive, repetitive tasks where automation most directly reduces manual effort. Risk mitigation at the Deploy level relies on robust validation processes that cross-check generative AI outputs against key data points. The Deploy level represents the quick wins: carriers can implement these use cases with existing data and technology infrastructure, generating measurable ROI within months rather than years.

Level 2: Reshape. Transforming critical business functions by redefining processes, upskilling talent, and modernizing technology infrastructure. Reshape targets high-impact areas like risk assessment, site inspection, and compliance management. BCG emphasizes a Human in the Loop (HITL) methodology at this level, balancing automation with quality assurance. The Reshape level requires organizational change management because it alters how roles function, not just how tasks execute. Job descriptions change. Reporting structures shift. The actuarial review workflow that took three days with manual data gathering might take four hours with AI-assisted preparation, but the actuary’s judgment role becomes more concentrated and more consequential.

Level 3: Invent. Creating entirely new business models, products, and internal functions with generative AI. Invent includes migrating outdated legacy systems to modern, cloud-based architecture. BCG recommends three-month pilot testing for Invent-level initiatives, running generative AI models across multiple proof-of-concept scenarios before committing to full deployment. This level is where parametric products, real-time pricing engines, and fully autonomous underwriting cells become feasible. Few carriers have reached Invent at scale, and BCG’s framework acknowledges this honestly rather than overpromising.

Quantifying the Gains: BCG’s Value Chain Projections

BCG projects $35 to $60 billion in potential operating cost reductions for the US insurance market from AI adoption at scale. The range reflects differences in carrier starting points, data maturity, and organizational readiness. The specific gains BCG quantifies across the P&C value chain are more granular than typical consulting estimates:

Value Chain Function | Key Metric | BCG Projection
Underwriting efficiency | Complex-line productivity improvement | Up to 36%
Underwriting quality | Loss ratio improvement from unstructured data utilization | Up to 3 percentage points
Quote turnaround | Time reduction | Up to 60%
Underwriter capacity freed | Time redirected from data gathering to judgment | ~20%
Customer service productivity | Overall agent productivity gains | Exceeding 30%
Claims cost reduction | Operational cost cuts | 30% to 50%
Claims processing speed | End-to-end cycle acceleration | Up to 50%
Simple claims resolution | Real-time resolution capability | Up to 70%
IT migration time | Legacy-to-modern transition duration | 50% reduction
IT migration cost | Program cost reduction | 30% reduction

The underwriting numbers deserve particular scrutiny from pricing actuaries. A 3-percentage-point loss ratio improvement from better utilization of unstructured data represents a meaningful shift in how risk selection interacts with pricing adequacy. If underwriters can access and process submission data that previously went unread (SOV attachments, loss run narratives, inspection photos), the information asymmetry between the carrier and the risk narrows. That narrowing should, in theory, reduce adverse selection and improve the accuracy of individual risk pricing relative to manual class rate application.

BCG describes the underwriter’s role evolution as a shift where “AI triages submissions, retrieves precedents, and drafts initial pricing, empowering underwriters to apply nuanced judgment and strengthen relationships.” The 30% to 40% reduction in active handling time that BCG projects does not eliminate the underwriter; it redirects time from document processing to AI output review. For actuaries building expense assumptions into rate indications, this distinction matters: the headcount reduction may be smaller than the productivity gain implies, because the remaining work requires higher-skill labor.

BCG’s Zero-Based Design vs. McKinsey’s Modular Agent Library

The most consequential divergence between the two frameworks is architectural. BCG advocates “zero-based design,” while McKinsey promotes a “modular agent library.” Both address the same problem (legacy system modernization), but they approach it from fundamentally different directions.

BCG’s zero-based design starts from the outcome a carrier wants and reinvents how to deliver it, rather than automating existing workflows. The principle is that real value comes from redesigning processes around the logic of new software, customizing only where customization adds measurable value. BCG explicitly warns against the common mistake of replicating legacy system configurations on modern platforms, which locks in decades of accumulated complexity without extracting modernization benefits. Combined with agentic AI deployment across every phase of modernization, BCG argues this approach delivers a program that is “financially more feasible, much shorter in duration, and considerably less risky.”

McKinsey’s modular agent library treats AI agents as “a library of atomic capabilities, each with clear inputs, acceptance criteria, and escalation paths to humans.” Rather than deploying monolithic solutions, McKinsey recommends building modular agents that improve control, make outputs auditable, enable reuse across discovery, data, testing, and cutover phases, and allow carriers to update specific components as models and tooling evolve without destabilizing the whole workflow. McKinsey frames modernization as a “coordinated portfolio of opportunities” rather than a single large-scale migration.

The practical difference for actuarial teams is significant. Under BCG’s zero-based approach, actuaries have the opportunity to advocate for implementing modern rating algorithms rather than recreating legacy rate table structures. If the migration is happening regardless, zero-based design creates the opening to modernize pricing logic, territory definitions, and factor structures that have accumulated complexity without actuarial justification. Under McKinsey’s modular approach, the emphasis is on preserving existing logic with high fidelity while using agents to accelerate the mechanical work of extraction, translation, and testing. The modular approach is more conservative; the zero-based approach is more transformative but carries higher execution risk.

Where Both Frameworks Converge: Testing and Reconciliation

Despite their architectural differences, BCG and McKinsey agree on a critical empirical finding: testing, reconciliation, and defect-cycle compression offer the greatest immediate productivity gains from agentic AI deployment. McKinsey quantifies this at 15% to 90% improvement, the widest and highest range across all modernization phases.

McKinsey breaks the productivity gains down by modernization phase:

Modernization Phase | Productivity Improvement Range
Discovery, reverse engineering, product/rule understanding | 20% to 50%
Target configuration acceleration | 15% to 40%
Testing, reconciliation, defect cycle compression | 15% to 90%
Cutover and operations readiness | 10% to 40%

The convergence on testing and reconciliation reflects a shared observation about where modernization programs actually consume time and money. McKinsey frames the bottleneck precisely: in policy administration migrations, “the biggest bottlenecks are rarely typing code but rather the loops of discovery, mapping, testing, reconciliation, and cutover.” Each loop involves subject-matter experts manually reviewing legacy logic, documenting business rules, verifying data transformations, and reconciling outputs between old and new systems. When defects surface during testing, the loop resets.

BCG reinforces this with case evidence. A Central European insurer’s core system program ran for eight years before producing a write-off exceeding $500 million. A Southern European insurer’s claims platform program completed at 500% over budget. Both failures traced to the same root cause: the programs began without sufficient understanding of what the legacy system actually contained.

For actuaries, the testing and reconciliation phase is where pricing system continuity gets validated. Rating algorithms encoded in legacy systems must produce equivalent results on modern platforms during parallel-run periods. Any deviation between legacy and target system premiums creates rate adequacy risk. The agentic AI approach to testing, where agents automatically compare outputs and link discrepancies to root causes, directly serves the actuarial need for verified premium reconciliation at the policy level.
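
The output-comparison step that both frameworks highlight can be sketched in a few lines. This is a minimal illustration, not either firm's tooling: it compares legacy and target system premiums policy by policy during a parallel run and flags deviations beyond a tolerance for root-cause review. The policy IDs, premiums, and 0.5% tolerance are all hypothetical.

```python
# Hypothetical sketch of policy-level premium reconciliation during a
# parallel run. Flags any policy whose target-system premium deviates
# from the legacy-system premium by more than a set tolerance, plus
# policies missing from the target extract entirely.

TOLERANCE = 0.005  # flag deviations greater than 0.5% of legacy premium

legacy = {"P-1001": 1842.00, "P-1002": 976.50, "P-1003": 2310.25}
target = {"P-1001": 1842.00, "P-1002": 981.40, "P-1003": 2310.25}

def reconcile(legacy_premiums, target_premiums, tolerance=TOLERANCE):
    """Return policies whose target premium deviates beyond tolerance."""
    exceptions = []
    for policy_id, legacy_premium in legacy_premiums.items():
        target_premium = target_premiums.get(policy_id)
        if target_premium is None:
            exceptions.append((policy_id, "missing in target", None))
            continue
        deviation = (target_premium - legacy_premium) / legacy_premium
        if abs(deviation) > tolerance:
            exceptions.append((policy_id, "premium deviation", deviation))
    return exceptions

for policy_id, reason, deviation in reconcile(legacy, target):
    print(policy_id, reason, deviation)
```

In practice the agentic layer described by the firms would generate and triage these exceptions automatically; the actuarial task is setting the tolerance and signing off on the explained deviations.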

The 10-20-70 Formula: Why Technology Is Only 20% of the Challenge

BCG introduces a resource allocation formula that every actuarial leader evaluating AI investments should internalize: 10% algorithms, 20% technology and data, 70% people and process. BCG states explicitly that “human and organizational factors account for 70% of scaling challenges among insurers.”

This formula has direct implications for how carriers should budget and staff AI transformation programs. The technology spend that captures board attention and analyst coverage represents only 20% of the actual challenge. The remaining 70% involves change management, role redefinition, training, governance framework development, and organizational resistance. BCG defines three distinct human oversight roles that carriers must staff and train:

  1. Review-and-Approve: High-volume validation performed by trained reviewers who verify AI outputs against defined quality thresholds.
  2. Exception Handling: Low-volume edge cases that require senior domain expertise, including actuarial judgment on unusual risks or complex treaty structures.
  3. Quality Calibration: Ongoing feedback loops maintained by subject-matter experts who tune AI system performance based on observed outcomes.

The 10-20-70 split aligns with Capgemini’s May 2026 P&C research finding that 72% of carrier AI investments go to technology and infrastructure, while only 28% go to change management. Capgemini also found that 47% of employees given AI tools report unchanged workdays after 18 months of access. The carriers Capgemini identifies as “intelligence trailblazers” (the top 10% by AI maturity) are four times more likely to invest in change management beyond basic training. These trailblazers see 21% higher revenue growth and approximately 51% greater share price appreciation over three years.

For actuarial departments specifically, the 70% people-and-process challenge manifests in two areas. First, actuaries must learn to validate AI-generated outputs rather than producing those outputs manually. The skill set shifts from data gathering and calculation to output review, anomaly detection, and judgment application. Second, actuarial workflows must be redesigned to integrate AI outputs at the right points rather than bolting AI onto existing manual processes. A pricing actuary who receives an AI-generated rate indication still needs to validate assumptions, check regulatory constraints, and exercise professional judgment, but the sequence and time allocation of those steps changes fundamentally.

The Scaling Gap: From 38% to 7%

BCG reports that only 38% of P&C insurers are realizing AI value at scale. The figure is difficult to square precisely with two other survey results: the Microsoft survey finding (discussed in McKinsey’s investor implications report) that 22% of insurers plan agentic AI deployment by end of 2026, and the more pessimistic Sedgwick-derived finding that only 7% of insurers have reached full-scale AI deployment. The variation reflects different definitions of “scale” and different survey populations, but the directional message is consistent: most carriers remain stuck between proof of concept and enterprise deployment.

BCG also projects that AI spending will triple as a share of revenue across the insurance industry in 2026. This spending acceleration, combined with the low scaling percentage, reinforces the J-curve pattern visible in Morgan Stanley’s carrier-level data: implementation costs front-load while productivity gains back-load, creating a temporary earnings drag before the ROI materializes.

What determines where a carrier lands in the 10% to 90% productivity range is not primarily the AI technology itself. BCG’s framework and McKinsey’s data converge on the same set of determinants: data quality and accessibility, the degree of legacy system documentation, organizational readiness for process redesign, governance maturity, and whether the carrier treats AI deployment as a technology project or a business transformation. Carriers with well-documented business rules, clean data pipelines, and executive sponsorship for process redesign consistently land toward the upper end of the range. Carriers that bolt AI onto undocumented legacy systems without organizational change land toward the lower end.

The Unit Economics Shift: Why Reusability Changes Everything

Both BCG and McKinsey identify a structural economic argument that distinguishes agentic AI modernization from prior technology waves. BCG states that “agentic capabilities fundamentally change the unit economics of modernization because once core agents are built and governed, their marginal cost of reuse sharply declines.” McKinsey uses nearly identical language: once agent capabilities are established, “the incremental cost of modernizing additional products and systems can fall quickly because the same agents, patterns, and context layers can be reused across waves and domains.”

This reusability argument has specific implications for multi-line P&C carriers. An agent built to extract business rules from a personal auto policy administration system can be adapted for homeowners, commercial auto, or commercial property with decreasing marginal effort. Each subsequent migration wave inherits context, validated patterns, and trained agents from prior deployments. McKinsey calls this the “modernization factory” concept; BCG describes it as declining marginal costs of reuse. The economic implication is the same: the first modernization wave is the most expensive per unit of value, and the cost curve flattens materially for subsequent waves.
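
The declining-marginal-cost claim can be made concrete with toy numbers. The sketch below assumes a one-time agent-library build cost plus a per-wave adaptation cost that shrinks by a fixed reuse factor each wave; every figure is hypothetical and chosen only to show the shape of the curve, not drawn from either report.

```python
# Toy illustration of the reusability economics both firms describe:
# a fixed agent-library build cost amortized across migration waves,
# with per-wave adaptation cost declining as patterns are reused.
# All figures are hypothetical.

agent_library_build = 20.0   # one-time build cost, $M (hypothetical)
base_wave_cost = 10.0        # first-wave adaptation cost, $M (hypothetical)
reuse_discount = 0.6         # each wave costs 60% of the prior wave

def cumulative_cost(num_waves):
    """Total program cost after num_waves migration waves."""
    total = agent_library_build
    wave_cost = base_wave_cost
    for _ in range(num_waves):
        total += wave_cost
        wave_cost *= reuse_discount
    return total

for waves in (1, 2, 3, 4):
    print(waves, round(cumulative_cost(waves), 2))
```

Under these assumptions the marginal cost of each wave falls from 10 to 6 to 3.6 to 2.16, which is the "cost curve flattens" pattern in miniature: the first wave carries the library build, and each later wave inherits it.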

For carriers considering the sequencing decision, this means the choice of which line of business to modernize first has strategic significance beyond that line’s standalone economics. The first-mover line generates the agent library, documentation patterns, and testing templates that reduce cost for every line that follows. BCG’s Deploy-Reshape-Invent framework suggests starting with the line that has the most structured data and the clearest business rules, not necessarily the line with the largest premium volume or the worst legacy system burden.

Actuarial Implications Across the Value Chain

The BCG and McKinsey frameworks have specific, traceable implications for actuarial work. These are not generic “AI will change everything” assertions; they map to concrete workflow changes that pricing, reserving, and modeling actuaries will encounter as their carriers progress through transformation phases.

Pricing and rate indication workflows. BCG projects that AI triages submissions, retrieves precedents, and drafts initial pricing. For pricing actuaries, this means the data preparation step that currently consumes 60% to 80% of project time (per multiple SOA workflow surveys) compresses significantly. The actuary’s role shifts from data gathering and cleaning to validating AI-prepared pricing exhibits, verifying that the correct rating factors are applied, and exercising judgment on accounts where the AI-generated price deviates from expected ranges. Under ASOP No. 29, the actuary remains responsible for the adequacy of the rate regardless of how the supporting analysis was produced.

Reserving and loss development. Claims cost reductions of 30% to 50% and processing speed increases of up to 50% do not change the ultimate loss, but they change how quickly and accurately that loss is recognized. Faster claims processing compresses the IBNR development tail for short-tail lines. AI-assisted claims triage that routes complex claims to specialized adjusters earlier in the process should reduce LAE and potentially reduce severity through earlier intervention. Reserving actuaries must anticipate these shifts in development patterns and adjust loss development factors accordingly. The transition period, where some claims are processed through the legacy workflow and others through the AI-enhanced workflow, creates a mixed-population reserving challenge analogous to what actuaries manage during operational changes like call center consolidation.
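
One simple way to handle the mixed-population period described above is to blend legacy and AI-era age-to-age factors by the share of claims flowing through each workflow. The sketch below is an illustrative credibility-style weighting, not a prescribed method, and every factor and mix weight is hypothetical.

```python
# Illustrative mixed-population development adjustment during an AI
# claims-workflow transition: blend legacy and AI-era age-to-age
# factors by the share of claims in each workflow per period.
# All factors and weights are hypothetical.

legacy_ldfs = [1.80, 1.30, 1.10, 1.05]  # legacy-workflow age-to-age factors
ai_ldfs     = [1.55, 1.20, 1.07, 1.03]  # faster recognition compresses the tail
ai_share    = [0.10, 0.35, 0.60, 0.80]  # share of claims in AI workflow by period

def blended_ldfs(legacy, ai, share):
    """Weight each age-to-age factor by the workflow mix in that period."""
    return [s * a + (1 - s) * l for l, a, s in zip(legacy, ai, share)]

print(blended_ldfs(legacy_ldfs, ai_ldfs, ai_share))
```

As the AI share grows across periods, the blended factors migrate from the legacy pattern toward the compressed AI-era pattern, which is exactly the drift reserving actuaries would need to anticipate rather than discover in hindsight.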

Model validation workloads. Under ASOP No. 56, actuaries must evaluate whether models remain appropriate when inputs change. A core system migration changes the input layer for every downstream model. Pricing models, reserving models, and capital models that received data from the legacy system must be revalidated against the new platform’s output format and content. Both consulting frameworks acknowledge this surge in validation work, and McKinsey’s emphasis on auditable outputs at each step provides the documentation trail that actuaries need for compliance. The practical implication is that actuarial departments should plan for a temporary 20% to 40% increase in model validation workload during and immediately after migration cutover periods.

Expense ratio assumptions in rate filings. Morgan Stanley projects $9.3 billion in AI-generated operating income for P&C insurers by 2030, driven primarily by expense ratio reductions of approximately 200 basis points. BCG’s framework provides the operational detail behind that projection. If carriers achieve even half of the claimed productivity gains in underwriting (36%) and claims (30% to 50%), the fixed expense ratio for these functions declines materially. Pricing actuaries building prospective expense loads into rate filings must decide when to recognize these savings: too early risks rate inadequacy if the gains lag projections; too late risks competitive disadvantage as faster-moving carriers file lower rates.

Why Two Consulting Firms Published Competing Frameworks in the Same Quarter

The timing of these publications is not coincidental. Tracking consulting firm publication cycles across insurance suggests that the concentration of three major AI frameworks (BCG’s two reports and McKinsey’s modernization piece) within January to April 2026 reflects three converging forces.

First, client demand. Large P&C carriers entered 2026 budget season with AI as a board-level strategic priority for the first time. Travelers’ $1.5 billion annual technology budget, AIG’s multi-agent Palantir deployment, Chubb’s 85% automation target, and Progressive’s implicit ML pricing advantage all became public reference points in Q4 2025 and Q1 2026. Consulting firms publish frameworks to capture advisory revenue from carriers seeking to match or exceed these peer benchmarks.

Second, early implementation evidence. Both firms cite real carrier engagements, though without naming specific clients in most cases. BCG references a top global reinsurance provider where a generative AI-powered smart contracts tool freed 20% to 30% additional capacity for inspectors and drove a 10% to 15% increase in adoption of risk recommendations. McKinsey references carriers where agents accomplish “within days what would take a trained subject matter expert months or even years to complete” in legacy code translation. These are not purely theoretical projections; they reflect early-mover results that the firms are now packaging for broader market consumption.

Third, competitive positioning. BCG and McKinsey are competing for the same carrier advisory engagements. Publishing differentiated but complementary frameworks (BCG’s Deploy-Reshape-Invent maturity ladder versus McKinsey’s modernization factory) positions each firm as the thought leader for a specific transformation philosophy. Carriers that prefer a phased maturity approach will gravitate toward BCG; carriers that prefer an infrastructure-first factory approach will gravitate toward McKinsey. The strategic differentiation is deliberate, but the underlying data and conclusions reinforce each other.

What Practicing Actuaries Should Do Now

Map your carrier’s position on the Deploy-Reshape-Invent spectrum. Most carriers are in Deploy (Level 1) for some functions and have not started for others. Understanding where your organization sits helps you anticipate which actuarial workflows will change first. Claims automation typically leads, followed by underwriting assist, with pricing and reserving model transformation coming in later phases.

Engage in modernization planning before the testing phase. Both BCG and McKinsey emphasize that the business rules agents extract from legacy systems include pricing logic, rating algorithms, and factor tables that only actuaries can validate for correctness. If your carrier is evaluating or initiating a core system migration, actuarial teams should be at the table during discovery, not introduced at user acceptance testing. The discovery phase is when “underdocumented product logic and actuarial settings” (BCG’s phrasing) either get captured correctly or become the root cause of post-migration pricing errors.

Evaluate whether zero-based design creates pricing modernization opportunities. If a migration is happening regardless, BCG’s zero-based principle means actuaries can advocate for implementing modern rating algorithms, GLM-based pricing frameworks, or territory redefinitions rather than recreating COBOL rate tables in a new language. The migration window is the opportunity to modernize pricing logic that has accumulated complexity across decades of manual patches. Once the new system is configured, changing the pricing architecture becomes another multi-year project.

Build expense ratio transition scenarios for rate filings. The consulting firm consensus projects material expense ratio improvements from AI deployment. Pricing actuaries need credible scenarios for when those savings emerge in their specific carrier’s operations. A three-scenario approach (conservative: 25% of projected savings by 2028; base: 50% by 2028; aggressive: 75% by 2028) provides defensible rate filing assumptions without over-committing to projections that lack credible loss experience support.
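
The three-scenario arithmetic is simple enough to sketch directly. The 200 basis point full-savings figure follows the Morgan Stanley projection cited in this piece; the 28% starting expense ratio is a hypothetical placeholder a pricing actuary would replace with their carrier's actuals.

```python
# Worked example of the three-scenario expense load approach described
# above. The 200 bps full-savings assumption follows the Morgan Stanley
# projection cited in this article; the starting expense ratio is a
# hypothetical placeholder.

current_expense_ratio = 0.280   # hypothetical carrier expense ratio
full_ai_savings_bps = 200       # projected full AI savings, in basis points

scenarios = {"conservative": 0.25, "base": 0.50, "aggressive": 0.75}

def prospective_expense_ratio(current, full_savings_bps, realization):
    """Expense load assuming a fraction of projected savings is realized."""
    return current - (full_savings_bps / 10_000) * realization

for name, realization in scenarios.items():
    ratio = prospective_expense_ratio(current_expense_ratio,
                                      full_ai_savings_bps, realization)
    print(f"{name}: {ratio:.3%}")
```

Under these assumptions the prospective expense loads are 27.5%, 27.0%, and 26.5% respectively, a spread wide enough to matter in a rate indication but narrow enough to defend against the objection that the savings lack credible experience support.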

Plan for the model validation surge. Every downstream model consuming data from a migrating system requires revalidation under ASOP No. 56. Build this workload into departmental planning 6 to 12 months before projected cutover dates. Identify which models are most sensitive to input format changes and prioritize their validation. The testing and reconciliation phase, where both consulting frameworks project the highest productivity gains from agentic AI, is exactly the phase where actuarial validation demands peak.

Sources

  1. BCG, “The AI-First Property and Casualty Insurer” (March 2026) - Deploy-Reshape-Invent framework, $35-60B US market impact, 10-20-70 resource allocation formula, value chain productivity projections.
  2. BCG, “Agentic AI Can Power Core Insurance IT Modernization” (January 2026) - Zero-based design principle, eight-phase modernization model, discovery phase automation, reusability economics.
  3. McKinsey, “Can Agentic AI (Finally) Modernize Core Technologies in Insurance?” (April 2026) - Modular agent library framework, 10-90% productivity range by migration phase, modernization factory concept.
  4. McKinsey, “AI in Insurance: Understanding the Implications for Investors” (February 2026) - $50-70B revenue opportunity, 7% at-scale deployment, 22% planning agentic AI by end of 2026, 90% automation by 2030 projection.
  5. Risk & Insurance, “Agentic AI Could Deliver Up to 90% Productivity Gains” (May 2026) - Industry coverage of the McKinsey productivity range and bottleneck analysis.
  6. Softtek, “Insurance 2026: Agentic AI, Composable Core, and Governance” (January 2026) - Composable core architecture, EU AI Act August 2026 compliance timeline, Gartner 40% enterprise AI agent prediction.
  7. Capgemini, “The Moment of AI Truth for P&C Insurance” (May 2026) - 72% tech vs. 28% change management spend split, 47% unchanged workdays, intelligence trailblazer benchmarks.
  8. Carrier Management / Morgan Stanley, “Expense Ratio Analysis: AI and Remote Work Drive Better P/C Insurer Results” (January 2026) - 200 basis point expense ratio improvement, $9.3B operating income by 2030, carrier-level automation rates.