Having tracked vendor moves across Verisk, Guidewire, and EXL for the past two years, we see the May 5 MCP announcement as structurally different from anything else in the insurance AI vendor space. Every prior vendor-LLM integration followed the same pattern: wrap an LLM around the vendor’s own interface and call it a copilot. Verisk has done the opposite. By building Model Context Protocol connectors for Anthropic’s Claude, Verisk is pushing its ISO Indications and XactRestore analytics out of its own platform and into a foundation model that underwriters and claims professionals already use for other work. The difference is not cosmetic. It represents a shift from analytics-as-destination to analytics-as-layer, and it carries implications for data governance, vendor competition, and the actuarial workflows that depend on both.
The May 5 announcement also landed on the same day Anthropic launched 10 new finance agent templates and eight additional MCP data connectors for financial services, including partnerships with Dun & Bradstreet, Moody’s, and SS&C Intralinks. Verisk was the only insurance-specific connector in the batch. For carriers evaluating their AI vendor strategy, the timing signals that Anthropic is moving aggressively into financial services, and that Verisk has positioned itself as the insurance bridge to that ecosystem.
What Model Context Protocol Is and Why It Matters for Insurance
Model Context Protocol is an open standard originally developed by Anthropic in November 2024 to solve a specific problem: how do you give a large language model access to external data sources and tools without hard-coding each integration? MCP uses JSON-RPC 2.0 as its message format, carried over transports such as stdio and HTTP, an approach inspired by the Language Server Protocol that standardized how code editors connect to programming language tools. The architecture defines three roles: hosts (the LLM application), clients (connectors within the host), and servers (services that provide data, tools, or context to the model).
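In concrete terms, the client inside the host and the server exchange JSON-RPC 2.0 messages. The sketch below uses the method names and result shapes defined in the MCP specification, but the tool name, arguments, and returned figure are hypothetical illustrations, not Verisk's actual connector schema:

```python
import json

# Client asks the server which tools it exposes (MCP "tools/list" method).
list_request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/list",
}

# Client invokes one of those tools on behalf of the model ("tools/call").
call_request = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {
        "name": "get_loss_cost_indications",  # hypothetical tool name
        "arguments": {"state": "TX", "line": "commercial_auto"},
    },
}

# Server replies with structured content the host hands back to the model.
call_response = {
    "jsonrpc": "2.0",
    "id": 2,
    "result": {
        "content": [
            {"type": "text",
             "text": "Advisory loss cost indication: +6.2% (illustrative)"}
        ]
    },
}

for msg in (list_request, call_request, call_response):
    print(json.dumps(msg))
```

The key property for what follows: the data in `call_response` comes from the server, not from the model, which is what gives connector-delivered figures their provenance.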
The protocol has gained rapid adoption. Within 18 months of its introduction, the MCP ecosystem has grown to over 10,000 active public servers, with adoption from ChatGPT, Cursor, Gemini, Microsoft Copilot, and Visual Studio Code. SDK downloads surpass 97 million per month across Python and TypeScript implementations. In December 2025, Anthropic donated MCP to the Linux Foundation’s Agentic AI Foundation, co-founded by Anthropic, Block, and OpenAI, with supporting organizations including Google, Microsoft, AWS, and Cloudflare. An MCP Developer Summit in New York in April 2026 drew approximately 1,200 attendees.
For insurance, MCP matters because it standardizes how LLMs consume vendor data. Before MCP, each vendor-to-LLM integration was bespoke: custom APIs, proprietary middleware, individual authentication layers. MCP replaces that with a common connector architecture. A carrier using Claude can now query Verisk data through the same interface used to access Moody’s credit ratings or S&P Capital IQ financial data. The implication for actuarial teams is that the data environment inside an LLM is becoming richer and more standardized, which changes how underwriting analysis, trend monitoring, and even rate filing research get done.
The protocol also matters for portability. Because MCP is an open specification now governed by a foundation rather than a single company, connectors built for Claude could theoretically work with other MCP-compatible hosts. Verisk explicitly described its approach as “platform-agnostic” and “model-agnostic” in the press release, suggesting the company views these connectors as the beginning of a multi-platform distribution strategy rather than an exclusive Anthropic partnership.
ISO Indications Inside Claude: What Underwriters and Actuaries Get
The first connector, Verisk Underwriting Intelligence, brings ISO Indications data into Claude’s conversational interface. ISO Indications is the industry-standard source for advisory loss costs, experience data, and regulatory filing signals across 31 lines of business. It draws on Verisk’s statistical database of 34.5 billion records, including 8.2 billion commercial lines records and 21.5 billion personal lines records, built over more than 50 years of market presence.
In the current workflow, an underwriter or pricing actuary accesses ISO Indications through Verisk’s proprietary web interface. Pulling loss cost trend data for a specific state and line typically involves navigating to the right product module, selecting filters, downloading reports, and then manually incorporating the data into a pricing model or rate filing. Verisk estimates that the MCP connector can save “hundreds of hours per carrier per year” by allowing users to query this data through natural language within Claude.
The practical use case looks something like this: a pricing actuary working on a commercial auto rate filing in Texas could ask Claude for the most recent ISO loss cost indications for that state and line, compare them against the carrier’s own experience data already loaded in the conversation, and identify divergences that need investigation. Instead of switching between Verisk’s portal, an Excel workbook, and a filing document, the actuary works within a single conversational interface where the LLM can synthesize data from multiple sources simultaneously.
Patterns we’ve observed in prior vendor AI integrations suggest that the time savings are real but unevenly distributed. Actuaries who already work efficiently with the Verisk portal may see modest gains. The larger productivity improvements typically accrue to less experienced staff who spend significant time navigating unfamiliar interfaces, and to cross-functional use cases where underwriters need quick access to actuarial data they would otherwise request from the pricing team. Reducing that handoff friction is where embedded analytics generates the most measurable workflow compression.
XactRestore: Claims Estimation Through Conversation
The second connector targets an entirely different user base. XactRestore is Verisk’s restoration job management and estimating platform used by contractors, adjusters, and claims professionals to scope and price water damage mitigation, remediation, and restoration projects. It includes proprietary pricing databases, step-by-step field documentation guides, and mobile applications for iOS and Android.
The MCP connector allows restoration professionals to convert natural-language descriptions of damage and scope into structured estimating actions powered by Verisk pricing data. Rather than manually entering line items in XactRestore’s estimating interface, a field adjuster could describe the damage observed and receive a structured estimate grounded in Verisk’s pricing intelligence.
Verisk claims time savings of 30 minutes to two hours per restoration estimate for experienced contractors. For shops running five to 10 estimates per week per estimator, that translates to 2.5 to 20 hours of estimator time recovered per week. The variance is significant because it depends on job complexity: a straightforward water mitigation claim might see 30 minutes of savings, while a complex remediation job with multiple trades and phases could see the full two hours.
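The weekly range quoted above follows directly from Verisk's per-estimate figures:

```python
# Reproducing the quoted weekly range from the per-estimate savings:
# 30 minutes to 2 hours saved per estimate, 5 to 10 estimates per week.
low_hours = 5 * 0.5    # 5 estimates/week at the 30-minute floor
high_hours = 10 * 2.0  # 10 estimates/week at the 2-hour ceiling
print(f"{low_hours} to {high_hours} hours recovered per estimator per week")
```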
For actuaries, the claims-side connector matters for a different reason than the underwriting one. If XactRestore-through-Claude produces more consistent and complete estimates, it could reduce the variance in claims severity data that feeds into reserving models. More standardized estimation also reduces the likelihood of supplemental payments that inflate loss development factors. This continues a trend visible in recent industry data: the Datos Insights ILTF 2026 survey found that claims AI adoption reached 50% of carriers in production, up from approximately 35% a year earlier, with document processing and estimation among the most common production use cases.
The Vendor Platform War: Anthropic vs. OpenAI in Insurance
The Verisk-Anthropic integration does not exist in a vacuum. It is one move in an accelerating competition between foundation model providers for insurance industry share, and the current landscape heavily favors one competitor.
The IA Capital Group survey published on May 6, 2026, found that OpenAI appears in approximately 9 out of 10 insurance carrier technology stacks. Google Gemini is absent from carrier deployments entirely. The survey covered 36 senior carrier technology leaders and documented a sharp shift from pilot to production: the share of carriers with AI in production grew from 37% to 61% in a single year.
OpenAI’s insurance footprint is substantial and growing. Travelers launched an AI Claim Assistant in February 2026 built on OpenAI’s models and Realtime API, handling personal auto damage claims with agentic voice capabilities. OpenAI has 12 insurance AI applications in its approval pipeline with launches expected across North America and Europe. On the same day as the Verisk-Anthropic announcement, OpenAI launched a competing financial services partnership with PwC.
Anthropic’s insurance position is smaller but strategically placed. Its highest-profile insurance partnerships include Allianz, which in January 2026 announced a global partnership making Claude available to all 156,000 employees across 70 countries, and AIG, which uses Claude through Palantir Foundry to power its multi-agent underwriting system. Travelers also deployed Anthropic AI assistants to 10,000 engineers and data scientists, creating a split where the same carrier uses OpenAI for customer-facing claims and Anthropic for internal engineering productivity.
Enterprise LLM market share data from Menlo Ventures tells a more nuanced story than the insurance-specific numbers. As of mid-2025, Anthropic held 32% of enterprise LLM spend (up from 12% in 2023), while OpenAI had fallen to 25% (down from 50%). In code generation specifically, Claude held 42% market share to OpenAI’s 21%. These enterprise-wide numbers suggest that the insurance industry’s heavy OpenAI tilt is a lagging indicator rather than a structural lock-in.
The Verisk connector announcement matters in this competitive context because it gives Anthropic something OpenAI currently lacks in insurance: direct integration with the industry’s dominant data provider. OpenAI can offer general-purpose capabilities and carrier-specific partnerships, but it cannot offer conversational access to ISO loss costs. For carriers already using Claude or evaluating a multi-model strategy, the Verisk MCP connector provides a tangible capability advantage that was not available 30 days ago.
Data Governance When Regulatory-Grade Analytics Flow Through LLMs
The governance implications of the Verisk-Anthropic integration are substantial and insufficiently discussed in the initial trade press coverage. ISO Indications data is not general-purpose information. It is regulatory-grade actuarial data used in rate filings submitted to state insurance departments. When that data flows through a third-party LLM, it raises questions about data integrity, auditability, and regulatory acceptability that carriers and their appointed actuaries will need to resolve.
Verisk addressed this directly in the press release, with CEO Lee Shavel stating: “Trust is the foundation of insurance, and that doesn’t change as new technologies emerge. Our role is to bring AI into insurance in a way that reflects the realities of the industry, where data must be authoritative, decisions must be explainable, and accountability remains with people.” The company described its approach as governed by its “established data governance framework” with humans remaining “at the center of every decision.”
Mike Ram, Anthropic’s Head of Insurance, reinforced the governance framing: “Insurance is a highly regulated, high-stakes industry, and Verisk has long been a leader for how trusted data and analytics are applied responsibly. By pairing Claude with Verisk’s governed analytics and established controls, this collaboration shows how generative AI can enhance professional decision-making without compromising the rigor and accountability the industry demands.”
The governance challenge is real, though, and extends beyond what either company’s messaging addresses. Grant Thornton’s 2026 AI Impact Survey found that 44% of insurance leaders cite governance and compliance challenges as a primary cause of AI project failure or underperformance, and only 24% said they were very confident they could pass an independent AI governance review within 90 days. When regulatory-grade data flows through an LLM, the audit trail becomes more complex. A pricing actuary citing ISO Indications in a rate filing needs to demonstrate that the data came from an authoritative source and was not altered or hallucinated by the model.
MCP’s architecture partially mitigates this concern. Because the protocol defines a structured client-server relationship, the data returned by the Verisk connector is sourced directly from Verisk’s systems, not generated by the LLM. The model uses the data in its response, but the data itself retains its provenance. Whether state regulators will accept this distinction in practice is an open question. The NAIC’s 12-state AI Evaluation Tool pilot and the evolving ASOP No. 56 guidance on model governance both suggest that carriers will need to document how AI-mediated analytics are validated before they can be used in regulatory submissions.
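One way a carrier could operationalize that documentation burden is to log every connector call alongside a digest of the raw payload, so an actuary citing a figure in a filing can show it arrived from the connector unaltered. This is a hypothetical sketch of such an audit record, not part of the MCP specification or Verisk's connector; the tool name and payload are illustrative:

```python
import hashlib
import json
from datetime import datetime, timezone

def audit_record(tool_name: str, arguments: dict, raw_result: str) -> dict:
    """Build an audit-trail entry for one connector call.

    Records what was requested, and a SHA-256 digest of what the
    connector returned, for later comparison against the figure
    actually cited in a rate filing.
    """
    return {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "tool": tool_name,
        "arguments": arguments,
        "result_sha256": hashlib.sha256(raw_result.encode()).hexdigest(),
    }

entry = audit_record(
    "get_loss_cost_indications",                 # hypothetical tool name
    {"state": "TX", "line": "commercial_auto"},
    '{"indication": 0.062}',                     # illustrative raw payload
)
print(json.dumps(entry, indent=2))
```

A log of such entries is the kind of artifact an independent AI governance review, or a regulator applying ASOP No. 56-style model documentation expectations, could actually inspect.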
Verisk’s own Q1 2026 earnings call provided evidence that governance friction is already affecting AI vendor sales cycles. CEO Lee Shavel noted that the company is “having to spend some more time working our way through these issues” in contract negotiations, referring specifically to AI governance and compliance requirements that extend procurement timelines. The MCP connector may actually ease this friction by embedding Verisk’s governance framework directly into the data delivery mechanism, rather than requiring carriers to build separate governance layers around each AI integration.
Build vs. Buy: Embedded Analytics or Custom Data Pipelines
The Verisk MCP connector sharpens the build-versus-buy question that every carrier AI team is grappling with in 2026. The fundamental choice is whether to build custom data pipelines that feed internal models, or to consume analytics through vendor-provided connectors embedded in commercial LLMs.
The case for embedded analytics is straightforward. Verisk has deployed approximately 40 agentic and generative AI solutions across its product suite, grounded in proprietary data and domain expertise. Building a comparable data infrastructure in-house would require access to the same 34.5 billion statistical records that Verisk has accumulated over five decades. No individual carrier can replicate that data asset, which means the build option for ISO-equivalent analytics does not meaningfully exist. What carriers can build is the integration layer: custom prompts, workflows, and validation frameworks that use Verisk data within their own LLM environments.
The MCP connector lowers the integration cost substantially. Before MCP, a carrier wanting to use Verisk data inside an LLM would need to build a custom API integration, handle authentication and rate limiting, manage data formatting, and validate responses. With a standardized MCP connector, much of that middleware collapses into a configuration step. This shifts carrier AI budgets from integration engineering toward workflow design and governance, where the actuarial value-add is highest.
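To make "collapses into a configuration step" concrete: several MCP hosts register remote servers through an `mcpServers`-style configuration block along these lines. The server name, URL, and header below are purely illustrative placeholders, not Verisk's published connector settings:

```json
{
  "mcpServers": {
    "verisk-underwriting": {
      "url": "https://mcp.example.com/verisk",
      "headers": { "Authorization": "Bearer <carrier-api-token>" }
    }
  }
}
```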
The Datos Insights ILTF 2026 survey provides useful context: 70% of carriers spent under $500,000 on AI projects in the past year, and only 8% believe they are currently ahead of peers. At those investment levels, building custom alternatives to vendor-provided connectors is rarely justified. The MCP connector allows carriers to deploy meaningful AI capabilities at modest incremental cost, particularly for underwriting intelligence use cases where the analytical value is already embedded in the data.
The build path remains relevant for carriers with differentiated data assets. A large personal lines carrier with proprietary telematics data, for example, gains more from building custom LLM integrations around its own data than from consuming ISO averages through a Verisk connector. The MCP standard benefits these carriers too, because the same protocol that delivers Verisk data can also deliver proprietary data through custom-built MCP servers. The protocol is bidirectional in this sense: it standardizes how any data source connects to any MCP-compatible host.
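The steps above can be sketched in code. A production server would use the official MCP SDK rather than hand-rolled dispatch, but the core pattern, registering tools over proprietary data and answering JSON-RPC 2.0 requests, looks roughly like this stdlib-only sketch (the telematics tool and its return value are hypothetical):

```python
import json

TOOLS = {}

def tool(fn):
    """Register a function as a callable tool, keyed by its name."""
    TOOLS[fn.__name__] = fn
    return fn

@tool
def telematics_loss_ratio(state: str) -> dict:
    # Illustrative stand-in for a query against the carrier's own data.
    return {"state": state, "loss_ratio": 0.61}

def handle(request: dict) -> dict:
    """Dispatch one JSON-RPC 2.0 request the way an MCP server would."""
    if request["method"] == "tools/list":
        result = {"tools": [{"name": name} for name in TOOLS]}
    elif request["method"] == "tools/call":
        fn = TOOLS[request["params"]["name"]]
        payload = fn(**request["params"]["arguments"])
        result = {"content": [{"type": "text", "text": json.dumps(payload)}]}
    else:
        return {"jsonrpc": "2.0", "id": request["id"],
                "error": {"code": -32601, "message": "method not found"}}
    return {"jsonrpc": "2.0", "id": request["id"], "result": result}

response = handle({
    "jsonrpc": "2.0", "id": 1, "method": "tools/call",
    "params": {"name": "telematics_loss_ratio",
               "arguments": {"state": "TX"}},
})
print(response)
```

Because the wire protocol is identical, a host that can call the Verisk connector can call a carrier-built server like this one in the same session, which is exactly the bidirectionality the paragraph above describes.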
The broader vendor ecosystem is also adapting. Duck Creek’s five-layer agentic AI platform, Guidewire’s PricingCenter, and EXL’s insurance LLM all represent competing approaches to embedding analytics in carrier workflows. What distinguishes the Verisk-Anthropic approach is the directness of the integration: no intermediary vendor platform, no proprietary middleware, just a standardized connector from the data source to the foundation model. Whether that directness becomes the industry standard or remains one architecture among several will depend on how quickly other vendors adopt MCP and how carriers evaluate the trade-offs between vendor-provided and self-built connector strategies.
Verisk’s AI Product Trajectory: From Modules to Middleware
The MCP connector represents a strategic evolution in Verisk’s AI product approach. In Q1 2026, the company shipped seven new client-facing AI modules and set a target of 25 total releases for the full year. These modules span underwriting and claims, with the augmented underwriting product generating over 20 follow-up meetings from carrier prospects. Aerial imagery revenue grew more than 30% over two years, and digital media forensics added its sixth top-10 carrier customer.
But modules are products delivered through Verisk’s own platform. The MCP connectors are something different: they are middleware that distributes Verisk’s analytics through someone else’s platform. This is a notable strategic shift for a company whose business model has historically depended on being the destination that insurance professionals visit to access data.
Verisk’s Q1 2026 financials underpin the investment capacity for this dual strategy. Revenue reached $783 million (up 4% year over year), with subscription revenue growing 7% on an organic constant currency basis and representing 84% of total revenue. Adjusted EBITDA came in at $438 million with a 55.9% margin. Full-year guidance of $3.19 billion to $3.24 billion in revenue with adjusted EBITDA margins of 56% to 56.5% implies sustained investment in both module development and connector distribution. The company’s excess and surplus lines data contributions now exceed $15 billion in premium, and it added four new carriers to its core lines contributory data program in Q1 alone.
The dual approach, modules for depth and connectors for reach, mirrors the distribution strategies of enterprise software companies that have navigated similar platform transitions. Verisk’s challenge is maintaining data governance and pricing power when its analytics are consumed through a third-party interface rather than its own. If a carrier accesses ISO Indications through Claude rather than Verisk’s portal, Verisk still controls the data and the connector, but it loses some control over the user experience and the workflow context in which its data is consumed. How that affects pricing leverage on multi-year contracts, which Shavel noted average “approximately between 4 and 5 years,” will be a key variable to monitor in coming earnings cycles.
What This Signals for the Next Phase of Insurance AI
The Verisk-Anthropic integration is the first concrete example of a pattern that will likely define insurance AI through 2027: the collapse of standalone vendor tools into analytics layers inside foundation models. Rather than logging into Verisk for loss costs, Guidewire for policy data, AM Best for ratings, and a carrier’s own systems for experience data, the emerging model puts all of these data sources behind MCP connectors that a single LLM can orchestrate.
This has structural implications for how actuarial work gets done. Pricing actuaries who currently maintain separate workflows for data extraction, trend analysis, and filing documentation could consolidate much of that work into a conversational interface where the LLM handles data retrieval and the actuary focuses on judgment, validation, and regulatory interpretation. Reserve actuaries could query loss development patterns from carrier systems and benchmark them against ISO industry data in the same session. Reinsurance pricing teams could pull treaty terms, cat model outputs, and market analytics into a single analytical environment.
None of this is possible today with the Verisk connector alone. ISO Indications and XactRestore are two data sources among many that actuaries use daily. But MCP’s standardized architecture means each additional connector adds multiplicative rather than additive value, because the LLM can synthesize data across all connected sources simultaneously. Anthropic’s Claude directory already lists over 75 connectors, and the open specification ensures that carriers, vendors, and third-party developers can build additional connectors without waiting for Anthropic or Verisk to do so.
Why This Matters for Actuaries
Three specific implications stand out for actuarial professionals evaluating the Verisk MCP announcement.
Workflow consolidation is beginning at the data layer, not the application layer. Previous waves of insurance technology promised to consolidate actuarial workflows by replacing legacy applications with modern platforms. Those efforts largely failed because they required carriers to rip and replace core systems. The MCP approach avoids that problem by leaving existing systems in place and connecting them to a common analytical layer. Actuaries who understand how MCP connectors work, and what data governance requirements they carry, will be better positioned to influence their carriers’ AI vendor strategy.
The governance bar for AI-mediated analytics is rising faster than the tools to meet it. With 44% of insurance leaders citing governance as a primary AI failure cause and 76% unable to demonstrate adequate governance on demand (Grant Thornton), the gap between what carriers want to deploy and what they can defensibly govern remains wide. Actuaries, particularly those with ASOP No. 56 expertise, are the natural bridge between data science teams building these integrations and the regulatory and audit functions that must validate them. The Verisk connector adds a new category of AI-mediated analytics that will need governance frameworks, and actuarial teams should be defining those frameworks now rather than after deployment.
The vendor platform decision has a four-to-five-year tail. Verisk’s multi-year contract structure means carriers choosing to integrate Verisk data through Claude today are making a commitment that extends well into 2030. If the MCP ecosystem evolves as rapidly as the first 18 months suggest, the carriers that establish standardized connector architectures now will have a structural advantage in adding new data sources and analytical capabilities as they become available. Actuaries involved in AI strategy discussions should advocate for open-standard approaches like MCP over proprietary integrations that lock the carrier into a single vendor’s ecosystem.
This continues a trend we’ve been tracking across the Verisk product portfolio: the company is shifting from selling analytics products to selling analytics infrastructure. The MCP connectors are the clearest expression of that shift yet, and the actuarial profession will be the primary user group that determines whether it delivers on its promise.
Sources
- Verisk Brings Its Trusted Analytics and Generative AI Capabilities Directly into Anthropic’s Claude, GlobeNewswire (May 5, 2026) - Press release detailing the two MCP connectors for ISO Indications and XactRestore, including executive quotes from Lee Shavel and Mike Ram.
- Verisk Newsroom: MCP Connectors Announcement (May 5, 2026) - Company newsroom version with additional detail on Verisk’s AI deployment scale and data governance framework.
- Anthropic Finance Agents and MCP Connectors Announcement (May 5, 2026) - Ten new agent templates and eight data connectors for financial services, including Microsoft 365 integration and partner ecosystem details.
- OpenAI Dominates AI Stacks as Insurance Moves from Pilot to Production, The Insurer (May 6, 2026) - IA Capital Group survey of 36 carrier technology leaders showing OpenAI in 90% of stacks and AI production growing from 37% to 61%.
- Datos Insights ILTF 2026: Insurance Leaders Gathered in Boston (April 2026) - Survey of 100+ carrier executives showing AI production rates, spending levels, and document processing as the dominant production use case.
- Grant Thornton 2026 AI Impact Survey: Insurance (March 2026) - Governance readiness data showing 44% cite governance as primary AI failure cause, only 24% confident in passing independent review within 90 days.
- Verisk (VRSK) Q1 2026 Earnings Call Transcript, The Motley Fool (April 29, 2026) - Full transcript including AI governance contracting friction commentary, module release data, and augmented underwriting pipeline metrics.
- Model Context Protocol Specification (November 2025 revision) - Technical specification for MCP including JSON-RPC 2.0 transport, host/client/server architecture, and capability negotiation.
- Enterprise LLM Spend Reaches $8.4B, Menlo Ventures (2025) - Market share data showing Anthropic at 32% enterprise spend versus OpenAI at 25%, with code generation split at 42% vs. 21%.
- Allianz and Anthropic Forge Global Partnership (January 9, 2026) - Allianz’s global deployment of Claude across 156,000 employees in 70 countries, including claims automation and MCP-based data integration.
- Travelers Launches Agentic AI Claim Assistant Developed with OpenAI (February 18, 2026) - Personal auto damage claims voice assistant built on OpenAI Realtime API.
- Verisk ISO Forms, Rules, and Loss Costs Product Page - Background data on 34.5 billion statistical records, 31 lines of business, and regulatory filing infrastructure.