Having analyzed early cyber insurance pricing methodologies a decade ago, we find the parallels to AI liability striking: both markets began with zero credible loss data and leaned heavily on scenario-based actuarial judgment before claims experience accumulated. On May 5, 2026, Y Combinator-backed Corgi launched a dedicated AI liability insurance product covering biased algorithms, hallucination-driven losses, training data misuse, adversarial attacks, and autonomous system failures. One day later, the company closed a $160 million Series B led by TCV at a $1.3 billion valuation, making it Y Combinator's latest unicorn. The speed of that capital deployment signals that investors see AI model risk as a balance-sheet problem large enough to sustain a standalone insurance line. Whether the actuarial math supports that line at this scale is the question that will determine whether AI liability insurance follows cyber insurance's growth trajectory or something less favorable.

This article dissects three dimensions of that question. First, the product architecture: what Corgi actually covers, how it layers onto existing technology E&O, and where it fits in the emerging competitive landscape alongside Munich Re, Armilla, and the Verisk ISO exclusion dynamic. Second, the actuarial pricing challenge: how carriers price a peril class with no credible loss triangles, which exposure bases are being tested, and what the cyber insurance origin story reveals about pricing cycles ahead. Third, the demand catalyst: the EU AI Act's August 2, 2026 high-risk compliance deadline for life and health insurance AI systems, the NAIC's evolving evaluation framework, and the regulatory pressure that could accelerate adoption far faster than organic market forces alone.

Corgi's Product Architecture: Full-Stack Carrier With a Modular AI Layer

Corgi is not a managing general agent distributing through a fronting carrier. It is a licensed insurance carrier that received regulatory approval in July 2025 and writes policies on its own paper. That distinction matters for AI liability because it means Corgi controls underwriting authority, claims adjudication, and reserve methodology without requiring approval from a capacity provider. When the loss development on AI liability claims inevitably surprises in one direction, Corgi's actuarial team sets the reserve adjustments directly rather than negotiating with a fronting partner.

Founded in 2024 by CEO Nico Laqua, previously the founder of gaming publisher Basket with over 200 million monthly active users, and COO Emily Yuan, a former product manager at OpenAI, the company participated in Y Combinator's Summer 2024 batch. The founding team's combination of consumer-scale software experience and AI product development background positioned the company to build underwriting infrastructure that treats AI risk as a native exposure class rather than an afterthought endorsement.

What the AI Liability Product Covers

Corgi's AI liability coverage operates as a modular add-on to its existing Technology Errors and Omissions (Tech E&O) policy. Rather than writing a standalone AI liability form, the company uses affirmative language within the E&O structure that explicitly addresses AI-driven perils. The product covers six risk categories:

  • Model performance and hallucination: Legal defense and damages when an AI model produces inaccurate or fabricated outputs that cause financial harm to a third party. This is the highest-frequency exposure category, given that courts have already imposed monetary sanctions exceeding $10,000 in at least five cases involving hallucinated legal citations, with over 125 filings containing fabricated case references identified in early 2025 alone.
  • Algorithmic bias: Claims arising from discriminatory outcomes in hiring, lending, healthcare, or insurance underwriting decisions driven by AI systems. The regulatory surface area for bias claims is expanding rapidly, with over 35 state bar associations and multiple federal courts now mandating disclosure of AI use.
  • Training data disputes: Intellectual property claims related to copyrighted or proprietary material used in model training. The loss severity benchmark here is substantial: Anthropic settled copyright claims for $1.5 billion, and Universal Music filed a $3.1 billion lawsuit in January 2026.
  • Adversarial attacks and model theft: Cyber-adjacent coverage for prompt injection, data poisoning, and exfiltration of model weights or training datasets.
  • Synthetic media liability: Claims from deepfake-generated content, voice cloning, or unauthorized digital identity replication.
  • Autonomous system failures: Liability when agentic AI systems operating with limited human oversight cause damage through cascading tool-calling errors or unauthorized actions.

The modular structure is significant from a pricing standpoint. Customers can adjust coverage through a self-service dashboard, selecting which AI risk categories to include and at what limits. This creates a rating-plan design challenge: the actuary must price each module independently while accounting for the correlation between modules. A company deploying customer-facing agentic AI is simultaneously exposed to hallucination claims, bias claims, and autonomous system failures, and those exposures are not independent.
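
To make the correlation point concrete, the sketch below compares a standard-deviation risk load on the summed modules under independence versus under an assumed pairwise correlation. All dollar amounts, the correlation value, and the risk-load multiplier are hypothetical illustrations, not Corgi's actual rating parameters.

```python
import math

# Hypothetical expected losses and standard deviations for three AI modules
# bought together by an insured running customer-facing agentic AI.
modules = {
    "hallucination":      {"mean": 120_000, "sd":  90_000},
    "bias":               {"mean":  60_000, "sd":  80_000},
    "autonomous_failure": {"mean":  40_000, "sd": 100_000},
}
names = list(modules)

# Assumed pairwise correlation: all three perils share a root cause
# (the same deployed model), so losses are far from independent.
rho = 0.6

total_mean = sum(m["mean"] for m in modules.values())

# Aggregate variance: sum of variances, plus correlation cross-terms.
var_indep = sum(m["sd"] ** 2 for m in modules.values())
var_corr = var_indep + sum(
    2 * rho * modules[a]["sd"] * modules[b]["sd"]
    for i, a in enumerate(names)
    for b in names[i + 1:]
)

# Standard-deviation risk load: premium = mean + k * sd.
k = 0.25
prem_indep = total_mean + k * math.sqrt(var_indep)
prem_corr = total_mean + k * math.sqrt(var_corr)
print(f"risk-loaded premium, independence: {prem_indep:,.0f}")
print(f"risk-loaded premium, rho = 0.6:    {prem_corr:,.0f}")
```

Under these assumptions the correlated premium runs roughly 7% above the independence premium; the gap widens with the correlation assumption and with the risk-load multiplier, which is exactly why module-by-module pricing needs an explicit aggregate adjustment.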

The $268 Million Funding Context

The $160 million Series B brings Corgi's total funding to over $268 million. The round was led by TCV, with participation from 18 additional investors including Kindred Ventures, Oliver Jung, Leblon Capital, Repeat VC, and Alumni Ventures. The valuation doubled from $630 million at the Series A to $1.3 billion in roughly four months. Corgi achieved $40 million in annual recurring revenue within its first year of operations after receiving its carrier license, according to Glitchwire's reporting.

The capital deployment plan extends beyond AI liability. Corgi began with property management insurance and is expanding into trucking, payroll, and small business coverage. The AI-native underwriting platform generates quotes in under ten minutes with same-day binding, compared to the two-to-four-week timelines common at traditional carriers. That speed advantage comes from analyzing thousands of data points per submission, a workflow design that works precisely because the company built its technology stack after the availability of large language models rather than retrofitting legacy systems.

Competitive Landscape: From ISO Exclusions to Affirmative Coverage

Corgi enters a market that is actively bifurcating. On one side, traditional carriers are carving AI risk out of their commercial general liability books. Verisk's ISO generative AI exclusion endorsements took effect January 1, 2026, and have been adopted at an unusually fast pace, with most state approvals clearing in thirty to sixty days. On the other side, a growing set of specialty markets are writing affirmative AI coverage with distinct product architectures.

Tracking the Verisk exclusion cadence and the affirmative coverage response this year, we see four distinct product models competing for AI liability premium:

| Product Model | Representative Carrier | Coverage Trigger | Pricing Approach |
| --- | --- | --- | --- |
| Performance guarantee | Munich Re insureAI | Contractual performance shortfall | Surety/warranty pricing, model-audit-based |
| Cyber-adjacent AI add-on | Coalition | AI-specific tort liability | AI load factor on cyber base rate |
| Standalone AI warranty | Armilla (Lloyd's/Chaucer) | Performance below benchmarks | Independent audit scoring, up to $25M limits |
| Modular Tech E&O integration | Corgi | Third-party AI failure claims | Module-specific rating with correlation adjustment |

Corgi's modular E&O integration approach is closest to the Vouch model, which also embeds AI coverage into tech E&O rather than writing it standalone. The key difference is scale: Corgi is now capitalized at a level that allows it to write meaningfully larger limits and absorb adverse development on its own balance sheet, while Vouch and most MGA competitors depend on reinsurance capacity that may tighten if early AI liability loss experience is worse than expected.

The competitive dynamic is also shaped by what legacy carriers are choosing not to write. An EY survey found that 64% of companies with annual revenue above $1 billion have already lost more than $1 million to AI failures. Yet traditional carriers continue retreating from AI exposure through exclusions rather than pricing it affirmatively. That retreat creates a coverage vacuum that Corgi and its competitors are filling, but it also concentrates AI liability risk in a small number of carriers and MGAs with limited loss history on which to calibrate reserves.

The Actuarial Pricing Problem: Building Rating Plans on Scenario Judgment

The core challenge for any actuary pricing AI liability in 2026 is the absence of credible loss data. Generative AI in its current commercial form is roughly three years old. Enterprise deployment is younger. The claim reporting cycle has not produced enough development to construct reliable paid or incurred loss triangles. This is not a data quality problem that better collection will solve in the near term; it is a structural absence that forces the pricing actuary to rely on a combination of surrogate data, expert elicitation, and scenario modeling.

What Cyber Insurance Taught Us About Pricing the Unknown

The closest actuarial precedent is the early cyber insurance market. AIG wrote the first internet security liability policy in 1997, and for the next decade, carriers priced cyber risk with essentially no actuarial loss data. The American Academy of Actuaries has documented how early cyber pricing relied on competitor rate benchmarking, industry surveys such as the Computer Security Institute Crime and Security Survey, and carrier judgment rather than experience-rated credibility models.

The cyber insurance trajectory offers both encouragement and caution for AI liability pricing:

| Dimension | Early Cyber Insurance (1997-2010) | AI Liability Insurance (2024-2026) |
| --- | --- | --- |
| Loss data at launch | Zero credible triangles | Zero credible triangles |
| Initial pricing method | Competitor benchmarking, judgment | Scenario modeling, model audits, judgment |
| Early premium volume | Under $1B through mid-2000s | Emerging; standalone market nascent |
| Current market size | $15.3B globally (2024) | Too early to measure separately |
| Initial underpricing | Significant; correction in 2020-2022 | Expected based on precedent |
| Regulatory catalyst | State breach notification laws (2003+) | EU AI Act (August 2026), NAIC evaluation tool |
| Key loss severity driver | Ransomware, data breach notification | IP litigation, regulatory defense costs |

The cyber market grew to $15.3 billion in global premiums by 2024, but it took nearly two decades and a painful repricing cycle. Between 2020 and 2022, cyber insurance premiums spiked by over 30% annually as ransomware losses overwhelmed initial rate assumptions. The Federal Reserve Bank of Chicago identified the absence of standardized actuarial tables and scoring systems as a persistent structural challenge. AI liability insurance faces the same structural gap today, with the added complexity that AI failure modes are less well-defined than data breach or ransomware events.

Exposure Bases Under Development

Traditional casualty exposure bases like payroll, revenue, and unit count translate poorly to AI risk. A $10 million revenue company running all customer interactions through an agentic AI chatbot has a fundamentally different exposure profile than a same-revenue company using AI only for internal document summarization. The emerging AI liability market is testing several alternative exposure bases:

  • Inference volume: Total model calls or tokens processed during the policy period. Scales naturally with AI adoption but requires metering infrastructure that many insureds lack.
  • Deployment surface: Customer-facing versus employee-facing versus internal-only use. The ratio between these surfaces drives expected claim frequency from third-party losses.
  • Model capability tier: Rating plans are beginning to differentiate by foundation model, recognizing that larger, more capable models produce more confident and potentially more consequential outputs. Public benchmarks such as Vectara's Hallucination Leaderboard and Stanford HELM provide loss-frequency proxies.
  • Regulated industry factor: Healthcare, financial services, and legal deployments carry incremental exposure from regulated advice liability. Most rating plans in development include an explicit industry multiplier.
  • Governance attestation: Coalition's approach of scaling the AI load factor based on the insured's governance maturity, including whether the insured maintains a model inventory, conducts adversarial testing, and enforces human review for customer-facing outputs.
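
A minimal sketch of how these exposure bases could combine into a module premium. Every rate, multiplier, and credit below is a hypothetical placeholder for illustration, not a filed rating plan or any carrier's actual factors.

```python
# Hypothetical rating-factor sketch combining the exposure bases above:
# inference volume as the base, with deployment-surface, industry, and
# governance-attestation modifiers. All values are illustrative assumptions.

BASE_RATE_PER_1M_CALLS = 450.0  # assumed rate per million inference calls

DEPLOYMENT_SURFACE = {"internal": 0.5, "employee_facing": 1.0, "customer_facing": 2.2}
INDUSTRY_MULTIPLIER = {"general": 1.0, "financial": 1.6, "legal": 1.7, "healthcare": 1.8}
GOVERNANCE_CREDIT = {            # attestation -> premium credit
    "model_inventory": 0.05,
    "adversarial_testing": 0.10,
    "human_review": 0.10,
}

def ai_module_premium(annual_calls_millions, surface, industry, attestations):
    """Indicated premium for one AI module under the illustrative plan."""
    rate = BASE_RATE_PER_1M_CALLS * annual_calls_millions
    rate *= DEPLOYMENT_SURFACE[surface]
    rate *= INDUSTRY_MULTIPLIER[industry]
    credit = sum(GOVERNANCE_CREDIT[a] for a in attestations)
    return rate * (1 - min(credit, 0.25))  # cap the total governance credit

premium = ai_module_premium(
    annual_calls_millions=120,
    surface="customer_facing",
    industry="healthcare",
    attestations=["model_inventory", "human_review"],
)
print(f"indicated AI module premium: ${premium:,.0f}")
```

Note the design choice: governance attestations enter as multiplicative credits with a cap, mirroring Coalition's reported approach of scaling the AI load by governance maturity rather than treating governance as a binary eligibility gate.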

The rating-plan design problem is compounded by correlation risk. An insured deploying autonomous AI agents in a regulated industry faces simultaneous exposure across hallucination, bias, and autonomous system failure categories. Pricing each module independently and summing understates the aggregate exposure. A reliability analysis cited in Glitchwire demonstrated this compounding effect: an AI agent with 85% per-step reliability achieves only 20% end-to-end success across a ten-step workflow, meaning the failure probability compounds far faster than linear exposure models predict.
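
The compounding arithmetic behind that reliability figure is easy to verify, and it also shows how demanding the per-step requirement becomes for longer workflows:

```python
# Per-step reliability compounds multiplicatively across an agent workflow.
def end_to_end_success(per_step_reliability: float, steps: int) -> float:
    return per_step_reliability ** steps

p = end_to_end_success(0.85, 10)
print(f"10-step success rate at 85% per step: {p:.1%}")   # ≈ 19.7%

# Inverting: per-step reliability needed to hit 90% end-to-end over 10 steps.
required = 0.90 ** (1 / 10)
print(f"per-step reliability needed: {required:.2%}")      # ≈ 98.95%
```

The nonlinearity is the pricing point: a modest-sounding improvement in per-step reliability (85% to 99%) is the difference between a 20% and a 90% end-to-end success rate, so linear exposure loads on "AI usage" miss most of the risk differentiation.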

Reserve Methodology: Analogy-Based and Scenario-Driven

With no AI-specific loss triangles, reserve actuaries are adapting two approaches from adjacent lines:

Analogy-based reserving maps AI liability onto the development patterns of the most similar existing lines. Cyber liability is the primary analogue for adversarial attacks and data-related claims. Professional liability (particularly medical malpractice and legal malpractice) provides development patterns for hallucination and regulated advice claims, where the reporting lag between the AI-generated error and the discovery of harm can extend several years. IP litigation development patterns inform reserves for training data disputes, where cases like the Universal Music $3.1 billion suit suggest severity distributions with extremely heavy tails.
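
A sketch of the analogy-based approach: blend age-to-age development factors from the analogue lines using judgmental weights. The factors and weights below are hypothetical stand-ins, not published development patterns for any of these lines.

```python
# Illustrative age-to-age loss development factors (12-24, 24-36, 36-48 months)
# for three analogue lines, and a judgmental weighting for a book skewed
# toward hallucination and training-data IP claims. All values hypothetical.
analogue_ldfs = {
    "cyber":         [1.60, 1.15, 1.05],
    "professional":  [2.10, 1.40, 1.15],
    "ip_litigation": [2.60, 1.70, 1.30],
}
weights = {"cyber": 0.3, "professional": 0.4, "ip_litigation": 0.3}

# Weighted blend at each development age.
blended = [
    sum(weights[line] * ldfs[i] for line, ldfs in analogue_ldfs.items())
    for i in range(3)
]

# Cumulative factor from 12 months to 48 months (ignoring any tail factor).
cum = 1.0
for f in reversed(blended):
    cum *= f

print("blended age-to-age factors:", [round(f, 3) for f in blended])
print(f"12-to-48-month cumulative LDF: {cum:.2f}")
```

As the article notes, the choice of analogue lines and the weights between them dominate the answer: shifting 20 points of weight from cyber to IP litigation moves the cumulative factor materially, which is why that weighting assumption deserves explicit sensitivity testing.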

Scenario-based reserving constructs explicit loss scenarios with probability-weighted outcomes. Five scenarios that actuaries pricing AI liability should be stress-testing include:

  1. Mass hallucination event: A foundation model update introduces a systematic hallucination pattern affecting thousands of enterprise customers simultaneously, generating correlated claims across the portfolio.
  2. Regulatory cascade: An EU AI Act enforcement action against one insured triggers copycat investigations by multiple state regulators, inflating defense costs across the book.
  3. Training data class action: A landmark IP ruling establishes that fine-tuning on copyrighted data constitutes infringement, creating retroactive exposure for every insured that fine-tuned models on proprietary datasets.
  4. Autonomous agent failure: An agentic AI system executing multi-step workflows causes financial harm through unauthorized transactions, with liability contested between the AI vendor, the deploying enterprise, and the insurer.
  5. Bias amplification at scale: An AI system used in insurance underwriting or lending produces discriminatory outcomes across millions of decisions before detection, creating class-action exposure with severity in the hundreds of millions.

Each of these scenarios has plausible loss severity ranging from tens of millions to several billion dollars. The actuarial challenge is assigning meaningful probability weights when neither frequency nor severity has been calibrated against actual claims experience. This is where the ASOP No. 56 framework for model governance becomes directly relevant: the scenario models themselves are actuarial models that require documentation, validation, and sensitivity testing under the standard.
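
The mechanics of a probability-weighted scenario load are simple; the hard part is the inputs. The probabilities and book-level severities below are illustrative assumptions only, included to show the calculation shape that ASOP No. 56 would then require to be documented and sensitivity-tested.

```python
# Probability-weighted scenario load for the five stress scenarios above.
# Annual probabilities and conditional severities are hypothetical.
scenarios = [  # (annual probability, severity to this book, name)
    (0.08,  40e6, "mass hallucination event"),
    (0.10,  15e6, "regulatory cascade"),
    (0.03, 300e6, "training data class action"),
    (0.06,  25e6, "autonomous agent failure"),
    (0.02, 150e6, "bias amplification at scale"),
]

expected = sum(p * sev for p, sev, _ in scenarios)
print(f"probability-weighted scenario load: ${expected / 1e6:.1f}M")
for p, sev, name in sorted(scenarios, key=lambda s: s[0] * s[1], reverse=True):
    print(f"  {name}: {p:.0%} x ${sev / 1e6:.0f}M = ${p * sev / 1e6:.1f}M")
```

Even in this toy parameterization, the training-data class action dominates the load despite being the least likely scenario, which previews the tail-credibility problem discussed below in the loss-frequency section.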

The EU AI Act Demand Catalyst

The organic market for AI liability insurance would likely grow slowly without regulatory pressure. The EU AI Act changes that calculus substantially. Under Annex III of the Act, AI systems used for risk assessment and pricing in life and health insurance are explicitly classified as high-risk. This classification triggers seven compliance obligations under Articles 9 through 15, including mandatory risk management systems, data governance requirements, technical documentation, human oversight provisions, and conformity assessments.

The compliance deadline was originally August 2, 2026. The European Commission proposed deferring it to December 2027 through the Digital Omnibus on AI, but the second political trilogue failed on April 28, 2026, leaving the August 2, 2026 deadline legally in force. A delay remains possible but is no longer certain, and organizations that deferred compliance preparations are now scrambling.

The implications for AI liability insurance demand are direct:

  • Compliance cost insurance: Companies deploying high-risk AI systems need coverage for regulatory defense costs if their conformity assessments are challenged. Defense costs for investigating a single regulatory inquiry under emerging AI laws often exceed the fine amounts themselves.
  • Bias testing mandate: Article 10(5) creates a narrow exception allowing providers of high-risk AI systems to process special category data for bias testing, but the testing itself creates a documented record that plaintiffs' attorneys can subpoena if the results show disparate impact.
  • Cross-border complexity: U.S. insurers and reinsurers writing coverage for companies with EU operations face dual-jurisdiction compliance requirements. The emerging compliance actuary role sits at the intersection of actuarial reserving and regulatory attestation.

The NAIC is building a parallel framework domestically. The Model Bulletin on the Use of Artificial Intelligence Systems by Insurers, adopted in December 2023, has now been adopted by over half of all U.S. states. The Big Data and Artificial Intelligence Working Group's AI Systems Evaluation Tool is in a twelve-state pilot running January through September 2026, designed to help regulators assess carrier AI governance during market conduct examinations. As we analyzed in our coverage of the carrier AI audit-readiness gap, Grant Thornton's 2026 survey found that only 24% of insurers could pass an independent AI governance review. That governance deficit creates demand for both AI liability coverage and the governance consulting that precedes it.

Loss Frequency Signals: Early Claims Data Points

While credible loss triangles do not exist, early signal data on AI-related claims and litigation is beginning to accumulate. These data points do not constitute actuarial loss experience, but they provide the frequency and severity indicators that pricing actuaries need for initial parameterization:

| Loss Category | Data Point | Source |
| --- | --- | --- |
| Hallucinated legal citations | 125+ filings with fabricated case references identified in early 2025; courts imposed $10,000+ sanctions in at least 5 cases | ComplianceHub, court records |
| AI copyright litigation | 164+ active AI copyright cases tracked as of 2026 | AI Lawsuit Tracker |
| Training data IP settlements | Anthropic: $1.5B settlement; Universal Music: $3.1B lawsuit filed January 2026 | Court filings |
| Enterprise AI failure losses | 64% of companies with $1B+ revenue report $1M+ losses from AI failures | EY survey 2025 |
| Medical device AI recalls | 1,357 FDA-authorized AI devices; 60 involved in 182 recalls | FDA database |
| AI defamation claims | Multiple lawsuits filed against Meta, OpenAI for chatbot-generated false statements about named individuals | Court filings, Damien Charlotin database |

The severity distribution is already showing a bimodal pattern. Most AI-related claims to date involve relatively modest defense costs and sanctions in the tens of thousands. But the IP litigation tail extends into the billions. Pricing actuaries must decide how much credibility to assign to the tail events: if training data IP litigation produces several multi-billion-dollar verdicts, the loss ratio on current AI liability pricing could be catastrophic, regardless of how accurately the frequency component was estimated.
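
A two-point severity mixture makes the bimodal problem concrete. The tail probability below is an arbitrary assumption; the point is how completely the tail dominates the expected loss even at a one-in-a-thousand weight.

```python
# Two-point severity mixture: frequent small sanctions plus a rare
# settlement-scale IP tail. p_tail is an assumed value for illustration.
p_tail = 0.001          # assumed probability a claim is a tail IP event
sev_routine = 50_000    # typical defense costs / sanctions (tens of thousands)
sev_tail = 1.5e9        # Anthropic-settlement-scale severity

mean_sev = (1 - p_tail) * sev_routine + p_tail * sev_tail
tail_share = p_tail * sev_tail / mean_sev
print(f"mean severity: ${mean_sev:,.0f}")
print(f"share of expected loss from the tail: {tail_share:.0%}")
```

With these inputs, a one-in-a-thousand tail event accounts for roughly 97% of expected severity. The credibility assigned to p_tail, a parameter with essentially no claims experience behind it, therefore drives the rate more than everything else in the frequency model combined.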

What Differentiates Corgi's Position

Several structural factors distinguish Corgi from competitors writing AI liability coverage:

Carrier status versus MGA dependency. Armilla writes through Lloyd's syndicates. Coalition operates as an MGA backed by Swiss Re. Munich Re's insureAI is a reinsurance-backed performance guarantee. Corgi writes on its own paper as a licensed carrier. In a hardening market or after adverse loss development, the MGA model faces capacity withdrawal risk that a carrier does not. Corgi's $268 million in total funding provides a capital cushion, though it remains thin relative to the potential severity of AI liability claims.

AI-native underwriting infrastructure. Because Corgi built its technology stack after the availability of large language models, its underwriting workflow can natively ingest AI governance attestations, model inventories, and benchmark scores as rating variables. Traditional carriers retrofitting AI liability onto legacy underwriting systems face integration friction that extends their time to market.

Startup customer concentration. Corgi's customer base skews toward AI-native startups, which means the portfolio's AI exposure is not a peripheral risk but the central underwriting proposition. This creates both an advantage (portfolio-level AI expertise) and a concentration risk (all insureds are deploying AI in production, with correlated exposure to model failures).

Speed-to-quote as a selection tool. The ten-minute quoting and same-day binding workflow attracts the startup market segment that traditional carriers serve slowly. This speed advantage doubles as a selection mechanism: companies seeking fast coverage may be those deploying AI aggressively and facing imminent compliance deadlines, which skews the book toward higher-risk insureds unless the underwriting model adjusts for selection effects.

The Cycle Ahead: Lessons From Cyber's Repricing

If AI liability insurance follows the cyber insurance trajectory, the market ahead will include a period of initial underpricing, rapid premium growth, a loss-driven correction, and eventual stabilization. The cyber market experienced this cycle visibly: premiums grew steadily through the 2010s as carriers competed for share with limited loss experience, then spiked 30% or more annually between 2020 and 2022 as ransomware losses overwhelmed initial rate assumptions. Swiss Re documented the subsequent growth slowdown to roughly 5% from 2022 to 2025 as pricing normalized.

Three dynamics suggest the AI liability correction cycle could be faster than cyber's:

  1. Regulatory forcing function: Cyber insurance demand grew organically from breach notification laws passed state by state starting in 2003. AI liability demand has a single, definitive catalyst: the EU AI Act's high-risk deadline, which affects every company deploying AI in life and health insurance across the European Union simultaneously. A regulatory-driven demand spike compresses the growth phase and accelerates the timeline to the first meaningful claims experience.
  2. Correlated loss exposure: Cyber losses are partially correlated (a zero-day vulnerability affects many organizations), but AI losses may be more deeply correlated because many enterprises rely on the same small set of foundation models. If OpenAI, Anthropic, or Google updates a model in a way that introduces systematic errors, losses propagate across the insured portfolio simultaneously. Our analysis of OpenAI's 90% concentration in carrier AI stacks documented this vendor concentration risk in the insurance industry specifically.
  3. Severity tail uncertainty: Cyber's worst-case loss scenarios are bounded by the financial value of data and ransom demands. AI liability's worst-case scenarios include IP litigation with multi-billion-dollar verdicts, class-action bias claims affecting millions of automated decisions, and autonomous system failures with bodily injury potential. The right tail of the AI liability severity distribution is wider and less well-characterized than cyber's was at the comparable market stage.
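
The common-shock dynamic in the second point can be sketched with a small Monte Carlo: in most years insureds generate independent claims, but in a "bad model update" year the whole portfolio's claim probability jumps at once. All parameters are illustrative assumptions.

```python
import random

# Monte Carlo sketch of foundation-model concentration risk: many insureds
# share one model, so a systematic defect produces correlated claims.
random.seed(0)
N_INSUREDS = 500
P_IDIOSYNCRATIC = 0.02     # per-insured annual claim probability, normal years
P_MODEL_SHOCK = 0.05       # probability the shared model ships a systematic defect
P_CLAIM_GIVEN_SHOCK = 0.40 # per-insured claim probability in a shock year
SEVERITY = 250_000         # assumed flat severity per claim

def annual_portfolio_loss():
    shock = random.random() < P_MODEL_SHOCK
    p = P_CLAIM_GIVEN_SHOCK if shock else P_IDIOSYNCRATIC
    claims = sum(random.random() < p for _ in range(N_INSUREDS))
    return claims * SEVERITY

losses = sorted(annual_portfolio_loss() for _ in range(10_000))
mean = sum(losses) / len(losses)
pct_99 = losses[int(0.99 * len(losses))]
print(f"mean annual loss: ${mean / 1e6:.1f}M")
print(f"99th percentile:  ${pct_99 / 1e6:.1f}M")
```

The 99th-percentile year comes out roughly an order of magnitude above the mean, because the tail of the portfolio distribution is entirely shock years. A carrier pricing to the mean while reinsuring or capitalizing to that tail is implicitly taking a view on foundation-model release quality, not just on its own insureds.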

Why This Matters for Actuaries

Corgi's $1.3 billion valuation validates a market thesis: AI model risk is an insurable peril class with sufficient demand to sustain a standalone product line. For practicing actuaries, the implications span pricing, reserving, and governance:

Pricing actuaries working on technology E&O or commercial general liability books need to understand how the Verisk ISO exclusion interacts with their existing coverage forms. If the exclusion is attached, the AI exposure is removed from the CGL book, but the underlying loss potential migrates to standalone AI liability products or remains uninsured. Rating plans for the CGL book should reflect the reduced exposure, and competitive analysis should track the affirmative coverage products absorbing the excluded risk.

Reserving actuaries at carriers writing AI liability, whether standalone or embedded in tech E&O, face the immediate challenge of selecting development patterns for a line with no historical loss data. Analogy-based methods using cyber, professional liability, and IP litigation development patterns are the most defensible starting point, but the selection of analogue lines and the weighting between them will drive reserve adequacy more than any other assumption.

Enterprise risk actuaries should be evaluating their own organization's AI deployment exposure against the coverage available. The NAIC's evolving agentic AI governance framework will increasingly require carriers to demonstrate that their own AI systems meet the same governance standards they evaluate in their insureds. An actuary at a carrier using agentic AI for underwriting decisions needs to understand both sides of the AI liability equation: the risk they are insuring and the risk they are creating.

Patterns we have seen in recent pricing cycles suggest that the first carriers to build credible AI loss databases will have a durable competitive advantage, much as the early cyber insurers that invested in proprietary breach databases outperformed competitors relying on industry-level data. Corgi's full-stack carrier model, with direct access to claims data from policy inception, is designed to build that advantage from day one.
