Between April 2025 and the first quarter of 2026, the legal ground under every issued insurance AI patent shifted. The Federal Circuit’s decision in Recentive Analytics, Inc. v. Fox Corp. gave district courts a clean template for knocking out machine learning claims at the motion-to-dismiss stage. The Supreme Court declined to revisit the question in December 2025. The USPTO, under new leadership, rescinded large portions of the Biden-era AI inventorship guidance on November 28, 2025, then issued an advance notice revising the Manual of Patent Examining Procedure on December 5, 2025. First office actions applying the revised framework are landing now.

Tracking USPTO filings across the top 20 insurance carriers and vendors over the past 18 months, we have identified a clear pattern: claims that tie AI model outputs to specific actuarial calculations, such as loss development or credibility weighting, fare better under the new standard than broad “apply ML to insurance” claims. This article lays out what changed, what it means for patents already issued, and how to read exposure at a portfolio level. A companion piece covering carrier- and vendor-specific enforceability analysis will follow in this series.

What actually changed in late 2025

Patent eligibility under 35 U.S.C. § 101 has been contested territory since the Supreme Court’s 2014 decision in Alice Corp. v. CLS Bank. The two-step Alice/Mayo framework asks, first, whether a claim is directed to a judicial exception such as an abstract idea, and second, whether the claim contains an inventive concept that transforms the exception into a patent-eligible application. For more than a decade, AI and machine learning claims have sat uneasily inside this framework, with outcomes varying sharply based on how claims were drafted and how sympathetic the forum was to software patents.

Three developments compressed into roughly six weeks reset the landscape.

First, on November 28, 2025, the USPTO under new leadership formally rescinded the February 2024 Inventorship Guidance for AI-Assisted Inventions, which had permitted broader acceptance of patent claims where an AI system played a significant role in conception. The rescission reframes AI systems as tools only, and declines to establish a separate eligibility standard for AI-assisted inventions.

Second, on December 5, 2025, Deputy Commissioner for Patents Charles Kim issued an advance notice of changes to the MPEP incorporating the Patent Trial and Appeal Board’s September 26, 2025 decision in Ex parte Desjardins, which the Office designated precedential on November 4, 2025. The Desjardins revisions to MPEP sections 2106.04(d), 2106.05(a), and 2106.05(f) push examiners toward finding eligibility where the claim as a whole reflects a technological improvement, even if the specification does not expressly label it as such. That is the pro-patent half of the reset.

Third, and cutting the other way, on December 8, 2025, the Supreme Court denied certiorari in Recentive Analytics, Inc. v. Fox Corp., leaving in place the Federal Circuit’s April 2025 holding that generic applications of machine learning to new data environments are ineligible under § 101. The restrictive force lives at the Federal Circuit, not at the Office.

The net result is a bifurcated system. At the USPTO, new applications with careful claim drafting have better odds than they did a year ago. At the Federal Circuit, already-issued patents with broad “apply ML to insurance” claims just lost most of their litigation value.

The Recentive Analytics precedent

The operative opinion runs to 19 pages and is worth reading in full for anyone whose balance sheet touches AI patent value. Recentive Analytics, Inc. v. Fox Corp., 134 F.4th 1205 (Fed. Cir. 2025) involved four patents directed to using machine learning to optimize television broadcast schedules and network maps. Claim 1 of the ’367 patent recited collecting event parameters and target features, iteratively training a model to identify relationships, generating a schedule, and updating the schedule in real time as inputs changed.

That claim structure (collect data, train a model, generate an output, update with new data) describes a substantial fraction of the insurance AI patents issued between 2018 and 2024. The Federal Circuit found it insufficient.

Under Alice step one, the panel held the claims were directed to abstract ideas of producing network maps and event schedules using known generic mathematical techniques. The court emphasized that Recentive had conceded its patents did not claim improvements to machine learning itself but merely applied existing methods to a new environment. Under step two, the court found no inventive concept because the machine learning techniques were broad, functionally described, well-known techniques implemented on generic computing equipment.

The holding most cited by litigators is narrow on its face: patents that do no more than claim the application of generic machine learning to new data environments, without disclosing improvements to the machine learning models to be applied, are patent ineligible under § 101. The practical reach is far broader. Four follow-on moves by the panel matter for insurance patents specifically.

The court rejected the argument that iterative training on updated data creates eligibility, treating such training as incident to the very nature of machine learning. It reaffirmed that an abstract idea does not become nonabstract by limiting the environment, foreclosing the common insurance argument that applying ML to underwriting or claims is itself the inventive step. It held that performing a previously human task with greater speed and efficiency does not confer eligibility, which undercuts specification language framed around efficiency gains. And it faulted the absence of a “how” (the lack of described mechanisms for achieving the claimed improvements), inviting the functional-claiming attack that now dominates early § 101 motions.

The Supreme Court’s December 2025 cert denial made all of this durable law. Insurance AI patent holders cannot wait for a corrective decision from above.

The mental process and improvement divide

The August 2025 memorandum from the Office, sometimes referred to as the Squires memo after new USPTO Director John Squires, issued important guardrails on how examiners apply § 101. Those guardrails survived the November and December guidance changes and now define the pro-patent side of the reset.

First, the August 2025 memo reminds examiners that the mental process grouping of abstract ideas applies only to steps that can practically be performed in the human mind. Claims that require training a neural network, updating billions of parameters, or processing data volumes beyond human capacity should not be characterized as mental processes. For insurance AI claims that tie model outputs to granular actuarial inputs (policy-level exposure data, claim-level development triangles, sub-segmented mortality tables), this is a meaningful shield.

Second, the memo draws a line between claims that recite a judicial exception and claims that merely involve one. Claims involving mathematical concepts without expressly reciting them may not require a Step 2A/2B analysis at all.

Third, the December 5, 2025 MPEP revisions drawn from Ex parte Desjardins clarify that a specification need not explicitly label an invention as an improvement, so long as the improvement would be apparent to a person of ordinary skill. The revisions add examples of claims directed to improved machine learning training that protects prior knowledge while learning new tasks, and claims achieving performance enhancements through parameter adjustments in task-based models.

For insurance AI, the divide is now sharp. Claims grounded in concrete technical mechanisms (specific training methodologies, data structures, model architectures, constraint formulations) sit on the Desjardins side of the line. Claims grounded in the business purpose of the system (predicting claim severity, optimizing premium) sit on the Recentive side. Drafting choices made in 2020 and 2021 often did not anticipate this divide, which is why a meaningful slice of issued insurance AI patents now looks different through a 2026 lens than it did when it was filed.

The four channels of post-grant exposure

An issued patent does not get re-examined for § 101 compliance automatically. Exposure for already-issued insurance AI patents flows through four distinct channels, each with its own cost profile, timeline, and strategic posture.

Channel 1: continuation practice. Many insurance AI patent families include pending continuations. Those continuations go through examination under the current MPEP. Examiners with access to both the Squires memo and the Recentive opinion will scrutinize broadly claimed machine learning applications more aggressively than 2020-era examiners did. Patent owners with portfolios built on broad parent claims often find that the continuations that would have extended those claims cannot clear § 101 under the new framework. The portfolio stops growing where it matters.

Channel 2: reissue and reexamination. A patent owner can seek reissue to correct defects, and third parties can request ex parte reexamination to challenge claims against printed prior art. Reissue is often the quiet vehicle for tightening § 101 exposure by narrowing claims to match what the specification actually teaches. Reexamination is a cheaper route for accused infringers to force claim narrowing without the cost of a district court case.

Channel 3: PTAB proceedings. Inter partes review and post-grant review let third parties challenge issued patents at the Board. IPR is limited to anticipation and obviousness grounds, but post-grant review, available for the first nine months after issuance, covers § 101 directly. For insurance AI patents issued after late 2024, PGR is now a live threat. For older patents, IPR combined with district court § 101 motions is the parallel-track strategy that Venable and other defense-side firms are recommending.

Channel 4: district court § 101 motions at the pleading stage. This is the channel that changed most dramatically after Recentive. The Federal Circuit has now approved § 101 challenges presented as early motions to dismiss even on machine learning claims. An accused infringer facing an insurance AI patent suit no longer needs to invest in fact discovery to make a § 101 argument stick. A well-drafted motion to dismiss, anchored in Recentive, can terminate the case before the patent owner’s expected damages model ever gets scheduled.

The four channels compound. A patent that survives Channel 1 can still be invalidated through Channel 4. A portfolio that looks strong on the examination side can still lose enforcement leverage in litigation. For carriers, vendors, and insurtechs holding AI patent portfolios, exposure analysis must run across all four channels simultaneously.

Exposure tiers: a framework

Based on the claim architectures we see recurring across insurance AI patents, a four-tier exposure framework captures most of the portfolio analysis we have run since the Recentive opinion dropped. The tiers describe claim characteristics, not company names. The companion piece to this article applies these tiers to specific carriers and vendors.

Tier 1, high exposure. Claims that recite collecting insurance data, training a generic machine learning model, and outputting a prediction (risk score, claim severity estimate, fraud flag) without specifying model architecture improvements, training methodology improvements, or data structure improvements. These claims map almost exactly onto the Recentive fact pattern, and the claim patterns recurring in the 2018-2022 insurtech filing wave put a meaningful share of those patents in this tier. Defensive value remains; offensive litigation value is substantially reduced.

Tier 2, medium exposure. Claims that recite a specific technical mechanism (a particular training technique, a model architecture adaptation, a feature engineering step) but describe it at a level of generality that a motion-to-dismiss court can characterize as functional. These claims survive or fail based on specification support and on the forum. Drafting quality, prosecution history, and the strength of the technical improvement narrative in the specification drive the outcome.

Tier 3, lower exposure. Claims tied to specific, non-generic computing improvements: claims that reduce memory usage in a specific way, claims that enable training on data volumes a human could not process, claims that produce a measurable improvement in a concrete technical metric documented in the specification. These claims look like Ex parte Desjardins and Ex parte Allen. They are defensible on both the examination and litigation sides.

Tier 4, most defensible. Claims that integrate AI model outputs with specific actuarial calculations or regulated workflows (loss development triangles, credibility weighting, rate filing inputs, reserve calculations under a specific methodology). The tie to a regulated actuarial or accounting mechanism provides a concrete application that courts have historically treated more favorably than business-purpose framing. The specification typically also anchors the claim in technical improvements to the underlying model or pipeline.
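To make the Tier 4 distinction concrete, here is a minimal sketch of one of the regulated actuarial calculations named above: limited-fluctuation credibility weighting, which blends observed experience with a prior estimate using a credibility factor Z. The function names and the 1,082-claim full-credibility standard are illustrative assumptions (a common textbook value), not drawn from any patent claim discussed here.

```python
import math

def credibility_factor(n_claims: int, full_credibility_n: int = 1082) -> float:
    """Classical limited-fluctuation credibility: Z = sqrt(n / n_full), capped at 1.

    The default 1,082-claim full-credibility standard is a common textbook
    value (roughly: 90% confidence that observed frequency is within 5% of
    expected); real filings would justify their own standard.
    """
    return min(1.0, math.sqrt(n_claims / full_credibility_n))

def credibility_weighted_estimate(observed: float, prior: float, n_claims: int) -> float:
    """Blend observed experience with a prior: Z * observed + (1 - Z) * prior."""
    z = credibility_factor(n_claims)
    return z * observed + (1.0 - z) * prior
```

A claim that routes a model output into a named calculation at this level of specificity is easier to defend than one that merely recites “predicting claim severity,” which is the business-purpose framing that lands a claim in Tier 1.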

The pattern across tiers is consistent: enforceability tracks specificity, not breadth. The broadest claims, the ones that looked most valuable in 2020, are the ones most vulnerable in 2026.
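For portfolio triage at scale, the tier logic above can be expressed as a simple decision rule. The sketch below is a hypothetical heuristic, not legal analysis: the flag names are our own illustrative simplifications, and real triage requires reading each claim against its specification and prosecution history.

```python
from dataclasses import dataclass

@dataclass
class ClaimProfile:
    """Hypothetical, simplified flags describing a patent claim's characteristics."""
    recites_model_improvement: bool   # claims a specific architecture/training change
    concrete_technical_metric: bool   # specification documents a quantified gain
    tied_to_actuarial_workflow: bool  # output feeds a named actuarial/regulated calculation

def exposure_tier(c: ClaimProfile) -> int:
    """Map a claim profile to the four-tier framework (1 = highest exposure)."""
    if c.tied_to_actuarial_workflow:
        return 4  # most defensible: concrete regulated application
    if c.recites_model_improvement and c.concrete_technical_metric:
        return 3  # lower exposure: Desjardins-style documented technical improvement
    if c.recites_model_improvement:
        return 2  # medium: mechanism claimed but outcome turns on spec support and forum
    return 1      # high exposure: the Recentive fact pattern
```

The ordering encodes the article’s core point: specificity, not breadth, drives the tier, so the actuarial-workflow tie is checked first.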

Claim drafting strategies that survive

For new applications and for continuations off existing families, the post-Recentive playbook is relatively clear. Drafting choices that increase the odds of both examination allowance and post-grant durability cluster around five moves.

First, anchor improvements in the model itself. Claims should describe a specific way the machine learning model is modified, trained, or structured that a skilled artisan could distinguish from conventional practice. Characterizing the model as generic or suitable is now actively harmful; specifications should emphasize the modifications necessary for operability.

Second, tie outputs to regulated or technical workflows with specificity. For insurance applications, this means naming the actuarial calculation, the regulatory filing, or the system component the output feeds into, and describing how that integration differs from prior practice.

Third, include quantitative improvement data in the specification. The PTAB’s emphasis in Desjardins and in Ex parte Holtmann-Rice on memory usage, accuracy, and scalability improvements signals that concrete metrics help at Step 2A Prong 2. Subject Matter Eligibility Declarations under 37 CFR § 1.132, addressed in the USPTO’s December 4, 2025 memoranda, provide a formal vehicle for submitting this evidence.

Fourth, avoid efficiency framing as the primary value proposition. The Recentive panel was emphatic that doing a human task faster does not make a claim eligible. Speed and efficiency gains can appear in the specification as secondary benefits, but the primary improvement narrative should run through technical mechanisms.

Fifth, distinguish “recite” from “involve” in the claim language itself. Claims that involve a mathematical concept as part of a broader technical method read more favorably than claims that recite the mathematical concept as the core inventive step. This is a drafting discipline rather than a legal argument, and it is visible to examiners from the first office action.

What to watch in 2026

Three developments are worth tracking for the remainder of the year.

The first is Federal Circuit follow-on cases applying Recentive. The opinion left open whether enhanced accuracy, efficiency, or scalability can qualify as an eligibility-conferring improvement, and what magnitude of improvement is required. Panels over the next 12 months will begin to fill in those blanks, and each decision has direct implications for insurance AI portfolios.

The second is the first generation of insurance AI § 101 motions in district court. Publicly filed motions, briefing, and decisions are starting to appear. The specific claim language courts find dispositive will refine the exposure framework above and will feed into M&A diligence checklists.

The third is congressional action. The Patent Eligibility Restoration Act has been introduced in the 119th Congress and would abrogate the Alice/Mayo framework in favor of a statutory list of ineligible categories. Whether it advances in any form in 2026 will determine whether the Recentive framework hardens into the durable rule for the decade or becomes a transitional artifact.

For insurance carriers, vendors, and the insurtech wave that filed heavily between 2018 and 2022, the practical work for 2026 is portfolio triage. Sorting existing patents into the four exposure tiers, identifying which claims can be tightened through reissue or continuation, and recalibrating litigation strategy to account for the motion-to-dismiss risk are all tasks that cannot wait.

A company-by-company breakdown of how the new framework affects the enforceability of specific carrier and vendor portfolios, including AIG, Travelers, EXL, Quantiphi/Dociphi, and the 2018-2022 insurtech wave, is covered in the companion piece to this article, Insurance AI Patent Enforceability After Recentive: Company-by-Company Analysis (forthcoming in this series).

Sources

  1. Federal Circuit, Recentive Analytics, Inc. v. Fox Corp., No. 2023-2437 (Fed. Cir. Apr. 18, 2025) (opinion, 134 F.4th 1205).
  2. USPTO, Updates to Subject Matter Eligibility Guidance in the MPEP (Dec. 5, 2025).
  3. USPTO, Memorandum: Reminders on Evaluating Subject Matter Eligibility of Claims under 35 U.S.C. 101 (Aug. 4, 2025).
  4. USPTO, Subject Matter Eligibility Guidance Hub (current).
  5. Venable LLP, The § 101 Reset for 2026: New USPTO Guidance on AI Eligibility and When Early Motions Matter (Dec. 2025).
  6. Greenberg Traurig, Federal Circuit: Machine Learning Patents Ineligible in Recentive Analytics, Inc. v. Fox Corp. (Apr. 2025).
  7. Cleary Gottlieb, Recentive Analytics v. Fox Corp.: A Case of First Impression on Machine Learning and § 101 (May 2025).
  8. Mintz, Recentive Analytics v. Fox: The Federal Circuit Provides Analysis on Patent Eligibility of Machine Learning Claims (May 2025).
  9. Bracewell, Recentive v. Fox: Machine-Learning Claims Fail to Make the Grade (2025).
  10. Sterne Kessler, 2025 Federal Circuit IP Appeals: Recentive Analytics (Feb. 2026).
  11. Holland & Knight, Top Section 101 Patent Eligibility Stories of 2025 (Dec. 2025).
  12. Fish & Richardson, Federal Circuit Clarifies Limits of Patent Eligibility for Machine Learning Claims (May 2025).
  13. Dykema, AI and Software Patents in 2025: New Leadership and § 101 Eligibility Guidance (Feb. 2026).
  14. Morgan Lewis, PTAB Signals New Trends Favoring Patent Owners, Reduces Section 101 Hurdles for AI Inventions (Oct. 2025).
  15. Sterne Kessler, Navigating § 101 Rejections in AI and ML Patent Applications (2024).
  16. Congressional Research Service, Patent-Eligible Subject Matter Reform: An Overview (Jan. 2026).
  17. USPTO, PTAB Precedential and Informative Decisions (current, including Ex parte Desjardins).
