The relationship between actuaries and artificial intelligence has moved well past the “interesting experiment” phase. In a January 2026 article published through the SOA’s career development newsletter, actuary Carlos Arocha described what he called three converging forces reshaping the profession: exponential growth in AI capabilities, increasing data availability, and heightened expectations from regulators and stakeholders for more timely and sophisticated risk insights.

From tracking these developments across the profession, what strikes us most is not the pace of AI adoption itself—it’s how differently AI is being applied across actuarial functions. Pricing teams are using gradient boosting machines alongside traditional GLMs. Reserving actuaries are experimenting with neural networks applied to loss triangles. Generative AI tools are compressing weeks of data preparation into hours. And perhaps most significantly, actuaries are increasingly being asked to serve as AI governance professionals—the people responsible for ensuring that algorithmic decisions in insurance remain fair, transparent, and compliant.

This article surveys how AI and machine learning are actually being used in actuarial departments in 2026, what the governance and regulatory landscape looks like, and where the profession appears to be heading.

The State of AI Adoption in Insurance: By the Numbers

Before diving into specific actuarial applications, it’s worth grounding the conversation in recent adoption data.

The NAIC’s Big Data and Artificial Intelligence Working Group has been surveying insurers by line of business since 2021, and the picture that emerges from these surveys is one of near-universal engagement with AI. According to survey results compiled through 2025, 92% of health insurers, 88% of auto insurers, 70% of home insurers, and 58% of life insurers report current or planned AI usage. Insurance AI spending is expected to grow by more than 25% in 2026, according to industry forecasts.

However, adoption varies dramatically by function. Underwriting and claims—where AI automates document processing and triage—have moved fastest. Adoption in actuarial functions has been more cautious and methodical, which makes sense given the regulatory scrutiny that pricing models and reserve estimates receive.

A survey cited by Knapsack found that 70% of actuaries believe they need to develop new skills in data science and AI to remain competitive, while only 15% believe their jobs are at high risk of becoming obsolete. That gap—high awareness of the need to adapt, low fear of replacement—captures the profession’s current posture well.

How AI Is Being Applied Across Core Actuarial Functions

Pricing and Ratemaking

This is arguably where machine learning has made the deepest inroads into traditional actuarial work. Research published through the SOA and in actuarial journals shows that ML applications such as gradient boosting machines (GBMs) and feed-forward neural networks can outperform traditional GLMs in predictive accuracy, particularly when capturing complex nonlinearities in claims behavior.

In practice, what this looks like varies by line of business. In personal auto insurance, telematics data—driving speed, braking patterns, time of day—generates the kind of high-dimensional, interaction-heavy datasets that ML models handle well. Traditional GLMs struggle to capture all the relevant interactions among dozens of telematic variables, whereas tree-based models like XGBoost and LightGBM can identify complex patterns without explicit feature engineering.
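The interaction problem can be made concrete with a small sketch. The example below is illustrative, not a production rating model: it uses synthetic data with hypothetical telematics features and a made-up cost function, and stands in a main-effects-only linear model for a GLM. When the expected claim cost depends on a pure interaction between two behaviors, the tree-based model recovers it while the main-effects model cannot.

```python
# Illustrative sketch (not a production rating model): a gradient-boosted
# tree model vs. a main-effects-only linear model (standing in for a GLM)
# on synthetic data with a strong feature interaction, the kind telematics
# variables tend to produce. Feature names and the cost rule are made up.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(42)
n = 4000
night_driving = rng.uniform(0, 1, n)   # share of miles driven at night
hard_braking = rng.uniform(0, 1, n)    # hard-braking events per 100 miles

# Expected claim cost spikes only when BOTH behaviors are high:
# a pure interaction that main effects alone cannot represent.
y = 100 + 400 * night_driving * hard_braking + rng.normal(0, 20, n)

X = np.column_stack([night_driving, hard_braking])
X_train, X_test, y_train, y_test = X[:3000], X[3000:], y[:3000], y[3000:]

glm_like = LinearRegression().fit(X_train, y_train)       # main effects only
gbm = GradientBoostingRegressor(random_state=0).fit(X_train, y_train)

mse_glm = mean_squared_error(y_test, glm_like.predict(X_test))
mse_gbm = mean_squared_error(y_test, gbm.predict(X_test))
print(f"linear MSE: {mse_glm:.0f}, GBM MSE: {mse_gbm:.0f}")
```

In practice a GLM would be extended with hand-crafted interaction terms rather than left as pure main effects, which is exactly the feature-engineering burden that grows combinatorially with dozens of telematic variables.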

In P&C commercial lines, insurers are using AI models to analyze loss runs, financial statements, and even satellite imagery alongside traditional rating variables. The result is more granular risk differentiation—though the actuarial challenge of explaining these models to regulators remains significant.

An important nuance worth noting: in most organizations, ML models are supplementing GLMs rather than replacing them outright. A common pattern we’ve observed in industry discussions is the “champion-challenger” approach, where the traditional GLM serves as the production model while ML models run in parallel to identify where the GLM’s predictions diverge most from observed experience. This helps pricing actuaries focus their attention on the segments where the current rates are most likely to be inadequate.
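The champion-challenger comparison described above can be sketched in a few lines. This is a hedged illustration with made-up segment names and premiums, not any insurer's actual workflow: the production GLM's segment-level predictions are compared against a challenger model's, and segments diverging beyond a tolerance are flagged for actuarial review.

```python
# Sketch of a champion-challenger comparison: the production "champion"
# GLM stays in place while a challenger model's predictions are compared
# segment by segment to flag where current rates may be off.
# Segment names and loss-cost figures are invented for illustration.

def flag_divergent_segments(champion, challenger, threshold=0.10):
    """Return (segment, relative difference) pairs where the challenger
    differs from the champion by more than `threshold`, worst first."""
    flagged = []
    for segment, glm_pred in champion.items():
        rel_diff = (challenger[segment] - glm_pred) / glm_pred
        if abs(rel_diff) > threshold:
            flagged.append((segment, round(rel_diff, 3)))
    return sorted(flagged, key=lambda t: abs(t[1]), reverse=True)

champion_glm = {"urban_young": 820.0, "urban_mature": 540.0,
                "rural_young": 610.0, "rural_mature": 430.0}
challenger_gbm = {"urban_young": 980.0, "urban_mature": 555.0,
                  "rural_young": 520.0, "rural_mature": 425.0}

for segment, diff in flag_divergent_segments(champion_glm, challenger_gbm):
    print(f"{segment}: challenger differs by {diff:+.1%}")
```

The output directs attention to the two segments where the models disagree most, which is precisely the triage value the champion-challenger pattern provides.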

RNA Analytics published a practical case study demonstrating this approach using the CAS French motor insurance dataset: by comparing AI-based pricing (using GBMs and neural networks) to current pricing and rejecting the lowest 10% of policies by AI-to-current premium ratio, the portfolio’s loss ratio improved by approximately 5%. The study’s author, a principal actuarial consultant, emphasized that actuaries should not rely solely on AI-generated results but must validate and interpret outcomes using tools like SHAP (SHapley Additive Explanations) for model interpretability.
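The filtering step in that case study is mechanically simple, and can be sketched as follows. The numbers below are synthetic, not the study's French motor data, and are chosen so the effect is visible: policies are ranked by the ratio of AI-indicated premium to current premium, the lowest decile is rejected, and the portfolio loss ratio is recomputed.

```python
# Mechanical sketch of the portfolio-filtering step described above.
# Synthetic tuples of (current premium, AI-indicated premium, incurred
# losses); figures are invented for illustration only.
portfolio = [
    (1000, 600, 1500), (1000, 900, 700), (1000, 950, 650), (1000, 1000, 700),
    (1000, 1050, 720), (1000, 1100, 750), (1000, 980, 680), (1000, 1020, 690),
    (1000, 1150, 800), (1000, 940, 640),
]

def loss_ratio(policies):
    return sum(l for _, _, l in policies) / sum(p for p, _, _ in policies)

def reject_lowest_decile(policies):
    ranked = sorted(policies, key=lambda t: t[1] / t[0])  # AI-to-current ratio
    return ranked[len(ranked) // 10:]

before = loss_ratio(portfolio)
after = loss_ratio(reject_lowest_decile(portfolio))
print(f"loss ratio: {before:.1%} -> {after:.1%}")
```

The interesting actuarial work is not this arithmetic but validating that the AI premium is itself credible, which is where the SHAP analysis the study's author emphasized comes in.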

Reserving and IBNR Estimation

Reserving has been slower than pricing to adopt ML, and for good reason: reserve estimates carry direct financial statement implications, and regulators expect actuaries to be able to explain their methods clearly. The chain ladder method and its variants have persisted precisely because they're well understood by actuaries, auditors, and regulators alike.

That said, machine learning is making meaningful contributions to reserving in several ways.

At the aggregate level, the chainladder-python library (maintained by the CAS open-source community) provides a scikit-learn-compatible framework for traditional reserving methods. The related Tryangle framework extends this by applying ML optimization techniques to automatically select optimal development factors and blend between multiple IBNR models, minimizing reserve prediction error. This represents a practical middle ground: traditional actuarial methods enhanced by algorithmic optimization rather than replaced by black-box models.
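To ground what the library automates, here is a from-scratch illustration of the basic chain ladder mechanics that chainladder-python wraps in a scikit-learn-style API: volume-weighted development factors computed from a small synthetic paid-loss triangle, then applied to project each accident year to ultimate. The numbers are made up, and the library itself offers far more (tail fitting, stochastic methods, diagnostics).

```python
# From-scratch chain ladder on a tiny synthetic paid-loss triangle.
# Rows = accident years, columns = development ages; figures are invented.
triangle = [
    [1000, 1800, 2100, 2200],
    [1100, 2000, 2350],
    [1200, 2150],
    [1300],
]

# Volume-weighted link ratios: sum of column k+1 over sum of column k,
# using only the accident years where both ages are observed.
n_dev = len(triangle[0])
factors = []
for k in range(n_dev - 1):
    rows = [r for r in triangle if len(r) > k + 1]
    factors.append(sum(r[k + 1] for r in rows) / sum(r[k] for r in rows))

# Project each incomplete accident year to ultimate; the reserve is the
# projected ultimate less paid-to-date.
ultimates, reserves = [], []
for row in triangle:
    ult = row[-1]
    for f in factors[len(row) - 1:]:
        ult *= f
    ultimates.append(ult)
    reserves.append(ult - row[-1])

print("factors:", [round(f, 3) for f in factors])
print("ultimates:", [round(u) for u in ultimates],
      "total reserve:", round(sum(reserves)))
```

Everything judgmental in practice (excluding distorted link ratios, selecting a tail factor, adjusting for large losses) sits on top of this skeleton, which is exactly the layer frameworks like Tryangle try to optimize algorithmically.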

At the individual claim level, researchers have proposed micro-level reserving approaches using neural networks and gradient boosting to model individual claim development. These models incorporate claim-specific features—line of business, injury type, claimant age—that aggregate triangle methods cannot capture. Published research has shown that these approaches can provide more accurate predictions for non-homogeneous portfolios, though the computational and governance complexity is substantially higher.

Generative AI is also beginning to affect reserving workflows, primarily by automating the data preparation that precedes the analysis itself. As documented in V7 Labs' 2026 guide to generative AI in insurance, data gathering and cleaning consume an estimated 60–80% of actuarial time on most projects. AI agents that can extract claims data from financial statements, regulatory filings, and policy documents—and deliver it in a structured format ready for modeling—are compressing what used to be weeks of work into days.

The practical implication for reserving actuaries is that AI is not replacing the judgment calls (which development pattern to select, whether to adjust for large losses, how to handle exposure changes), but it is dramatically reducing the time spent on everything that precedes those judgment calls.

Underwriting and Risk Selection

Underwriting is the actuarial-adjacent function where AI has arguably had the most visible impact. Machine learning algorithms now screen applications, validate documents for completeness and consistency, and assess risk profiles using a broader set of data points than traditional underwriting processes could handle.

The Actuaries Institute published a detailed case study in July 2025 examining how a life insurer was implementing ML algorithms across the underwriting process: using virtual assistants to guide applicants, supervised and unsupervised learning to screen documents and flag inconsistencies, and predictive algorithms to assess coverage suitability based on personal information, financial circumstances, and credit history.

The actuarial role in these initiatives is evolving from pure technical modeling toward strategic oversight. The case study emphasized that actuaries bring critical value in three areas: communicating complex strategic plans to ensure operational alignment, leading risk assessments that identify data privacy, bias, and regulatory compliance concerns, and designing validation frameworks for algorithmic underwriting decisions.

Claims Processing and Fraud Detection

Claims is where AI’s document-processing capabilities have the most immediate application. Natural language processing models can interpret claims notes, extract key information from policy documents, and flag patterns consistent with fraudulent activity. Predictive AI can analyze images—of damaged vehicles, properties, or medical documentation—to assess claim severity and identify inconsistencies.

For actuaries specifically, AI-driven claims analytics create a feedback loop that improves pricing and reserving. When claims triage is automated and more granular data is captured at each stage of the claims lifecycle, actuaries gain access to richer datasets for experience studies, trend analysis, and model calibration.

The Governance and Regulatory Landscape

If there’s one area where the actuarial profession’s existing strengths align most directly with AI adoption challenges, it’s governance. Actuaries have been managing model risk, documenting assumptions, and submitting to regulatory review for decades. AI governance is, in many ways, an extension of that existing discipline—but with new technical complexities and heightened public scrutiny.

The NAIC Model Bulletin and State Adoption

The most significant U.S. regulatory development for AI in insurance has been the NAIC’s Model Bulletin on the Use of Artificial Intelligence Systems by Insurers, adopted in December 2023. The bulletin requires insurers to develop, implement, and maintain a written AI program covering governance, risk management, internal controls, and third-party vendor oversight.

As of late 2025, over half of all states and Washington, D.C. had adopted the NAIC Model Bulletin in full or substantially similar form. The NAIC is also developing an AI Systems Evaluation Tool—essentially a standardized questionnaire and checklist for market conduct examinations—with pilot programs expected in early 2026.

Several states have gone further with their own frameworks. New York’s DFS Circular Letter 2024-7 requires insurers to demonstrate that AI and external data systems do not proxy for protected classes. Colorado’s C.R.S. §10-3-1104.9 prohibits predictive models that result in unfair discrimination, with implementation delayed to June 2026. California has taken a distinct approach emphasizing transparency and training data disclosure.

The direction of travel is clear: regulators expect documented governance, bias testing, explainability, and human oversight for any AI system that affects consumer outcomes. For actuaries, this means that the ability to explain how a model works—and to demonstrate that it doesn’t produce unfairly discriminatory results—is becoming as important as the model’s predictive accuracy.

The IAA Governance Framework

At the international level, the International Actuarial Association’s (IAA) Artificial Intelligence Task Force has proposed a governance framework emphasizing the entire model lifecycle: design, data sourcing, validation, monitoring, and decommissioning. The framework reflects lessons from traditional actuarial control cycles adapted for AI’s unique challenges, and places “human-in-the-loop” oversight at its center.

The IAA has also produced comparative analyses of AI governance approaches across multiple jurisdictions—including the U.S., EU, UK, Canada, and Australia—recognizing that actuaries increasingly work across regulatory boundaries and need to navigate divergent requirements.

The EU AI Act

For actuaries working in or with European markets, the EU AI Act adds another layer of compliance. The Act classifies AI systems by risk level, with insurance pricing and underwriting applications likely falling into the “high-risk” category. By August 2026, companies will need to comply with specific transparency requirements and rules for high-risk AI systems, though the European Commission has signaled a possible extension to December 2027 amid industry readiness concerns.

Where Actuaries Fit: From Modelers to Strategic AI Leaders

Perhaps the most consequential development isn’t any specific AI application—it’s how the actuarial role itself is evolving.

The SOA’s January 2026 article on AI transformation in actuarial science called for actuaries to transition from traditional modeling toward “strategic data leadership.” The EY analysis of actuaries in an AI-enhanced world identified a similar shift: enhanced analytical capabilities enabled by AI, increased automation of manual and repetitive tasks, and growing demand for skills at the intersection of actuarial science, data science, and AI ethics.

From patterns we’ve observed in industry discussions and job postings, this evolution is manifesting in several concrete ways:

Actuaries as model validators. As insurers deploy more ML models in production, they need professionals who can assess whether those models are statistically sound, actuarially appropriate, and compliant with regulatory standards. Actuaries’ existing training in model construction, assumption-setting, and regulatory compliance makes them natural fits for this role.

Actuaries as AI governance leads. The NAIC Model Bulletin and state-level regulations require documented oversight structures for AI systems. Many insurers are placing actuaries in governance roles that bridge the gap between data science teams (who build the models) and compliance teams (who manage regulatory requirements).

Actuaries as translators. AI models can be technically impressive but commercially useless if stakeholders don’t trust or understand them. Actuaries are increasingly valued for their ability to translate between technical model output and business decisions—explaining to underwriters why the model recommends a different price, or to regulators why the model doesn’t produce unfairly discriminatory outcomes.

Actuaries as prompt engineers and AI workflow designers. A more recent development: actuaries are learning to use generative AI tools effectively for their specific workflows. This includes writing prompts that produce useful assumption memos, designing AI-assisted data extraction pipelines, and evaluating whether AI-generated analysis meets professional standards.

The Skills Gap and How to Close It

The profession’s response to the AI shift has been substantive. The SOA’s Predictive Analytics Certificate Program—now available through PD Edge+ as of January 2026—provides structured professional development in predictive modeling and data analytics. The CAS expanded MAS-I and MAS-II to four sittings per year starting in 2026. Georgia State University launched an interdisciplinary Master’s program blending actuarial science with AI and information systems.

For working actuaries looking to upskill, the practical starting points are:

Python fluency. Not optional anymore for actuaries who want to work with AI tools. Libraries like scikit-learn (for ML modeling), chainladder-python (for reserving), and SHAP (for model explainability) are the core toolkit. See our detailed guide: Getting Started with Python as an Actuary.

Understanding model explainability tools. SHAP values have become the industry standard for interpreting ML model predictions. Understanding how to generate, read, and communicate SHAP analyses is increasingly expected for actuaries working with ML-augmented pricing or underwriting models.

AI governance frameworks. Familiarity with the NAIC Model Bulletin, the IAA’s governance framework, and applicable state/international regulations is essential for actuaries in supervisory or consulting roles.

Generative AI literacy. Knowing how to use LLMs effectively for actuarial tasks—and understanding their limitations (hallucination, inconsistency, lack of actuarial judgment)—is a practical skill that saves time without compromising professional standards.
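For intuition on the SHAP idea mentioned above: SHAP approximates Shapley values efficiently for large models, but for a toy model the exact Shapley value of each feature can be computed by brute force, averaging its marginal contribution over every feature ordering. The "model," features, and baseline below are made up purely for illustration.

```python
# Exact Shapley values for a toy model, by brute-force enumeration of
# feature orderings. This is the quantity SHAP approximates at scale.
# The scoring rule, feature names, and baseline are invented.
from itertools import permutations

FEATURES = ["age", "mileage", "prior_claims"]

def model(present, x, baseline):
    """Toy risk score; features not in `present` are held at baseline."""
    v = {f: (x[f] if f in present else baseline[f]) for f in FEATURES}
    # Made-up rule with an interaction between mileage and prior claims.
    return (0.5 * v["age"] + 2.0 * v["mileage"] + 3.0 * v["prior_claims"]
            + 1.5 * v["mileage"] * v["prior_claims"])

def shapley_values(x, baseline):
    phi = {f: 0.0 for f in FEATURES}
    orderings = list(permutations(FEATURES))
    for order in orderings:
        present = set()
        for f in order:
            before = model(present, x, baseline)
            present.add(f)
            phi[f] += model(present, x, baseline) - before
    return {f: phi[f] / len(orderings) for f in FEATURES}

x = {"age": 2.0, "mileage": 1.0, "prior_claims": 1.0}
baseline = {"age": 0.0, "mileage": 0.0, "prior_claims": 0.0}
phi = shapley_values(x, baseline)
print({f: round(v, 3) for f, v in phi.items()})
```

Two properties make this attractive for regulatory conversations: the attributions sum exactly to the difference between the prediction and the baseline, and interaction effects are split fairly between the participating features (here, the mileage-claims interaction is shared equally between those two). Communicating those properties, not producing the numbers, is where actuarial skill enters.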

What’s Coming Next

Several developments in the pipeline will shape how actuaries interact with AI over the next 12–24 months.

The NAIC’s AI Systems Evaluation Tool pilot programs, expected in early 2026, will provide the first standardized framework for how regulators examine insurers’ AI governance in practice. The results will likely influence whether the NAIC moves toward a comprehensive AI model law—a question the Big Data Working Group discussed extensively throughout 2025.

Agentic AI—autonomous systems that can execute complex multi-step workflows without human intervention at each step—is moving from concept to early implementation in insurance. For actuaries, this raises important questions about where human judgment remains essential in processes that AI can technically handle end-to-end.

The convergence of climate modeling and AI is creating demand for actuaries who can build and validate ML models for catastrophe risk, parametric insurance pricing, and dynamic exposure assessment—areas where traditional actuarial methods face genuine limitations.

And the open-source actuarial Python ecosystem continues to grow, with projects like chainladder-python, lifelib, Tryangle, and GEMAct lowering the barrier for actuaries to experiment with AI-enhanced workflows in their own practice areas.

The fundamental message from tracking these trends is consistent: AI is not replacing actuarial judgment, but it is dramatically raising the bar for what “actuarial judgment” means. The actuaries who thrive will be those who can deploy AI tools skillfully, govern them responsibly, and explain them clearly—while maintaining the professional skepticism and ethical commitment that have always defined the profession.