From tracking the evolution of actuarial modeling tools over the past several years, one shift stands out as defining the profession's trajectory more than any other: the rapid integration of predictive analytics into virtually every facet of insurance underwriting. What began decades ago with generalized linear models fitted in SAS and Emblem has accelerated into an era of gradient-boosted decision trees, deep neural networks, and (as of 2025) agentic AI platforms that can autonomously evaluate submissions, score risk appetite, and recommend pricing adjustments in real time.

The scale of this transformation is striking. A recent survey found that 77% of insurers are at least piloting AI initiatives, up 16 percentage points from the prior year. The global AI-for-insurance market reached an estimated $10.3 billion in 2025, growing at a 33% annual rate, and is projected to reach $35.8 billion by 2029. McKinsey's July 2025 report on AI in insurance found that the small cohort of insurers that have successfully scaled AI across their operations generated 6.1 times the total shareholder return of laggards over five years, a performance spread wider than in most other industries.

Yet this transformation is far from simple. Twenty-four states have now adopted the NAIC's Model Bulletin on the Use of AI Systems by Insurers, and Colorado's landmark AI Act (with its requirements for bias testing, impact assessments, and consumer disclosures) is set to take effect by mid-2026. Both the SOA and CAS have embedded predictive analytics deeply into their credentialing pathways, with the CAS making its new Property and Casualty Predictive Analytics (PCPA) requirement mandatory for all ACAS candidates as of January 1, 2026.

This article examines the current state of predictive analytics in insurance underwriting, the foundational and emerging modeling techniques actuaries must understand, the regulatory landscape governing algorithmic decision-making, the InsurTech platforms reshaping underwriting workflows, and the career implications for actuaries at every level of the profession.

The Foundation: GLMs and Their Enduring Role in Insurance Pricing

Any discussion of predictive analytics in insurance must begin with generalized linear models. GLMs have been the workhorse of actuarial pricing since the 1990s and remain the most widely deployed modeling framework in the industry today. Their enduring popularity reflects a combination of statistical rigor, interpretability, and regulatory acceptance that newer techniques have yet to fully replicate.

In a standard P&C pricing application, actuaries build separate GLMs for claim frequency and claim severity. A Poisson GLM with a log link typically models claim counts as a function of rating variables (territory, driver age, vehicle type, coverage tier, and so on) while a Gamma GLM handles average severity. The product of predicted frequency and predicted severity yields the pure premium, which after loading for expenses, profit margin, and contingency provisions becomes the technical rate. This frequency-severity framework, formalized in the CAS's Monograph No. 5 on Generalized Linear Models for Insurance Rating, remains the standard approach across personal auto, homeowners, workers' compensation, and commercial lines.
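To make the frequency-severity arithmetic concrete, here is a minimal sketch of how predicted frequency, predicted severity, and loadings combine under the multiplicative log-link structure. Every number below (coefficients, rating factors, expense and profit provisions) is hypothetical and purely illustrative, not drawn from any real filing:

```python
import math

# Hypothetical fitted GLM coefficients on the log scale; illustrative only.
freq_coefs = {"intercept": math.log(0.08), "urban_territory": 0.25, "young_driver": 0.40}
sev_coefs = {"intercept": math.log(3500.0), "urban_territory": 0.10, "young_driver": -0.05}

def predict(coefs, risk_features):
    """Exponentiate the linear predictor of a log-link GLM."""
    eta = coefs["intercept"] + sum(coefs[f] for f in risk_features if f in coefs)
    return math.exp(eta)

risk = {"urban_territory", "young_driver"}
frequency = predict(freq_coefs, risk)   # expected claim count per exposure
severity = predict(sev_coefs, risk)     # expected cost per claim
pure_premium = frequency * severity

# Load for expenses and profit via the standard permissible-loss-ratio
# formula, here assuming a 25% expense ratio and 5% profit provision.
technical_rate = pure_premium / (1 - 0.25 - 0.05)
```

In practice the coefficients would come from separately fitted Poisson and Gamma GLMs (via statsmodels, R's glm, or a commercial pricing platform), and the loadings from the expense and profit analysis; only the multiplicative structure shown here carries over directly.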

Patterns we have observed across recent regulatory filings and industry publications suggest that GLMs will remain dominant for rate filings for at least the next several years. Regulators in most jurisdictions still expect filed rates to be supported by models whose factor relativities can be expressed as explicit multiplicative relationships, something GLMs provide naturally through their exponential link functions. A GLM coefficient of 0.15 for a particular territory translates directly to a 16.2% surcharge relative to the base level, making it straightforward for regulators, underwriters, and consumers to understand why a specific premium was charged.
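The arithmetic behind that 16.2% figure is worth seeing once: under a log link, an additive coefficient becomes a multiplicative relativity through exponentiation.

```python
import math

coef = 0.15                              # GLM coefficient on the log scale
relativity = math.exp(coef)              # multiplicative factor vs. base level
surcharge_pct = (relativity - 1) * 100   # percentage surcharge, approx. 16.2
```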

That said, the limitations of pure GLMs are well documented. Effects enter only through the linear predictor, so relationships are assumed linear on the scale of the link function (additive in log-space under a log link); capturing interactions and nonlinear effects requires manual feature engineering; and high-dimensional data can overwhelm the framework. These limitations have driven the adoption of more flexible techniques, but rather than replacing GLMs, these methods increasingly complement them.

Beyond GLMs: Machine Learning Enters the Underwriting Toolkit

The past five years have seen a rapid expansion of the actuarial modeling toolkit beyond traditional GLMs. Gradient-boosted decision trees (GBMs), implemented through libraries like XGBoost, LightGBM, and CatBoost, have become the dominant machine learning technique in insurance pricing and risk selection. Random forests, generalized additive models (GAMs), elastic net regularized GLMs, and neural networks round out the suite of methods that a modern pricing or underwriting actuary is expected to understand.

Research from the German Actuarial Association's Committee for Actuarial Data Science, which benchmarked multiple modeling approaches on a large French motor liability portfolio, provides instructive findings. Their analysis found that while standard GLMs produced the weakest predictive performance, GAMs, which extend GLMs to capture nonlinear relationships while preserving additivity, performed remarkably well and rivaled the predictive accuracy of neural networks with far less implementation complexity. GBM variants, particularly CatBoost and LightGBM, delivered the strongest overall predictive power, though the gains over well-constructed GAMs were modest.

This research illuminates a pattern we have seen repeatedly in actuarial practice: the marginal predictive improvement from the most complex machine learning models is often smaller than expected, particularly when the comparison model is a thoughtfully constructed GLM with appropriate interaction terms and nonlinear transformations. Where machine learning methods truly excel is in automated feature discovery: identifying interactions and nonlinear patterns that would take an actuary months to find through manual exploration.

In practice, many insurers are adopting hybrid approaches. A common architecture uses gradient-boosted models to discover variable interactions and nonlinear effects, which are then incorporated into a GLM that serves as the filed rating model. This "GBM-informs-GLM" workflow captures much of the predictive benefit of machine learning while preserving the interpretability and regulatory acceptance of the GLM framework. The Combined Actuarial Neural Net (CANN) approach, which nests a traditional GLM within a neural network architecture, represents a more sophisticated version of this philosophy, allowing the neural network to learn residual patterns that the GLM misses while maintaining the GLM's core structure.

Telematics and Real-Time Underwriting: The Data Revolution

Predictive analytics is only as powerful as the data feeding the models, and the data landscape available to insurers has transformed dramatically. Telematics (the use of GPS, accelerometers, and onboard diagnostics to monitor real-time driving behavior) represents the most consequential data innovation in personal lines underwriting since credit-based insurance scores.

By 2024, more than 21 million U.S. policyholders were sharing telematics data with their insurers, reflecting a 28% compound annual growth rate since 2018, according to the IoT Insurance Observatory. The global insurance telematics market was valued at approximately $4.5 billion in 2024 and is projected to grow at a 22% CAGR through 2030. Survey data indicates that 14.4% of personal lines motor policies globally now incorporate telematics, while 60% of policyholders expressed willingness to switch to a telematics-based plan when clear benefits were communicated.

The underwriting implications are profound. Traditional auto rating relies on proxy variables (age, territory, credit score, prior claims history) that correlate with risk but cannot measure actual driving behavior. Telematics allows insurers to observe hard braking frequency, cornering acceleration, time-of-day driving patterns, phone usage while driving, and total miles traveled. Progressive's Snapshot program, Allstate's Drivewise platform, and direct-from-vehicle integrations with manufacturers like Tesla and General Motors represent the leading edge of this approach.

For actuaries, telematics data creates both opportunities and challenges. The opportunity lies in dramatically improved risk segmentation: safe drivers who are penalized by traditional rating factors can be identified and offered competitive rates, while risky drivers are priced more accurately. The challenge lies in the sheer volume and complexity of the data. A single vehicle can generate gigabytes of sensor data per month, requiring sophisticated data engineering pipelines and feature extraction methods before the data becomes usable in pricing models.
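The feature-extraction step can be sketched simply: raw per-trip sensor streams are reduced to summary behavioral features before any pricing model sees them. The trip records, threshold, and feature names below are all hypothetical, and real pipelines operate on far richer signals (GPS traces, accelerometer axes, phone-handling events) at much larger scale:

```python
# Hypothetical trips: per-second speed readings in mph, plus a night flag.
trips = [
    {"speeds_mph": [30, 34, 38, 41, 33, 20, 5, 0], "night": False},
    {"speeds_mph": [55, 60, 62, 48, 30, 10], "night": True},
]

HARD_BRAKE_MPH_PER_SEC = 8.0  # assumed deceleration cutoff, illustrative

def trip_features(trip):
    speeds = trip["speeds_mph"]
    # Deceleration between consecutive one-second samples (positive = slowing).
    decels = [a - b for a, b in zip(speeds, speeds[1:])]
    return {
        "hard_brakes": sum(d >= HARD_BRAKE_MPH_PER_SEC for d in decels),
        "miles": sum(speeds) / 3600.0,  # mph sampled each second -> miles
        "night": trip["night"],
    }

features = [trip_features(t) for t in trips]
total_miles = sum(f["miles"] for f in features)
night_share = sum(f["night"] for f in features) / len(features)
hard_brakes_per_100mi = 100 * sum(f["hard_brakes"] for f in features) / total_miles
```

Aggregates like hard-brake rate per hundred miles, night-driving share, and annualized mileage then enter the pricing model as ordinary rating variables.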

Beyond personal auto, similar IoT-driven data revolutions are transforming other lines. Smart home devices (water leak sensors, smoke detectors, security systems) provide real-time property risk data. Wearable fitness trackers inform life and health underwriting, with programs like John Hancock's Vitality offering premium incentives tied to health behaviors. In commercial lines, fleet telematics, workplace safety sensors, and industrial IoT enable continuous risk monitoring rather than point-in-time underwriting assessments.

The InsurTech Platform Ecosystem: From Point Solutions to Agentic AI

The InsurTech sector has matured significantly since the initial wave of venture-funded disruption in the mid-2010s. In underwriting analytics specifically, two categories of platforms have emerged as particularly influential: industry-wide predictive intelligence platforms and full-lifecycle underwriting systems powered by agentic AI.

Gradient AI exemplifies the first category. The company leverages a federated data lake encompassing tens of millions of policies and claims across workers' compensation, commercial auto, group health, and general liability. Its predictive models allow carriers, MGAs, and TPAs to score individual submissions for expected profitability, flag emerging claims trends before they materialize in loss ratios, and benchmark their portfolio performance against industry patterns. In April 2025, Gradient launched an enhanced workers' compensation underwriting risk score that improves risk segmentation granularity while aligning with evolving state regulatory frameworks.

Federato represents the second category: full-lifecycle platforms with AI at their architectural core rather than bolted onto legacy systems. The company raised $100 million in Series D funding in November 2025 led by Goldman Sachs, having more than tripled its revenues during the prior year. Federato's platform provides real-time portfolio visibility, AI-driven appetite scoring, automated submission triage, and what the company describes as "agentic AI," intelligent systems that not only process data but proactively guide underwriters toward optimal risk selection decisions. The platform reportedly reduces quote processing times by up to 89% while improving hit ratios and portfolio balance.

The distinction between traditional AI and agentic AI is significant for the industry's trajectory. Traditional predictive models provide scores and recommendations that human underwriters interpret and act upon. Agentic AI systems can independently execute multi-step workflows (extracting submission data from emails, scoring risk appetite, pulling third-party data via API, generating quotes, and flagging complex referrals for human review) with minimal human intervention. McKinsey's 2025 report identifies three AI disciplines transforming insurance: traditional analytical AI for pattern recognition, generative AI for unstructured data processing and communication, and agentic AI for autonomous workflow execution.

This evolution has direct implications for the actuarial profession. Actuaries increasingly find themselves designing, validating, and governing the predictive models embedded within these platforms rather than manually building one-off models in isolation. The governance function (ensuring models perform as intended, do not produce unfairly discriminatory outcomes, and remain calibrated as data distributions shift over time) has become as important as the model-building function itself.

Regulatory Landscape: The NAIC Model Bulletin and State AI Laws

The regulatory environment governing predictive analytics in insurance has intensified significantly. The NAIC adopted its Model Bulletin on the Use of Artificial Intelligence Systems by Insurers in December 2023, and as of early 2025, 24 states had adopted it in full or substantially similar form. The Model Bulletin requires insurers to establish a documented AI program (AIS Program) governing all AI systems used in regulated decisions, with provisions for governance, risk management, internal audit, bias testing, and third-party vendor oversight.

The NAIC's own surveys reveal both the breadth of AI adoption and the gaps in governance. Across surveyed lines, 92% of health insurers, 88% of auto insurers, 70% of home insurers, and 58% of life insurers reported current or planned AI usage. Yet nearly one-third of health insurers still did not regularly test their models for bias or discrimination, a finding that has accelerated regulatory attention to enforcement.

In 2025 and 2026, several regulatory developments are converging to heighten compliance requirements. The NAIC's Big Data and Artificial Intelligence Working Group is developing an AI Systems Evaluation Tool (a standardized questionnaire and checklist for use in regulatory examinations) with a pilot project summarized in February 2026. The Working Group has also issued a request for information regarding a potential NAIC Model Law on the Use of Artificial Intelligence in the Insurance Industry, which would carry greater legal weight than the current model bulletin. A separate Third-Party Data and Models Working Group is developing oversight frameworks for vendors that supply data and models to insurers, with a model law anticipated in 2026.

At the state level, Colorado's AI Act (SB 24-205) represents the most comprehensive state framework. Originally set to take effect on February 1, 2026, the law's implementation was postponed to June 30, 2026 following legislative negotiations. The law requires developers and deployers of high-risk AI systems, explicitly including insurance underwriting, pricing, and claims, to implement risk management programs, conduct annual impact assessments, test for algorithmic discrimination, provide consumer disclosures when AI contributes to adverse decisions, and offer appeal processes with human review. Colorado's separate insurance-specific regulation under C.R.S. § 10-3-1104.9 already requires quantitative bias testing for life insurers using external consumer data and predictive models, with extension to private passenger auto and health expected.

For actuaries, these regulatory requirements translate into concrete professional obligations. Model validation documentation must demonstrate that predictive models do not produce unfairly discriminatory outcomes across protected classes. Disparate impact testing, which analyzes whether model outputs systematically disadvantage racial, ethnic, or other protected groups even when race is not an explicit input, is becoming a standard component of model governance. Explainability requirements mean that "black box" models deployed without interpretability tools face increasing regulatory risk, reinforcing the value of GLMs and SHAP-based explanation methods for tree-based and neural network models.
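A first-pass disparate impact screen often starts with an adverse impact ratio in the spirit of the "four-fifths" rule. The sketch below uses entirely hypothetical group labels and counts; production-grade testing involves substantially richer statistical methods (significance tests, proxy analysis, controls for legitimate rating factors) than this single ratio:

```python
# favorable_outcomes / total_applicants per group; all figures hypothetical.
decisions = {
    "group_a": (820, 1000),
    "group_b": (640, 1000),
}

rates = {g: fav / total for g, (fav, total) in decisions.items()}
reference_rate = max(rates.values())
impact_ratios = {g: r / reference_rate for g, r in rates.items()}

# Flag any group whose favorable-outcome rate falls below 80% of the
# most-favored group's rate.
flagged = [g for g, ratio in impact_ratios.items() if ratio < 0.80]
```

Here group_b's ratio (0.64 / 0.82, about 0.78) falls below the 0.80 threshold and would be flagged for deeper investigation, not automatically judged discriminatory.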

Professional Credentialing: SOA Exam PA, ATPA, and CAS PCPA

Both the SOA and CAS have responded to the profession's predictive analytics transformation by embedding these skills deeply into their credentialing pathways, a signal of how essential data science competencies have become for practicing actuaries.

The SOA's Exam PA (Predictive Analytics) has been an ASA requirement since December 2018. The exam tests candidates' ability to apply statistical modeling and data analytics techniques (including multiple linear regression, regularization, GLMs, decision trees, random forests, and gradient boosting) to solve business problems using R. Since its inception, Exam PA has seen 27,028 total attempts across 15 sittings, with 17,041 passes and a cumulative effective pass rate of 63%, according to Actuarial Lookup. The most recent sitting, in October 2025, saw 2,027 candidates sit for the exam, with a 66.7% pass rate. Pass rates have generally stabilized in the mid-60% range after early sittings produced pass rates below 55%, suggesting candidates and study materials have matured alongside the exam.

Beyond PA, the SOA offers the Advanced Topics in Predictive Analytics (ATPA) assessment, which extends into more sophisticated techniques including deep learning, natural language processing, and advanced model interpretability methods. ATPA is required for candidates who did not obtain credit for the now-discontinued Exam IFM, further embedding data science competencies into the ASA pathway.

The CAS took a decisive step by making its Property and Casualty Predictive Analytics (PCPA) requirement mandatory for all ACAS candidates effective January 1, 2026. PCPA consists of two components: a two-hour, 40-question multiple-choice examination on predictive modeling fundamentals administered on-demand at Pearson VUE test centers, and a two-week experiential project in which candidates build a GLM to address an insurance business problem and submit a technical report. The exam covers GLMs (Gamma, Poisson, binomial, Tweedie), regularization, decision trees, random forests, gradient boosting, model validation, and communication of results, directly reflecting the modeling techniques used in P&C pricing and reserving practice.

The CAS strongly recommends candidates complete MAS-I, MAS-II, and Exam 5 (Basic Ratemaking and Estimating Claim Liabilities) before attempting PCPA, as the exam and project assume foundational statistical and actuarial knowledge from those prerequisites. The PCPA exam fee is $300 per attempt (with up to three attempts within one year), and the project fee is $700 per attempt, making the total investment significant and reinforcing the CAS's emphasis on demonstrable modeling competency.

For the profession broadly, these credentialing requirements represent an acknowledgment that predictive analytics is no longer a specialized niche; it is core to actuarial practice across every line of business. Candidates who invest in deep understanding of both the statistical theory and the practical application of these methods position themselves for the highest-demand roles in the market.

What This Means for Actuaries: Career Implications and Skills Demand

The integration of predictive analytics into underwriting has created pronounced shifts in actuarial career paths and skills demand. From tracking job postings and compensation trends over the past year, several patterns are clear.

First, the boundary between "traditional actuary" and "data scientist" continues to blur. Employers increasingly seek candidates who combine actuarial credentialing with proficiency in Python or R, SQL, cloud computing platforms (AWS, Azure, GCP), and version control systems like Git. The ability to build, validate, and deploy predictive models in production environments, not just analyze data in spreadsheets, has become a differentiating qualification.

Second, model governance and validation roles have expanded rapidly. As regulatory requirements intensify and the volume of deployed models grows, insurers need actuaries who can conduct independent model reviews, perform disparate impact testing, document model limitations, and ensure compliance with NAIC Model Bulletin requirements and state-specific AI laws. These roles demand a combination of statistical expertise, regulatory knowledge, and communication skills that is distinctly actuarial.

Third, the InsurTech ecosystem has created new career paths outside traditional carrier and consulting roles. Platforms like Gradient AI, Federato, Shift Technology, Zesty.ai, and others actively recruit actuaries who can bridge the gap between data science innovation and insurance domain knowledge. These roles often offer competitive compensation and exposure to cutting-edge technology while leveraging the actuarial skillset.

McKinsey's insurance AI report notes that carriers taking a domain-based approach to AI, organizing transformation around specific business functions like underwriting, claims, or distribution rather than pursuing horizontal technology projects, have seen the strongest results, with sales conversion improvements of 10-20% and premium growth of 10-15%. Actuaries who understand both the technical capabilities and the business context of predictive analytics are essential to executing this domain-based strategy.

For exam candidates, the message is clear: invest deeply in the predictive analytics requirements. Exam PA, ATPA, and PCPA are not just credentialing hurdles; they are direct preparation for the analytical work that defines modern actuarial practice. Candidates who supplement their exam preparation with hands-on projects, Kaggle competitions, or open-source contributions will find themselves particularly well-positioned.

Outlook: The Next Frontier of Actuarial Intelligence

The trajectory of predictive analytics in insurance underwriting points toward several developments that will shape actuarial practice over the next three to five years.

Agentic AI will move from pilot programs to production deployment, automating increasingly complex underwriting workflows while creating new governance and oversight challenges. The NAIC's exploration of model law authority for AI governance, along with individual state initiatives like Colorado's, will establish the regulatory frameworks within which this automation operates.

Explainable AI (XAI) will become a non-negotiable requirement, not just a nice-to-have. Methods like SHAP (SHapley Additive exPlanations), LIME (Local Interpretable Model-agnostic Explanations), and partial dependence plots will be standard components of model documentation, enabling actuaries to explain individual underwriting decisions to regulators, policyholders, and internal stakeholders.

The data available for underwriting will continue to expand exponentially. Connected vehicles, projected to represent 90% of new U.S. vehicle sales, will provide continuous driving behavior data. Smart home and building sensors will transform property underwriting. Satellite imagery and geospatial analytics will refine catastrophe exposure assessment. Wearable health devices will reshape life and disability underwriting. Each data source creates modeling opportunities and raises questions about privacy, consent, and fairness that actuaries must help navigate.

For the actuarial profession, predictive analytics is not a separate discipline to be mastered in isolation; it is the medium through which actuarial judgment is increasingly expressed, validated, and deployed. The actuaries who thrive in this environment will be those who combine deep statistical knowledge with domain expertise, regulatory awareness, and the communication skills to translate model outputs into business decisions. That combination, quantitative rigor informed by professional judgment, has always been the actuarial profession's distinctive value proposition. Predictive analytics simply provides more powerful tools to deliver on it.

Sources

  • McKinsey & Company, "The Future of AI in the Insurance Industry," July 2025 - mckinsey.com
  • McKinsey & Company, "AI in Insurance: Understanding the Implications for Investors," February 2026 - mckinsey.com
  • McKinsey & Company, "The Potential of Gen AI in Insurance: Six Traits of Frontrunners," 2025 - mckinsey.com
  • NAIC, "Insurance Topics: Artificial Intelligence," 2025 - naic.org
  • NAIC, "Big Data and Artificial Intelligence (H) Working Group," February 2026 - naic.org
  • NAIC, "Implementation of NAIC Model Bulletin: Use of Artificial Intelligence Systems by Insurers," 2025 - naic.org
  • Holland & Knight, "The Implications and Scope of the NAIC Model Bulletin on the Use of AI by Insurers," May 2025 - hklaw.com
  • Fenwick, "Tracking the Evolution of AI Insurance Regulation," December 2025 - fenwick.com
  • Colorado General Assembly, "SB24-205: Consumer Protections for Artificial Intelligence," 2024 - leg.colorado.gov