Insurance executives have never been more enthusiastic about artificial intelligence. Nearly 90% now identify AI as a top strategic initiative, and full AI adoption across insurance value chains jumped from 8% to 34% between 2024 and 2025 alone, according to the Roots Automation State of AI Adoption in Insurance 2025 survey. A separate Conning survey found that 90% of U.S. insurers were in some stage of generative AI evaluation by mid-2025, with 55% already in early or full adoption.
Yet here is the uncomfortable reality facing the profession: the governance frameworks, standards of practice, and regulatory guardrails that actuaries rely on were built for a world where model development took months and deployment required IT sign-off cycles. That world no longer exists. Actuaries today find themselves caught between C-suite demands for rapid AI integration and professional standards that, while philosophically sound, offer limited practical guidance for governing agentic AI systems, large language models, or real-time machine learning pipelines.
Tracking AI governance discussions across the SOA, CAS, and the American Academy of Actuaries over the past year reveals a consistent pattern: the profession recognizes the problem and has begun publishing frameworks and discussion papers, but it remains in the early stages of translating principles into enforceable, practical standards. Meanwhile, management teams are deploying AI into production environments at a pace that makes traditional actuarial review cycles feel like relics of a different era.
This article examines the specific dimensions of the governance gap, what existing standards actually require (and where they fall short), the regulatory landscape that is rapidly taking shape around insurers’ AI use, and practical steps actuaries can take to protect both consumers and their own professional standing.
The Speed Mismatch: AI Deployment vs. Standards Development
The velocity of AI adoption in insurance has no historical precedent. ChatGPT reached 100 million users within two months of its public launch, and generative AI achieved a 39.5% adoption rate within two years, a milestone that took the internet a decade to reach. The insurance industry has mirrored this acceleration. According to NAIC survey data compiled across multiple lines, AI and machine learning adoption rates are strikingly high: 92% of health insurers, 88% of auto insurers, 70% of home insurers, and 58% of life insurers report current or planned AI usage.
On the management side, the pressure metrics tell a clear story. The Roots survey found that 82% of insurance executives view AI as a strategic corporate initiative for improving financial and operational performance, with 44% saying competitors’ AI announcements directly influence their own strategies. Speed to ROI (43%) and speed to production (42%) rank as top implementation priorities. When C-suite leaders are measuring success in weeks and months, the traditional actuarial model development lifecycle, which often runs on quarterly or annual review cycles, creates an obvious friction point.
On the standards side, development timelines operate at institutional speed. ASOP No. 56 (Modeling), the primary actuarial standard governing model-related work, took roughly a decade from initial development to its effective date of October 1, 2020. The Actuarial Standards Board first began exploring a comprehensive modeling standard in the late 1990s, created formal task forces in 2012, released its first exposure draft in 2013, and voted to adopt the final standard in December 2019. That kind of deliberate, multi-year process is appropriate for setting durable professional standards. But it also means that ASOP No. 56 was finalized before GPT-3 was released and years before generative AI reshaped how models can be built, tested, and deployed.
What ASOP No. 56 Actually Requires (And Where It Gets Thin)
To understand the governance gap, it helps to examine what the existing professional framework actually says about AI governance.
ASOP No. 56 applies to all actuaries performing services related to designing, developing, selecting, modifying, or using models, across every practice area. Its scope is deliberately comprehensive: any system that processes inputs to produce outputs that have a material effect on intended users falls within its purview. AI systems, including machine learning models and generative AI tools, are models under this definition, and ASOP No. 56 applies.
The standard requires actuaries to evaluate a model’s appropriateness for its intended use, assess the quality of data, assumptions, and model structure, perform validation and testing, ensure appropriate model governance and controls, and disclose material limitations or known weaknesses.
These are sound principles. But as the American Academy of Actuaries’ September 2024 professionalism discussion paper on generative AI noted, applying ASOP No. 56 to modern AI raises several practical challenges:
Explainability and transparency. ASOP No. 56 expects actuaries to understand a model’s structure and assess its appropriateness. For traditional GLMs or even gradient-boosted models, this is feasible. For large language models with billions of parameters, the concept of “understanding model structure” takes on an entirely different meaning. As Sergey Filimonov observed at the 2025 CAS RPM Seminar, these models function as black boxes, which directly conflicts with the transparency valued in actuarial work.
Validation of non-deterministic outputs. ASOP No. 56 requires validation and testing to confirm a model “reasonably represents that which is intended to be modeled.” Generative AI outputs can vary across identical prompts. The Academy’s discussion paper explicitly cautioned that GenAI results are not always accurate and reproducible, making traditional validation approaches insufficient.
Scope of actuarial responsibility. When an actuary uses a third-party AI tool, such as a vendor’s ML-based claims triage system or an LLM integrated into a reserving workflow, ASOP No. 56 still holds the actuary responsible for model output. But the actuary may have limited visibility into the third-party model’s training data, architecture, or update schedule. The Academy paper emphasized that actuaries are responsible for actuarial services they provide, including decisions about whether and how to rely upon AI tools.
Speed of model evolution. AI models in production may update continuously through retraining on new data, or vendor models may change without notice. ASOP No. 56’s governance expectations were designed for models with discrete versions and periodic reviews, not for systems that evolve on rolling schedules.
The gap is not that ASOP No. 56 is wrong. It is that the standard provides high-level principles without the AI-specific practical guidance that actuaries need to apply those principles in a landscape that barely existed when the standard was finalized.
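One way to begin operationalizing the validation principle for non-deterministic tools is a simple repeatability check: run the same prompt several times and flag material variation before relying on the result. The sketch below is illustrative only, not guidance from any ASOP; `generate` is a hypothetical stand-in for whatever callable extracts a numeric answer from a model response, not a real LLM client.

```python
import statistics

def check_reproducibility(generate, prompt, n_runs=5, rel_tol=0.01):
    """Run the same prompt n_runs times and flag material variation.

    `generate` is any callable returning a numeric result extracted
    from a model response (a hypothetical stand-in here, not a real
    LLM client). rel_tol is the tolerated relative spread.
    """
    results = [generate(prompt) for _ in range(n_runs)]
    mean = statistics.mean(results)
    spread = max(results) - min(results)
    # Relative spread against the mean; a zero mean is treated as unstable.
    stable = mean != 0 and spread / abs(mean) <= rel_tol
    return {"runs": results, "mean": mean, "spread": spread, "stable": stable}

# Example with a deterministic stand-in for an LLM-backed estimator.
report = check_reproducibility(lambda p: 1250.0, "IBNR estimate for Q3", n_runs=3)
```

A check like this does not make an LLM explainable, but it does give the actuary a documented, repeatable artifact showing whether the output was stable enough to rely on.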
The Regulatory Patchwork: NAIC, States, and the EU
While actuarial standards evolve at institutional speed, insurance regulators have moved more aggressively, though unevenly, to address AI governance.
The NAIC Model Bulletin and State Adoption
The NAIC adopted its Model Bulletin on the Use of Artificial Intelligence Systems by Insurers in December 2023. The bulletin requires insurers to develop, implement, and maintain a written AI governance program (an “AIS Program”) covering responsible use of AI systems in regulated insurance practices. It emphasizes transparency, fairness, accountability, and risk management, and advises insurers that regulators may request documentation about AI governance during examinations.
As of early 2025, 24 states had adopted the NAIC Model Bulletin with limited modifications, and additional states had enacted related regulations. However, enforcement remains in its early stages. As Holland & Knight noted, there does not yet appear to be significant enforcement activity in states that have adopted the bulletin, though insurers should anticipate increasing regulatory oversight.
The NAIC’s Big Data and Artificial Intelligence (H) Working Group has continued developing an AI Systems Evaluation Tool, a set of questionnaires and checklists intended to standardize regulatory assessments of insurers’ AI governance. The working group held multiple meetings through early 2026 to refine this tool, with discussions continuing at the Spring 2026 National Meeting.
A critical finding from NAIC survey work deserves special attention: according to Fenwick’s analysis of the 2025 health insurance survey, nearly one-third of health insurers still do not regularly test their models for bias or discrimination, despite the Model Bulletin’s recommendation for such practices. This statistic alone illustrates the governance gap in concrete terms.
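Basic bias testing need not be elaborate. One common first-pass check is the adverse impact ratio: each group’s favorable-outcome rate divided by the highest group’s rate. The sketch below uses entirely hypothetical decisions and group labels; it illustrates the arithmetic, not any regulator’s prescribed methodology.

```python
def adverse_impact_ratio(decisions, groups):
    """Favorable-outcome rate of each group divided by the highest rate.

    decisions: list of 0/1 outcomes; groups: parallel list of labels.
    A ratio below roughly 0.8 is a common (though rough) flag for
    further review, not a legal threshold in itself.
    """
    rates = {}
    for g in set(groups):
        outcomes = [d for d, gg in zip(decisions, groups) if gg == g]
        rates[g] = sum(outcomes) / len(outcomes)
    top = max(rates.values())
    return {g: r / top for g, r in rates.items()}

# Hypothetical decisions from an AI-assisted underwriting triage step.
decisions = [1, 1, 1, 0, 1, 0, 1, 0, 0, 0]
groups    = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]
ratios = adverse_impact_ratio(decisions, groups)
```

Real bias testing across protected classes requires far more care (sample sizes, proxy variables, actuarially justified rating factors), but even a check this simple produces the kind of documented artifact the Model Bulletin contemplates.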
The NAIC membership itself is divided on how far to go. At an October 2025 discussion reported by S&P Global, some commissioners pushed for a model law with stronger enforcement mechanisms, while others argued that the existing bulletin plus evaluation tool provides sufficient regulatory footing. Colorado Insurance Commissioner Michael Conway warned that if the NAIC does not fill the regulatory void, other bodies (state legislatures and federal agencies) will step in.
Adding another layer of complexity, the NAIC expressed deep concern in December 2025 over a federal executive order that could limit state regulatory authority over AI in insurance, potentially creating tension between state-level consumer protections and federal preemption.
Colorado’s AI Act: The Leading Edge
Colorado represents the most aggressive state-level approach to AI regulation affecting insurance. The Colorado AI Act (SB 24-205), signed in May 2024, explicitly covers insurance as a “consequential decision” domain. After delays, the law is set to take effect on June 30, 2026.
The law requires both developers and deployers of high-risk AI systems to use “reasonable care” to protect consumers from algorithmic discrimination. For insurers, this means conducting documented bias testing across protected classes, performing impact assessments, providing consumer disclosures when AI influences decisions, and establishing appeals processes with human review. Compliance with NIST’s AI Risk Management Framework or ISO/IEC 42001 creates an affirmative defense.
For actuaries working at carriers with Colorado exposure, the practical implications are substantial. Pricing models, underwriting algorithms, and claims triage systems that use AI will need documented governance artifacts that may go well beyond what current actuarial workflows produce.
The EU AI Act: Global Implications
The EU AI Act explicitly classifies life and health insurance risk assessment and pricing as high-risk AI applications. When high-risk system requirements take full effect on August 2, 2026, providers and deployers must comply with detailed obligations around risk management, data governance, transparency, human oversight, and conformity assessment. Penalties can reach 35 million euros or 7% of global annual revenue, whichever is higher.
As Milliman’s analysis noted, the Act’s broad definition of AI means that even more traditional actuarial models could fall within scope depending on their use and impact. Forvis Mazars has specifically highlighted the emerging role of the “compliance actuary” who bridges traditional actuarial expertise with AI governance responsibilities.
Notably, AI literacy requirements under the EU AI Act have already been in effect since February 2025, requiring organizations to ensure staff involved with AI systems have sufficient understanding of AI capabilities, limitations, and risks.
Where the Gap Is Most Dangerous
Where these pressures intersect in practice, several specific scenarios stand out as the highest governance risk for actuaries today:
Vendor “black-box” models with actuarial sign-off. When carriers adopt third-party AI tools for pricing or reserving, actuaries may be asked to validate or sign off on model outputs without meaningful access to model internals. ASOP No. 56 still holds the actuary responsible, but the profession lacks specific guidance on what “sufficient validation” looks like when the model vendor treats its architecture as proprietary. The NAIC’s Third-Party Data and Models (H) Working Group is developing a regulatory framework for third-party AI, with a model law anticipated in 2026.
Generative AI in actuarial communications. Actuaries increasingly use LLMs to draft memos, summarize data, or accelerate report writing. The Academy’s professionalism discussion paper was explicit: an actuary following the ASOPs cannot use a GenAI result without validation and simply say “that’s what the model told me.” But practical guidance on what validation of LLM-generated actuarial content looks like remains scarce.
Continuous learning models without governance cycles. Traditional model governance assumes periodic review and approval cycles. Models that retrain on streaming data do not fit this paradigm. If a pricing model’s behavior shifts between quarterly reviews because of automated retraining, the governance framework may not catch it.
Management pressure to skip governance. The hyperexponential 2025 State of Pricing report found that 99% of insurers struggle to get their technology tools working as hoped, but expectations keep rising. When AI adoption becomes a competitive imperative, with 44% of executives citing competitor activity as a driver, governance processes that slow deployment face organizational headwinds. That 38% of actuaries cited “lack of underwriter or business buy-in” as a barrier to deploying pricing models suggests that professional caution is sometimes viewed as an obstacle rather than a safeguard.
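The continuous-retraining scenario above is the one most amenable to automated monitoring between review cycles. A standard tool is the population stability index (PSI), which compares a model’s current score distribution against the distribution at the last formal review. The sketch below is a minimal, dependency-free implementation under simplified assumptions (equal-width bins, synthetic scores); the thresholds named are conventions, not standards.

```python
import math

def population_stability_index(expected, actual, bins=10):
    """PSI between a baseline score distribution and a current one.

    Values above roughly 0.25 are often read as significant drift,
    though any threshold should be set by the reviewing actuary.
    """
    lo = min(min(expected), min(actual))
    hi = max(max(expected), max(actual))
    width = (hi - lo) / bins or 1.0   # guard against a degenerate range

    def shares(xs):
        counts = [0] * bins
        for x in xs:
            idx = min(int((x - lo) / width), bins - 1)
            counts[idx] += 1
        # Floor each share to avoid log(0) on empty bins.
        return [max(c / len(xs), 1e-6) for c in counts]

    e, a = shares(expected), shares(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

baseline = [i / 100 for i in range(100)]          # scores at last review
unchanged = list(baseline)                        # no retraining effect
shifted = [min(s + 0.3, 0.99) for s in baseline]  # drifted distribution

psi_same = population_stability_index(baseline, unchanged)
psi_shift = population_stability_index(baseline, shifted)
```

Wiring a check like this into a scheduled job, with alerts routed to the responsible actuary, turns “the model changed between quarterly reviews” from an unknowable risk into a logged, escalatable event.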
What the Profession Is Doing About It
Despite the gap, the actuarial profession has not been idle. Several significant initiatives are underway:
IAA Artificial Intelligence Task Force. The International Actuarial Association completed its first phase of AI-related deliverables in 2024, covering professionalism, education, the changing role of actuaries, governance, and innovation across five workstreams. The second phase, running through 2025 and 2026, focuses on helping actuaries worldwide become “fully AI-enabled,” with the IAA explicitly stating that AI-enabled actuaries will eventually replace actuaries without AI capability.
Academy Professionalism Resources. The Academy’s September 2024 discussion paper on generative AI professionalism provided the most comprehensive existing guidance on applying ASOPs to AI. A March 2026 article in the Academy’s Contingencies magazine, “Actuarial & Algorithmic Accountability,” argued that actuaries are uniquely positioned to lead ethical AI governance in insurance, given the profession’s existing grounding in fairness, transparency, and accountability.
SOA Research and Education. The SOA’s AI Working Group published an expert panel discussion on AI risk management frameworks in 2025, focusing on the NIST AI RMF and its applicability to actuarial contexts. The SOA also hosted a December 2025 webcast on AI governance and actuarial professionalism covering bias, explainability, hallucinations, model drift, and a practical governance framework aligned with the Actuarial Code of Conduct. The SOA published a comprehensive January 2026 article on navigating AI transformation in actuarial science, calling for governance frameworks that safeguard trust and accountability.
CAS AI Fast Track and Community. The CAS and iCAS developed the AI Fast Track program, with Max Martinelli, who co-designed the program, emphasizing that actuarial domain knowledge, not just technical skill, is key to responsible AI integration. The capstone session, “Mind, Model, Morality,” explicitly addresses bias, judgment, and governance in the context of actuarial standards.
NIST AI RMF as a Common Language. The NIST AI Risk Management Framework has emerged as a practical bridge between actuarial governance and broader organizational AI oversight. Its core functions (Govern, Map, Measure, Manage) parallel the actuarial control cycle, giving actuaries a common vocabulary to engage with data scientists, compliance teams, and executives. As one expert panelist in the SOA’s AI risk management research stated, the most important takeaway is that AI safety is not just about engineering but about collaboration, ethics, and context.
Practical Steps for Actuaries Navigating the Gap
Until standards and regulations fully catch up to AI deployment reality, practicing actuaries can take concrete steps to manage their professional exposure and protect consumers:
- Document everything, even when not explicitly required. Create written records of AI governance decisions: why a model was selected, what validation was performed, what limitations were identified, and what was communicated to intended users. If an AI-related issue later arises, documentation is the actuary’s primary defense.
- Engage with NIST AI RMF proactively. Familiarizing yourself with the NIST framework is one of the highest-return investments an actuary can make in 2026. It provides structure that current ASOPs lack, and it aligns with both the Colorado AI Act’s safe harbor provisions and the EU AI Act’s recognized standards. Start with the executive summary, which is short and actionable.
- Push for model inventory and classification. Many organizations lack a basic inventory of their AI systems in production. Volunteering to lead or participate in an AI model inventory initiative positions actuaries as governance leaders while addressing a fundamental compliance requirement under the NAIC Model Bulletin and state-level regulations.
- Define validation standards for third-party AI. When your organization adopts vendor AI tools, work to establish contractual requirements for model documentation, bias testing results, and notification of material model changes. The NAIC’s forthcoming third-party model oversight framework, expected in 2026, will likely formalize these expectations.
- Stay current with evolving standards. Monitor the ASB for any AI-specific guidance or updates to ASOP No. 56, follow the Academy’s professionalism resources, and track NAIC working group developments. The regulatory and professional landscape is shifting rapidly enough that quarterly check-ins may not be sufficient.
- Build cross-functional relationships. AI governance is inherently interdisciplinary. As the NAIC Model Bulletin and multiple professional sources emphasize, effective governance requires collaboration across actuarial, data science, compliance, legal, and executive functions. Actuaries who build these relationships are better positioned to influence how AI is deployed.
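To make the inventory suggestion above concrete, the sketch below shows a minimal record an AI model inventory might track per system. Every field name is illustrative, not taken from any NAIC template or regulatory form.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class ModelInventoryEntry:
    """Minimal fields a governance inventory might track per AI system.

    Field names are illustrative only, not drawn from any NAIC or
    state regulatory template.
    """
    model_id: str
    name: str
    owner: str
    use_case: str          # e.g. pricing, reserving, claims triage
    model_type: str        # GLM, GBM, LLM, vendor black-box, ...
    third_party: bool
    last_validated: date
    bias_tested: bool
    materiality: str       # low / medium / high

    def needs_review(self, as_of: date, max_age_days: int = 365) -> bool:
        # Flag entries whose last validation is stale or that lack bias testing.
        age = (as_of - self.last_validated).days
        return age > max_age_days or not self.bias_tested

entry = ModelInventoryEntry(
    model_id="PRC-001", name="Auto pricing GBM", owner="Pricing Actuarial",
    use_case="pricing", model_type="GBM", third_party=False,
    last_validated=date(2025, 3, 1), bias_tested=False, materiality="high",
)
flagged = entry.needs_review(as_of=date(2026, 1, 15))
```

Even a flat list of records like this, reviewed quarterly, answers the first question an examiner is likely to ask under the Model Bulletin: what AI systems are in production, who owns them, and when were they last validated?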
Looking Ahead: Will the Gap Close?
The trajectory of 2026 suggests several inflection points. Colorado’s AI Act enforcement begins June 30. The EU AI Act’s high-risk system requirements take full effect August 2. The NAIC continues developing its AI Systems Evaluation Tool and debating whether to advance a model law. And actuarial organizations are accelerating their guidance production.
Judging by these parallel tracks, the gap will narrow in 2026 but likely not close. The profession’s governance frameworks need to evolve from principle-based aspirations to operational playbooks, which is work that takes time and practitioner input. But the direction is clear. As CAS Past President Dave Cummings stated in his 2025 presidential address, the actuarial profession is one that collectively advances its knowledge and instills ethics to guide through times of change and innovation.
For individual actuaries, the most important insight may be this: waiting for perfect guidance is itself a governance failure. The profession’s existing principles, including competence, integrity, transparency, and public protection, provide sufficient ethical foundation. What is needed is the professional courage to apply those principles in new contexts, even when specific guidance has not yet caught up.
The AI governance gap is real. But so is the actuarial profession’s capacity to close it, one documented decision, one validated model, and one principled conversation with management at a time.
Sources
- Roots Automation, “State of AI Adoption in Insurance 2025” - Survey data on executive AI priorities, adoption stages, and implementation barriers.
- Conning, “AI in Insurance: The C-Suite Verdict” (2025) - 90% of respondents in some stage of GenAI evaluation; 55% in early or full adoption.
- NAIC, Insurance Topics: Artificial Intelligence - AI adoption survey data across insurance lines and regulatory context.
- NAIC Model Bulletin: Use of Artificial Intelligence Systems by Insurers (December 2023) - Full text of the model bulletin establishing insurer AI governance expectations.
- Quarles & Brady, “Nearly Half of States Have Now Adopted NAIC Model Bulletin” (March 2025) - 24-state adoption tracker.
- Holland & Knight, “The Implications and Scope of the NAIC Model Bulletin” (May 2025) - Analysis of enforcement status and compliance expectations.
- Fenwick, “Tracking the Evolution of AI Insurance Regulation” (December 2025) - NAIC survey findings, third-party model oversight, and 2026 regulatory outlook.
- S&P Global, “NAIC Membership Divided on Developing AI Model Law” (October 2025) - Commissioner debate on model law vs. bulletin approach.
- NAIC Statement on AI Executive Order (December 2025) - Concerns over federal preemption of state AI regulation.
- Actuarial Standards Board, ASOP No. 56: Modeling - Full text and development history.
- American Academy of Actuaries, “Actuarial Professionalism Considerations for Generative AI” (September 2024) - Discussion paper on applying ASOPs to AI tools.
- American Academy of Actuaries, “Actuarial & Algorithmic Accountability” (March 2026) - Article on actuaries’ role in ethical AI governance.
- SOA, “Navigating the AI Transformation in Actuarial Science” by Carlos Arocha (January 2026) - Comprehensive overview of AI’s impact on actuarial practice and governance needs.
- SOA, “AI Risk Management Frameworks: An Expert Panel Discussion” (2025) - Expert insights on NIST AI RMF and actuarial applications.
- SOA, “Future Days: AI Governance & Actuarial Professionalism” Webcast (December 2025) - Governance risks, bias, explainability, and practical frameworks.
- CAS Actuarial Review, “The AI Moment in Insurance” (September 2025) - CAS perspective on AI adoption, Fast Track program, and professional challenges.
- Baker Botts, “Colorado AI Act Implementation Delayed” (September 2025) - Colorado SB 24-205 timeline, obligations, and NIST safe harbor.
- European Commission, AI Act Overview - Official EU AI Act framework and implementation timeline.
- Milliman, “The AI Act’s Impact on Insurance” - Analysis of EU AI Act implications for actuarial models and insurance.
- Forvis Mazars, “The Impact of the EU AI Act on the Sector and the Emerging Role of the Compliance Actuary” - Discussion of actuaries’ evolving compliance role under the AI Act.
- International Actuarial Association, Artificial Intelligence Task Force - IAA AI workstreams and 2025-2026 phase two objectives.
- hyperexponential / Carrier Management, “Underwriter, Actuary Fears of AI Drop” (December 2025) - 2025 State of Pricing survey data on AI adoption barriers and collaboration gaps.
- The Actuary (IFoA), “Check Your AI: A Framework for Its Use in Actuarial Practice” (June 2025) - Ethical AI lifecycle framework integrated with the actuarial control cycle.
- SOA AI Bulletin, “NIST AI RMF and Actuarial Practice” (July 2025) - Interview on NIST framework applicability and bias in AI systems.
- Kennedys Law, “Understanding the NAIC Model AI Bulletin” (January 2025) - Legal analysis of AIS Program requirements and governance structure.
Further Reading on actuary.info
- AI in Insurance Underwriting 2026 - How carriers are deploying AI in underwriting and the adoption trends driving change.
- NAIC Regulatory Developments 2026 - Broader regulatory landscape including model laws, climate risk disclosure, and AI governance.
- AI in Actuarial Science - Comprehensive look at how AI is transforming actuarial workflows, from pricing to reserving.
- Predictive Analytics in Underwriting - Technical overview of ML models used in underwriting and their governance requirements.
- CAS Exam Pathway 2026 - The CAS is embedding data science and analytics competency into its credentialing, reflecting the profession’s AI trajectory.
- ASOPs 2026 Update - Current status of actuarial standards of practice and recent changes.