If you asked a room of insurance CISOs whether they were on top of AI security, most would say yes. According to The Agentic Ecosystem Security Gap: 2026 CISO Report, a survey of 500 U.S. security leaders, 82.3% of insurance CISOs claim at least some confidence in their ability to detect suspicious behavior affecting sensitive data across their SaaS and AI ecosystem. Most say they have data flow mapping. Most say they have incident response coverage.
The underlying data tells a more complicated story.
Only 22.6% of insurance CISOs claim comprehensive real-time OAuth token governance — 11.8 percentage points below the cross-industry average of 34.4% and lower than every other major vertical in the survey. Only 25.8% claim comprehensive incident response coverage for their SaaS and AI ecosystem, about 32% below the cross-industry rate. And when asked about 11 specific security capabilities, insurance CISOs cited limitations in their current tools on every single one.
The insurance sector is not unprotected. It may be looking in the wrong direction.
The financial services divergence
The sharpest signal in the insurance data is not an absolute number. It is a comparison.
Financial services and insurance operate under overlapping regulatory frameworks. Both handle sensitive personal and financial data. Both are subject to the NAIC Insurance Data Security Model Law in most states, and both face state regulatory examination authority over their third-party and vendor risk programs. For carriers with EU operations, both fall under DORA. The regulatory exposure is, for practical purposes, comparable.
The risk perception is not.
In financial services, 45.3% of CISOs characterize AI agents as a critical security risk, the highest figure of any industry and 44% above the cross-industry rate. In insurance, only 17.7% do. That is a gap of more than 27 percentage points between two sectors whose regulatory and data risk profiles are substantially similar.
Vorlon, The Agentic Ecosystem Security Gap: 2026 CISO Report
There are two ways to read that divergence. The first is that insurance CISOs have made a considered judgment that AI agent risk is less acute in their environments. The second is that lower measured concern is a downstream consequence of lower visibility: you cannot be alarmed by a threat you cannot see clearly. The OAuth governance data and the incident response coverage data both point toward the second explanation.
The OAuth governance gap
OAuth tokens are the authentication mechanism through which AI agents access SaaS systems. When an AI agent is authorized to access a policyholder management platform, an underwriting system, or a claims processing tool, that access is issued as an OAuth token — persistent, often broadly scoped, and operating in the background without re-authentication.
This is the credential layer that the agentic ecosystem runs on. And insurance has the weakest governance of it of any sector in the survey.
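Part of why this layer is hard to govern is that OAuth 2.0 access tokens are typically opaque strings: the standard way to learn what a token can do, and whether it is still live, is the token introspection response defined in RFC 7662 (`active`, `scope`, `exp`, `iat`). The sketch below shows how a governance check might evaluate one introspected token; the scope names and the 90-day staleness policy are hypothetical illustrations, not a real product rule.

```python
import time

# Hypothetical scopes that would touch policyholder data in this example
SENSITIVE_SCOPES = {"claims:read", "policyholder:write"}

def flag_token(introspection: dict, max_age_days: int = 90) -> list[str]:
    """Return governance flags for one RFC 7662 introspection response."""
    flags = []
    if not introspection.get("active", False):
        return ["inactive"]  # already revoked or expired
    granted = set(introspection.get("scope", "").split())
    if granted & SENSITIVE_SCOPES:
        flags.append("touches-sensitive-data")
    if "exp" not in introspection:
        flags.append("non-expiring")  # persistent access, no forced re-auth
    issued = introspection.get("iat")
    if issued and (time.time() - issued) > max_age_days * 86400:
        flags.append("stale")  # long-lived token that was never rotated
    return flags

# Example: an AI agent's token with a sensitive scope, no expiry, issued long ago
token_info = {"active": True, "scope": "claims:read email", "iat": 1_000_000}
print(flag_token(token_info))  # ['touches-sensitive-data', 'non-expiring', 'stale']
```

In a real environment the introspection responses would come from each identity provider's introspection endpoint; the point of the sketch is only that tokens lacking an `exp` claim, or carrying sensitive scopes, are exactly the "persistent, broadly scoped" credentials the survey asks about.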
Only 22.6% of insurance CISOs claim comprehensive real-time OAuth token governance — 34% below the cross-industry average of 34.4%. A further 19.4% report only limited or basic visibility, double the overall rate of 9.8%. Another 3.2% report no active OAuth governance at all, four times the cross-industry rate of 0.8%.
These organizations have AI agents operating with persistent access to policyholder data through credentials their security teams are not monitoring.
An insurance organization without comprehensive OAuth governance cannot know which AI integrations have persistent access to policyholder data. It cannot detect when a token has been compromised. It cannot revoke access in response to an incident. It cannot reconstruct, after the fact, what a compromised agent accessed or moved.
Under NAIC's model law, that is not just a security gap. It is a risk assessment gap: a failure to identify and document the material risks associated with third-party integrations that have access to sensitive consumer data.
Supply chain risk: The gap between concern and capability
35.5% of insurance CISOs call supply chain breach a top priority risk, 11 percentage points below the cross-industry average of 46.6% and well below the 58.5% figure in financial services. But even at that lower level of expressed concern, the capability data shows the concern is not matched by readiness.
Only 25.8% of insurance CISOs claim comprehensive incident response coverage for their SaaS and AI ecosystem. Nearly three out of four insurance organizations — organizations that acknowledge supply chain breach as at least an important risk — do not have comprehensive IR capability for the vector through which supply chain attacks most commonly execute.
The supply chain attack pattern that has defined the threat landscape over the past two years does not announce itself. It shows up as anomalous agent behavior, unusual OAuth token activity, or unexpected data movement between integrated systems. Without runtime visibility into those signals, the first indication of a supply chain breach is often the vendor's breach notification, by which point the attacker has already had access to policyholder data for days, weeks, or longer.
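Signals like unusual token activity can be caught with even a crude per-token baseline: compare a token's current activity against its own history and flag large deviations. A toy sketch, with illustrative thresholds and data, not a production detector:

```python
from statistics import mean, stdev

def is_anomalous(history: list[int], current: int, z_threshold: float = 3.0) -> bool:
    """Flag a token whose request count deviates sharply from its own baseline."""
    if len(history) < 2:
        return False  # not enough history to form a baseline
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return current != mu
    return abs(current - mu) / sigma > z_threshold

# Daily API-call counts for one agent's OAuth token (illustrative data)
baseline = [120, 135, 110, 128, 140, 125, 131]
print(is_anomalous(baseline, 129))   # normal day -> False
print(is_anomalous(baseline, 2400))  # sudden bulk data pull -> True
```

A real detector would baseline more than volume (destinations, data classes, time of day), but the prerequisite is the same: runtime telemetry per token, which is exactly what the under-governed 77% of insurance organizations lack.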
What the NAIC update means
NAIC's Insurance Data Security Model Law, adopted in 24 states, requires insurers to maintain a comprehensive information security program, conduct risk assessments of third-party service providers, and maintain a written incident response plan. The model law is actively being updated to address AI governance, and the direction of those updates maps directly to the gaps the survey surfaces.
The core obligation under the model law is not to have perfect security. It is to have a documented, risk-based program that identifies and addresses material risks. For insurers operating AI agents that access policyholder data through OAuth-connected SaaS integrations, the material risks include:
- The persistent access that OAuth tokens grant to connected systems
- The inability to detect anomalous agent behavior at the data layer
- The absence of audit trails sufficient to support post-incident investigation
- The gap between third-party vendor inventory and actual monitoring of those vendors' data access
The survey found that insurance CISOs reported limitations in their current tools across all 11 security capabilities evaluated. That pattern of broad, cross-capability limitation is not the profile of an organization that can demonstrate to a state insurance examiner that it has identified and addressed the material risks of its AI and SaaS environment.
State insurance regulators are expanding their third-party and vendor risk examination focus. The model law update will make AI governance expectations more explicit. For EU-exposed carriers, DORA's requirements (a register of all ICT third-party providers and incident reporting on strict timelines) are already in effect.
The NAIC model law requires insurers to identify and address material risks. An insurer that cannot see what its AI agents are doing with policyholder data has not identified those risks, regardless of what its information security policy document says.
What this means for insurance security teams
The insurance sector's lower AI risk perception relative to financial services may prove to be a temporary condition. As NAIC updates its model law, as state regulators increase examination focus on AI governance, and as supply chain attacks continue to execute through the SaaS integration layer, the gap between where insurance security programs are and where regulatory expectations are heading will become more visible.
The organizations best positioned are those that close the OAuth governance gap now, before an incident requires them to reconstruct agent activity they were not tracking. That means comprehensive real-time visibility into which OAuth tokens are active, what they can access, and whether they are being used consistently with their intended scope. It means IR playbooks that specifically cover AI agent incidents and SaaS supply chain events. And it means a third-party risk program that treats AI integrations as what they are: persistent, often broadly authorized, and operating at machine speed.
The financial services sector has been grappling with this problem longer. Our companion analysis from the same survey offers a view of where that sector is, and what the insurance sector can learn from it.
→ Read the full 2026 CISO Report
About Vorlon
Vorlon is the Agentic Ecosystem Security Platform built for enterprises where AI agents, SaaS applications, and third-party integrations are already handling sensitive policyholder data. Vorlon's patented DataMatrix™ technology builds a live model of how sensitive data, identities, and integrations interact across your environment, giving security teams the runtime visibility, OAuth governance, and forensic audit trail capabilities needed to detect threats, assess blast radius, and reconstruct exactly what happened across every connected system.
For insurance organizations navigating NAIC model law requirements, state regulatory examination, and the growing complexity of the agentic ecosystem, Vorlon closes the visibility gap that legacy tools were not built to address.
Frequently Asked Questions
What is the NAIC Insurance Data Security Model Law and how does it apply to AI?
The NAIC Insurance Data Security Model Law establishes requirements for insurance licensees to develop and maintain a comprehensive information security program, conduct risk assessments of third-party service providers, and maintain written incident response plans. As of 2026, the model law has been adopted in 24 states and is being updated to explicitly address AI governance. Insurers that cannot demonstrate oversight of their AI tool ecosystem will face increasing examination scrutiny as the updated model law takes effect.
What is an OAuth token and why does it create compliance risk for insurance companies?
An OAuth token is an authorization credential that allows one application, including AI agents, to access another application's data on behalf of a user or organization. In insurance environments, OAuth tokens are commonly used by AI tools to access policyholder management systems, claims platforms, underwriting databases, and other sensitive data stores. These tokens grant persistent access that does not require re-authentication, making them attractive targets and difficult to govern without purpose-built tooling. The 2026 CISO Report found that only 22.6% of insurance CISOs have comprehensive real-time OAuth governance, the lowest of any major vertical in the survey.
How are state insurance regulators approaching third-party and AI vendor risk?
State insurance regulators are expanding their market conduct examination focus to include third-party technology vendor risk and, increasingly, AI governance. Examination criteria are evolving to assess whether insurers have documented risk assessments of their SaaS and AI vendors, whether those assessments cover data access and integration risks, and whether insurers maintain incident response capabilities sufficient to respond to third-party breach events. Insurers should expect AI tool governance, including OAuth token oversight, shadow AI discovery, and AI incident response planning, to become standard examination topics within the next regulatory cycle.
What does DORA require for insurance carriers with EU operations?
The Digital Operational Resilience Act, which went live in January 2025, applies to financial entities operating within the EU, including insurance companies. DORA requires covered entities to maintain a register of all ICT third-party providers, detect and report major ICT-related incidents within strict timelines, and maintain audit logs sufficient to support incident investigation. For insurance carriers with EU operations, DORA's requirements map directly to the capabilities the 2026 CISO Report identifies as gaps across the insurance sector.
What incident response capabilities do insurance organizations need for AI-related breaches?
Insurance organizations need IR capabilities that cover the specific characteristics of agentic ecosystem incidents: the ability to detect anomalous AI agent behavior at the data layer; a mechanism to quickly identify which OAuth tokens were active and what data they had access to at the time of an incident; a process for calculating blast radius across connected SaaS systems; a forensic audit trail of agent actions; and a clearly defined ownership model for SaaS and AI incident assessment. The 2026 CISO Report found that only 25.8% of insurance CISOs claim comprehensive IR coverage, about 32% below the cross-industry rate.
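Of these capabilities, blast radius calculation is the most mechanical: given which systems each token can reach, and which systems are integrated with each other, the set of systems exposed by one compromised token is a graph traversal. A simplified sketch, with hypothetical system names standing in for a real integration inventory:

```python
from collections import deque

# Hypothetical environment: token grants and system-to-system integrations
token_access = {"agent-token-1": ["claims-platform"]}
integrations = {
    "claims-platform": ["policyholder-db", "document-store"],
    "policyholder-db": ["reporting-warehouse"],
    "document-store": [],
    "reporting-warehouse": [],
}

def blast_radius(token: str) -> set[str]:
    """All systems reachable from one compromised token via transitive integrations."""
    seen, queue = set(), deque(token_access.get(token, []))
    while queue:
        system = queue.popleft()
        if system in seen:
            continue
        seen.add(system)
        queue.extend(integrations.get(system, []))
    return seen

print(sorted(blast_radius("agent-token-1")))
```

The hard part in practice is not the traversal but populating the two maps, which requires the token inventory and integration visibility discussed throughout this report.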
Why do insurance CISOs show lower AI security concern than financial services CISOs?
The 2026 CISO Report found a gap of more than 27 percentage points between financial services CISOs (45.3%) and insurance CISOs (17.7%) who characterize AI agents as a critical security risk. One explanation is genuinely different AI agent deployment profiles. Another, supported by the OAuth governance and IR coverage data, is that insurance CISOs have lower visibility into their AI agent ecosystem than their financial services counterparts, and that lower visibility is producing lower measured concern. The OAuth governance gap, the double rate of limited OAuth visibility, and the below-average IR coverage all point to a sector that may be underestimating a risk it is not fully equipped to detect.
All data: The Agentic Ecosystem Security Gap: 2026 CISO Report. Conducted by Consensuswide, January 27 – February 9, 2026. n=500 U.S. CISOs. Vertical subsets: n=106 Financial Services U.S. CISOs; n=62 Insurance U.S. CISOs; n=52 Healthcare and Life Sciences U.S. CISOs.