Healthcare organizations face a problem that most other industries do not: the consequences of getting security wrong are not just financial and reputational. They are human. Delayed care, exposed mental health records, compromised prescription data — the downstream effects of a healthcare data breach extend far beyond the organization that experienced it.

That context matters when reading the findings from The Agentic Ecosystem Security Gap: 2026 CISO Report, a survey of 500 U.S. security leaders, because the healthcare data is, across nearly every dimension, the most concerning of any sector surveyed.

Healthcare CISOs have the lowest confidence in understanding what their AI tools are doing with sensitive data of any vertical in the survey. They are experiencing AI agent incidents at above-average rates. They have the weakest incident response coverage for SaaS and AI ecosystems. And they are the most conservative sector on security investment, despite facing the highest data breach costs of any industry, averaging $10.9 million per incident in 2025, more than double the cross-industry average.

The sector with the most to lose is moving the most slowly to close the gap.

The visibility deficit

Start with the most fundamental question a healthcare CISO can be asked about their AI security posture: do you know what your AI tools can access?

For Microsoft Copilot, a tool many healthcare organizations have actively deployed for clinical documentation, administrative automation, and communications, only 63.5% of healthcare CISOs report confidence that they know what data it can access. The cross-industry average is 83.2%. That is a gap of 19.7 percentage points on a named, sanctioned enterprise tool. For Google Gemini, the gap is 13.6 percentage points: 71.2% vs. 84.8% overall. For ChatGPT, 13.2 points: 69.2% vs. 82.4% overall.


Only 63.5% of healthcare CISOs are confident they know what data Microsoft Copilot can access in their environment — 19.7 percentage points below the cross-industry average and the lowest figure of any vertical in the survey. 

Vorlon, The Agentic Ecosystem Security Gap: 2026 CISO Report

 

This pattern is consistent across every named AI platform in the survey. Healthcare CISOs know less about what their sanctioned AI tools are doing with sensitive data than peers in every other sector. Not on one platform. On all of them.

This is not a shadow AI problem. It is a visibility problem with tools that have been deliberately deployed. If healthcare organizations cannot confidently describe what named, approved AI tools can access, the picture for shadow AI and employee-connected tools outside IT visibility is necessarily worse.

The incident rate paradox

Here is what makes that visibility deficit so alarming: healthcare is not experiencing fewer AI-related incidents as a result of lower AI deployment. It is experiencing more.

36.5% of healthcare CISOs reported suspicious activity involving AI agents in 2025, the highest of any vertical in the survey and 20% above the cross-industry average of 30.4%. Unauthorized SaaS-to-AI data exfiltration was reported by 34.6%, also above the cross-industry average of 30.8%.

Healthcare has both the lowest AI tool visibility confidence and the highest AI agent incident rate of any vertical surveyed. The sector is being hit more often by the incidents it can see least clearly.

Organizations that cannot clearly see what their AI tools are doing with sensitive data are less likely to detect suspicious behavior early, less likely to recognize an incident in progress, and less likely to respond before significant data has moved. The result is not just more incidents. It is incidents that are harder to contain, more expensive to remediate, and more complex to report under regulatory requirements that have specific timelines.

What HIPAA actually requires in the age of AI agents

HIPAA was enacted in 1996. The Security Rule was published in 2003. Neither was written with AI agents in mind, but both apply to them.

Under HIPAA, any unauthorized access to protected health information, regardless of how it occurs, is a potential breach triggering notification obligations. An AI agent that moves PHI to an unauthorized destination through a SaaS integration is a potential HIPAA breach. An OAuth token compromised by an attacker and used to access patient records through a connected application is a potential HIPAA breach. The mechanism does not change the obligation.
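The breach-assessment logic described above can be sketched in a few lines. This is a minimal, hypothetical illustration — the allow-list, event fields, and system names are assumptions for the example, not part of HIPAA or any specific product — but it shows the core check: PHI moving to a destination outside the sanctioned set is a potential breach requiring notification assessment, regardless of whether an AI agent or a compromised OAuth token moved it.

```python
# Hypothetical sketch: flag integration events that move PHI outside an
# approved destination list. All names and fields are illustrative.
from dataclasses import dataclass

# Assumed allow-list of systems sanctioned to receive PHI
APPROVED_PHI_DESTINATIONS = {"ehr.internal", "billing.internal", "copilot.sanctioned"}

@dataclass
class IntegrationEvent:
    source: str          # system the data left
    destination: str     # system the data arrived at
    contains_phi: bool   # result of data classification

def potential_breach(event: IntegrationEvent) -> bool:
    """Unauthorized PHI movement is a potential breach, whatever the mechanism."""
    return event.contains_phi and event.destination not in APPROVED_PHI_DESTINATIONS

events = [
    IntegrationEvent("ehr.internal", "copilot.sanctioned", True),
    IntegrationEvent("ehr.internal", "unknown-ai-plugin.example", True),
]
flagged = [e for e in events if potential_breach(e)]
print(len(flagged))  # 1 event flagged for breach notification assessment
```

The point of the sketch is that the check is destination-based, not mechanism-based: the second event is flagged whether the data was moved by an employee, an attacker, or an autonomous agent.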

The proposed updates to the HIPAA Security Rule, published in the Federal Register in January 2025 and moving toward final rulemaking, make the expectations around AI and SaaS data flows considerably more explicit:

  • Technology asset inventory: Covered entities must maintain a current, accurate inventory of all electronic information systems, including third-party applications and AI tools that access or process PHI.
  • Data flow monitoring: The updated rule requires monitoring of data flows between covered entities and their business associates, including technology integrations. An AI tool connected to an EHR via API integration that handles PHI is a business associate relationship requiring a signed BAA and documented security monitoring.
  • Audit controls: The rule strengthens requirements for audit logging sufficient to support post-incident investigation. Organizations must be able to reconstruct what happened to PHI in the event of a suspected breach.
  • Incident response planning: IR plans must be capable of addressing the categories of incidents that covered entities are actually experiencing — a standard that increasingly includes AI agent incidents and SaaS supply chain events.

 The proposed HIPAA Security Rule update requires covered entities to maintain a current inventory of AI tools accessing PHI, monitor data flows to business associates, and maintain audit logs sufficient to support incident investigation. The Agentic Ecosystem Security Gap: 2026 CISO Report found that 73% of healthcare CISOs lack comprehensive incident response coverage for precisely these scenarios. 

The final rule is expected in 2025, with a 180-day compliance period for most covered entities. Healthcare organizations that have not extended their security architecture to cover AI and SaaS data flows will face a compliance gap on a timeline that does not leave room for gradual remediation.

What HHS OCR is actually looking for

The Change Healthcare breach of February 2024 was the largest healthcare data breach in U.S. history, affecting an estimated 190 million Americans and costing UnitedHealth Group more than $1.6 billion. HHS OCR opened an investigation. Congressional hearings followed. The enforcement posture shifted.

What is particularly relevant for healthcare CISOs is what OCR focused on: not whether UnitedHealth had a security policy, but whether it had adequate oversight of the data flows between its systems and its business associates. The third-party and integration layer was the focal point.

That focus is not unique to Change Healthcare. HHS OCR settled with a major health system for $1.19 million following a breach that originated through a third-party SaaS vendor, specifically citing failures to assess the risk of the vendor relationship and monitor the data flows between connected systems.

HHS OCR's $1.19M settlement following a third-party SaaS vendor breach cited specifically the failure to assess vendor risk and monitor data flows — the exact capabilities where healthcare CISOs report the widest gaps in the 2026 CISO survey.

The survey found that 34.6% of healthcare CISOs experienced unauthorized SaaS-to-AI data exfiltration in 2025. Under HIPAA, each of those incidents is a potential breach notification event and, given OCR's current posture, a potential investigation trigger. The question is not whether OCR will scrutinize healthcare organizations for third-party AI and SaaS incidents. It is whether those organizations can demonstrate what happened, to which data, through which systems, and what they did about it.

Most currently cannot. The survey found that only 26.9% of healthcare CISOs claim comprehensive incident response coverage for their SaaS and AI ecosystem.

The HHS cybersecurity performance goals

In 2024, HHS published Cybersecurity Performance Goals for the healthcare sector, updated in 2025, with explicit goals for SaaS security, third-party risk management, API monitoring, and incident response.

The CPGs are not yet mandatory. But HHS has signaled that CPG alignment will factor into enforcement decisions, and that organizations demonstrating alignment will be treated differently during OCR investigations. For healthcare CISOs, the CPGs are effectively a compliance roadmap: the gap between where their program is and where CPG alignment requires it to be is the same gap that creates enforcement exposure.

The 2026 CISO Report data maps directly to CPG requirements. The survey found that healthcare CISOs are reporting limitations in their current tools across all 11 security capabilities evaluated, including every capability the CPGs identify as essential.

The investment paradox

Here is the number that may be the most consequential in the entire healthcare dataset: 0%.

Not one healthcare CISO surveyed plans to increase their SaaS security budget significantly — by more than 25% — in 2026. Compare that to 7.4% overall and 14.2% in financial services. Half of healthcare CISOs plan only slight increases of less than 10%, compared to 37% overall.

Key Finding

Not one healthcare CISO plans a significant SaaS security budget increase in 2026, despite facing the highest breach costs, the highest AI agent incident rate, and the lowest AI tool visibility of any sector in the survey. 

There are organizational explanations for this. Healthcare operates on thin margins. Security budgets compete with clinical technology investment in ways that do not have equivalents in financial services. Board conversations about security investment are harder to win when the connection between security spending and patient outcomes is not always direct.

But the data creates a picture that should concern healthcare security leaders, compliance officers, and the boards they report to: the sector most exposed to the financial, regulatory, and human consequences of an AI-mediated data breach is allocating the least to close the gaps that make those consequences likely.

The updated HIPAA Security Rule, HHS CPG expectations, and OCR's new enforcement posture will make that investment gap harder to sustain.

What healthcare CISOs should do now

The HIPAA update's 180-day compliance window, combined with OCR's clear signal about third-party and AI integration oversight, creates a specific and near-term action agenda.

  • Inventory first. The updated HIPAA rule requires a current technology asset inventory covering all systems that access or process PHI, including AI tools and SaaS integrations. You cannot govern what you have not found.
  • Extend IR playbooks to the agentic ecosystem. HIPAA's 60-day breach notification requirement applies to AI-mediated PHI incidents. An IR capability that covers endpoint and network events but not AI agent incidents or SaaS integration compromises cannot meet HIPAA's requirements for the category of incidents healthcare is now experiencing most frequently. The survey found that only 23.1% of healthcare CISOs have comprehensive threat hunting and investigation coverage for their SaaS and AI ecosystem, roughly half the cross-industry average of 44%.
  • Treat AI tool governance as a BAA question. Every AI tool that accesses PHI, directly or through a SaaS integration, is a business associate relationship. BAA coverage should be confirmed for all AI tools in the inventory. For tools without BAA coverage, that is a compliance gap requiring remediation.
  • Use the HHS CPGs as a gap assessment framework. The CPGs provide a structured, HHS-endorsed framework for evaluating healthcare cybersecurity program maturity. Using them identifies the highest-priority remediation areas and creates documentation of a structured, risk-based approach that OCR values during investigations.
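The 60-day notification clock referenced in the action items above is simple to compute but easy to lose track of during a multi-system investigation. A minimal sketch, using only the standard library (the dates are illustrative assumptions):

```python
# Sketch of the HIPAA breach notification timeline: notification is due
# without unreasonable delay and no later than 60 calendar days after
# discovery. Dates here are illustrative.
from datetime import date, timedelta

HIPAA_NOTIFICATION_WINDOW_DAYS = 60

def notification_deadline(discovery: date) -> date:
    """Latest permissible notification date for a discovered breach."""
    return discovery + timedelta(days=HIPAA_NOTIFICATION_WINDOW_DAYS)

def days_remaining(discovery: date, today: date) -> int:
    """Calendar days left before the notification deadline."""
    return (notification_deadline(discovery) - today).days

discovered = date(2026, 3, 1)
print(notification_deadline(discovered))              # 2026-04-30
print(days_remaining(discovered, date(2026, 3, 15)))  # 46
```

Note that the clock starts at discovery, not containment: an organization that cannot quickly reconstruct what happened to PHI across connected systems spends its 60 days on investigation rather than on the notification assessment itself.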

For a view of how adjacent sectors are addressing the same underlying architecture problem, our analyses of financial services and insurance data from the same survey offer useful comparison points.

Read the full 2026 CISO Report


About Vorlon

Vorlon is the Agentic Ecosystem Security Platform built for enterprises where AI agents, SaaS applications, and third-party integrations are already handling sensitive patient and health data. Vorlon's patented DataMatrix™ technology builds a live model of how sensitive data, identities, and integrations interact across your environment, giving healthcare security teams the runtime visibility, forensic audit trail, and coordinated response capabilities needed to detect AI-mediated PHI incidents, assess blast radius, and reconstruct exactly what happened across every connected system.

For healthcare organizations navigating the updated HIPAA Security Rule, HHS Cybersecurity Performance Goals, and OCR's new enforcement posture on third-party and AI integration oversight, Vorlon provides the audit trail and cross-system visibility that legacy tools were not built to deliver. Learn more about Vorlon for healthcare.

See how Vorlon works


Frequently asked questions

Does HIPAA apply to AI agents that access or process protected health information?
Yes. HIPAA's Security Rule applies to all electronic protected health information, regardless of how it is accessed or processed. An AI agent that accesses ePHI, whether directly or through a SaaS integration, is subject to HIPAA's security requirements. If the AI tool is operated by a third party, it is a business associate relationship requiring a Business Associate Agreement. If an AI agent moves PHI to an unauthorized destination through a SaaS or API integration, that is a potential HIPAA breach triggering notification assessment and, depending on scope, notification obligations to affected individuals, HHS, and potentially the media.

What are the updated HIPAA Security Rule requirements for AI and SaaS data flows?
The proposed HIPAA Security Rule updates, published in the Federal Register in January 2025, require covered entities to maintain a current, accurate technology asset inventory covering all systems that access or process ePHI, including third-party AI tools and SaaS integrations. The updated rule tightens business associate agreement requirements and explicitly addresses the monitoring of data flows between covered entities and their technology partners. Audit control requirements are strengthened to ensure organizations can reconstruct what happened to ePHI during a suspected breach. The final rule is expected in 2025, with a 180-day compliance period.

What are the HHS Cybersecurity Performance Goals and are they mandatory?
The HHS Cybersecurity Performance Goals are a set of healthcare cybersecurity benchmarks published by HHS in 2024 and updated in 2025. They are currently voluntary, but HHS has signaled that CPG alignment will be considered during OCR investigations, meaning organizations that can demonstrate alignment may receive more favorable treatment during enforcement proceedings. The CPGs cover SaaS security, API monitoring, third-party risk management, and incident response for cloud and SaaS environments. Healthcare organizations that use the CPGs as a gap assessment framework will find that closing the agentic ecosystem security gap and achieving CPG alignment are largely the same project.

What does the Change Healthcare breach mean for healthcare AI security strategy?
The Change Healthcare breach shifted HHS OCR's enforcement posture toward a focus on third-party integration oversight rather than just perimeter security. OCR's investigation focused specifically on the adequacy of oversight of data flows between connected systems. Healthcare CISOs whose organizations cannot demonstrate runtime monitoring of SaaS integrations and AI tool data flows are in a weaker position during an OCR investigation than those who can.

What is a Business Associate Agreement and does it cover AI tools?
A Business Associate Agreement is a contract required by HIPAA between a covered entity and any third party that creates, receives, maintains, or transmits PHI on behalf of the covered entity. AI tools that access PHI, including AI agents connected to EHR systems, scheduling platforms, billing systems, or any other application containing patient data, require BAA coverage if operated by third parties. Many healthcare organizations have deployed AI tools without confirming BAA coverage, creating a direct HIPAA compliance gap. See our healthcare security risk guide for more detail.

Why does healthcare have the highest data breach cost of any industry?
According to IBM's 2025 Cost of a Data Breach Report, the average cost of a healthcare data breach reached $10.9 million, more than double the cross-industry average and the highest of any industry for the fourteenth consecutive year. HIPAA's mandatory breach notification requirements create compliance costs that many other sectors do not face. AI oversight gaps are increasingly contributing to extended breach detection and containment timelines. The 2026 CISO Report findings — lowest AI tool visibility, highest AI agent incident rate, weakest IR coverage — suggest the underlying conditions driving high breach costs in healthcare are not yet being systematically addressed.

How should healthcare compliance and security teams work together on AI governance?
Three areas require joint ownership. First, BAA coverage: compliance teams know the legal standard, security teams know which AI tools are active and what data they access — neither team can close the coverage gap without the other. Second, breach notification: HIPAA's 60-day notification requirement applies to AI-mediated PHI incidents, but most IR playbooks were written before AI agents were a meaningful part of the threat surface. Both teams need to jointly update notification assessment processes to cover AI agent incidents, OAuth token compromises, and SaaS integration failures. Third, CPG alignment: using the HHS CPGs as a shared framework gives both teams a common language for assessing gaps and creates documentation with direct value during OCR investigations.

All data: The Agentic Ecosystem Security Gap: 2026 CISO Report. Conducted by Consensuswide, January 27 – February 9, 2026. n=500 U.S. CISOs. Vertical subsets: n=106 Financial Services U.S. CISOs; n=62 Insurance U.S. CISOs; n=52 Healthcare and Life Sciences U.S. CISOs. 
