Agentic security was the defining theme of RSAC 2026. That much has been well-reported. But theme coverage rarely captures what actually happens when you sit through the sessions themselves. I spent time at four of the most attended presentations of the week: Cisco, CrowdStrike, Microsoft, and a joint session from Okta and Box. These aren't scrappy startups staking out territory. They're the companies with the largest security R&D budgets, the deepest enterprise customer relationships, and the longest runways to study this problem. I wanted to understand how the best-resourced teams in the industry are framing agentic security: what they agree on, where their thinking diverges, and where the honest gaps are.
What I found was more interesting than a product announcement cycle. There was genuine convergence on the diagnosis, real differences in approach, and one unresolved problem that every presenter named but none solved.
Here's what I took away.1
1. Discovery is not optional. It's the prerequisite for everything else.
Every session opened the same way. Microsoft's Neta Haiby asked the room to raise their hands if they were already using AI agents. Every hand went up. Then she asked how many could actually observe what those agents were doing. Nobody raised their hand.2
That exchange landed because it made the problem immediate. We're not talking about a risk that's coming. It's already inside the organization. The agents are running. The data is moving. Most security teams just can't see it.
Okta's Sandeep Kumbhat put the scale in concrete terms. From Q1 2024 to late 2025, enterprise adoption of AI agents went from 1% of organizations to 91%.3 Anything but gradual, it's happening faster than any security frameworks, governance programs, or tooling could possibly follow.
The first move, before any security control makes sense, is to find out what you actually have. Not just the agents IT deployed. The ones developers built and connected to production systems without a security review. The ones business units stood up using low-code platforms. The ones that came bundled with SaaS tools your procurement team approved six months ago. Start there.
2. Agents need to be treated as identities. Right now, not eventually.
Cisco, Microsoft, and Okta arrived at this conclusion independently and expressed it in nearly identical language: an agent without an assigned identity is an agent you cannot govern, trace, or remediate.
CrowdStrike took the implications further with a threat category they call LOTAIL (Living Off the AI Land). The idea is that an adversary doesn't need to compromise an agent directly. They find one that's already highly permissioned, already operating with valid credentials, already doing things that look like normal agent behavior, and they use it. There's no anomaly to detect because the agent looks exactly like it's supposed to. The reason it works is that most agents lack an identity baseline, behavioral history, and an audit trail. There's nothing to compare against.
The fix is conceptually straightforward. Assign every agent an identity, give it an owner, scope its permissions to the minimum required for its actual job, and manage it through a lifecycle the same way you manage a human user account. When that owner leaves the organization, the agent's ownership transfers. When the agent's job changes, its permissions change. When it's no longer needed, it gets decommissioned.
Most organizations aren't close to this yet. But the frameworks exist. IAM teams already know how to do this for humans. Extending that discipline to agents is the starting point.
3. Zero Trust needs to evolve. The question has changed.
Cisco's Matt Caulfield made the cleanest architectural argument of the four sessions. Zero Trust was built to answer one question: is this identity authenticated? For agents, that question isn't enough. Agents don't just access resources. They take actions, chain workflows across systems, and make decisions based on their own reasoning. The relevant question isn't whether the agent is authenticated. It's whether this specific action, at this specific moment, is consistent with what the agent was built to do.
That shift from "verify identity" to "verify intent" has real consequences for how you build controls. Resource-level authorization, which most access governance tools use, isn't sufficient for agents. You need action-level authorization. The agent's identity tells you who it is. Its intent profile tells you what it's allowed to do.
Microsoft's Neta Haiby introduced a framework for defining that intent across four dimensions: organizational (your policies and boundaries), role-based (what the agent's job actually is), developer (why it was built and what constraints were built in), and user (what the person is asking for in the moment).4 The hierarchy matters: organizational intent takes precedence over all others. An agent that's acting within its authentication scope but outside its organizational intent is a security event, not a product issue, not a user error.
The car dealership story she told illustrated this better than any technical diagram. A dealer deployed an agent to assist customers in choosing vehicles. Within hours, users were using it to write Python code, ask it to recommend competitor cars, and ultimately to purchase a vehicle for $1. The agent was fully authenticated, operating within its technical permissions, and completely off the rails from its intended purpose. No existing access control would have caught it.
4. MCP is the new supply chain. Treat it like one.
Okta's Kumbhat called MCP "USB-C for AI," a universal connector that standardizes how agents interface with tools and data sources. The analogy is good. So is the risk profile that comes with it.
The MCP server registry grew from 100 servers to over 6,000 in roughly twelve months.5 Every one of those servers is a potential supply chain attack vector, and most organizations have no formal process for evaluating them before connection.
CrowdStrike documented what happens when that gap gets exploited. A supply chain attack they analyzed involved the top-downloaded OpenClaw skill on ClawHub, which appeared entirely legitimate. It introduced a required dependency that linked to malicious infrastructure. When an agent executed it, the dependency downloaded macOS data exfiltration malware.6 Zero alerts fired. The agent was using sanctioned APIs with valid credentials. Nothing in the behavior deviated from what an authorized agent looks like.
They also documented something called tool shadowing. An attacker publishes a tool whose description shapes how the LLM reasons about a completely separate legitimate tool. In the example from their slides, a fake calculate_metrics tool included a hidden instruction to add the attacker's address to the BCC field whenever a legitimate email tool was used. The malicious tool never ran. It just changed what the real tool did by influencing the agent's reasoning before execution.
Neither attack looks like an attack in any traditional telemetry. Both are live in the wild today.
Treat every MCP server as you would a third-party software dependency. Scan it before you connect it. Sandbox it before you trust it. Register it before it touches production.
5. Behavioral baselines are the unsolved problem. And everyone knows it.
Every presenter, to include Cisco, CrowdStrike, Microsoft, and Okta, recommended that organizations establish behavioral baselines for their agents. Know what normal looks like. Detect when agents deviate. Respond before the damage spreads.
Not one of them explained how to do it.
That's an honest reflection of where the field is. Behavioral baselines for AI agents are fundamentally harder than behavioral baselines for humans or traditional applications. A deterministic application produces predictable telemetry. An AI agent reasons its way through tasks, deciding which APIs to call, which data to retrieve, which actions to chain, and its behavior changes as its context changes. It's non-deterministic by design. Defining "normal" for something that doesn't behave the same way twice requires something more than a log of past actions.
VentureBeat's post-conference analysis put it plainly: "No vendor currently provides an out-of-the-box behavioral baseline to define 'normal' agent activity."7
That observation captures the industry's honest state following RSAC 2026. The diagnosis is sharp. The frameworks are useful. The hardest operational problem, how to actually detect when an agent is behaving in a way it shouldn't, remains open.
The five points form a sequence: find your agents, give them identities, evolve your access controls, lock down your MCP supply chain, and build behavioral baselines. The first four are tractable today using existing frameworks that have been extended to cover agents. The fifth requires architecture that doesn't yet exist at scale in any major platform.
That's the work ahead. The industry named it clearly at RSAC 2026.
Footnotes
1. Sessions analyzed: Cisco, "From Chatbots to Change Agents: Securing Agentic AI Operations" (Matt Caulfield, VP Product Management, Identity; Kevin Kennedy, VP Product and Solutions, Security); CrowdStrike, "The Post-Prompt World: Securing AI Agents" (Oliver Friedrichs, GM AIDR; Sourabh Satish, VP Engineering); Microsoft, "Security, Governance and Control for Agentic AI" (Neta Haiby, Partner PM AI Security; Tina Ying, Director Product Marketing); Okta and Box, "Shadow AI: Securing the Rise of the Autonomous Super Admin" (Sandeep Kumbhat, Global Field CTO, Okta; Akhila Nama, Head of Enterprise Security, Box). All sessions, RSAC 2026.
2. Verbatim from session transcript, Microsoft RSAC 2026. Neta Haiby to live audience.
3. Verbatim from session transcript, Okta/Box RSAC 2026. Sandeep Kumbhat: "If you look at the AI agent enterprise adoption in 2024, which is the first quarter of 2024, we're seeing 1% of the organizations adopting AI technologies of any form. And you go fast forward seven quarters, you see 91% of these organizations have been adopting AI in some capacity."
4. Verbatim from session transcript, Microsoft RSAC 2026. Framework introduced by Neta Haiby.
5. Okta RSAC 2026 presentation slides. Exact timeline described as approximately twelve months.
6. CrowdStrike RSAC 2026 presentation slides. Source cited in slide: Jason Meller, "From Magic to Malware: How OpenClaw's Agent Skills Become an Attack Surface," 1Password Blog, February 2, 2026. https://1password.com/blog/from-magic-to-malware-how-openclaws-agent-skills-become-an-attack-surface
7. VentureBeat post-conference analysis, RSAC 2026. "CrowdStrike, Cisco and Palo Alto Networks all shipped agentic SOC tools at RSAC 2026 — the agent behavioral baseline gap survived all three."



