One employee. One agentic AI tool connected and authorized. That's all it took.

Vercel's April 2026 security bulletin describes what happened: "The incident originated with a compromise of Context.ai, a third-party AI tool used by a Vercel employee. The attacker used that access to take over the employee's Vercel Google Workspace account, which enabled them to gain access to some Vercel environments and environment variables that were not marked as 'sensitive.'"

The investigation is ongoing. Vercel is working with Google Mandiant and other cybersecurity firms. But the confirmed chain is already instructive. A third-party agentic AI vendor was compromised, an employee's OAuth-authorized access became the entry point, and internal environment variables were enumerated and used to go further.

Just the facts

I’ll be precise about what is confirmed and what isn't, because the investigation is still active.

What Vercel and Context.ai have confirmed or officially acknowledged:

Context.ai identified and blocked unauthorized access to their AWS environment in March 2026. At the time, they believed they had contained it. After Vercel disclosed their incident in April, Context.ai investigated further and concluded that during the March breach, the unauthorized actor "also likely compromised OAuth tokens for some of our consumer users." That word "likely" is Context.ai's own. This part of the chain has not been fully confirmed.

Context.ai also states the unauthorized actor "appears to have used" a compromised OAuth token to access Vercel's Google Workspace. Again, their language, not a confirmed finding.

Vercel confirmed the outcome. The attacker gained access to the employee's Google Workspace account, moved into internal environments, and accessed environment variables not marked as sensitive. Vercel CEO Guillermo Rauch explained that the attacker "got further access through their enumeration" of those variables. A limited subset of customers had credentials compromised.

How is Context.ai involved?

Context.ai is an agentic productivity platform. Its AI Office Suite deploys agents that connect to the tools you already work in, like Google Workspace, Salesforce, Slack, and code repositories. They can act on your behalf across all of them. The agents capture context from everything you connect and improve over time.

For those agents to work, Context.ai has to hold OAuth tokens on behalf of its users. The tokens sit in Context.ai's backend, ready for the agents to act whenever needed. It's how the product delivers on its promise.

The Vercel employee connected their enterprise Google Workspace account and granted "Allow All" permissions. Context.ai's own advisory notes that Vercel's internal OAuth configurations appeared to allow those broad permissions within the enterprise environment. From every system's perspective, that was a valid, authorized access relationship.

What third parties have reported, but neither company has confirmed

The link between Context.ai and Vercel was not made in either company's initial disclosures. Security researchers independently identified the connection by tying the OAuth app ID Vercel published to a now-removed Context.ai Chrome extension. Context.ai had removed that extension from the Chrome marketplace on March 27, weeks before either company made a public disclosure.

Hudson Rock separately reported that a Context.ai employee was infected with Lumma Stealer malware in February 2026, and suggested this may have been the initial foothold into Context.ai's systems. Neither company has confirmed this in their official statements.

A threat actor using the ShinyHunters persona claimed responsibility on BreachForums, stating the stolen data could enable "the largest supply chain attack ever." That post has since been removed. The real ShinyHunters denied involvement. Attribution is unconfirmed.


The access path isn't unique to Vercel

The mechanism here is not a Vercel-specific problem: a third-party AI vendor holds OAuth tokens, and when the vendor's infrastructure is compromised, those tokens become available to the attacker. It's how agentic AI tools work.

When a productivity platform needs to act on your behalf across Workspace, Slack, or Salesforce, it needs persistent authorization to do so. That authorization lives somewhere. In most cases, it lives in the vendor's backend. If that backend is compromised, the tokens go with it.

The Vercel employee didn't do anything unusual. Connecting an AI tool to a work account and granting broad permissions is a routine decision, usually made without a security review and without any record in the tools security teams actually monitor. Vercel had no direct visibility into Context.ai's AWS environment. There was no signal to detect until the attacker was already inside.

This also explains why "we have MFA" doesn't close the gap. The OAuth token the attacker used was legitimate, already authorized, and operated without prompting a new login.
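The mechanics behind that last point can be sketched in a few lines. A vendor-held refresh token is exchanged for a fresh access token with a single POST to the provider's token endpoint; no password and no MFA prompt is involved. The endpoint and grant type below are Google's documented OAuth 2.0 refresh flow; the credential values are placeholders, not anything from this incident.

```python
from urllib.parse import urlencode

# Sketch of why MFA never fires: a stored refresh token buys a new access
# token with one POST to the provider's token endpoint. No user interaction,
# no new login, no MFA challenge.
TOKEN_ENDPOINT = "https://oauth2.googleapis.com/token"

def refresh_request_body(client_id, client_secret, refresh_token):
    """Build the form body for a refresh-token grant (placeholder values)."""
    return urlencode({
        "client_id": client_id,
        "client_secret": client_secret,
        "refresh_token": refresh_token,
        "grant_type": "refresh_token",
    })

# An attacker holding a vendor's stolen tokens replays exactly this request;
# the response contains a valid access token for the victim's account.
```

Revoking the grant (not just resetting the password) is the only way to invalidate this path, which is why the containment steps later in this article start with revocation.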

What current security tools can't do
  • 87% cannot see sensitive data flows across applications
  • 86.8% cannot see what data AI tools are exchanging with SaaS apps
  • 84.8% cannot detect OAuth token or API key abuse
  • 83.4% cannot distinguish human from non-human behaviors

Source: The Agentic Ecosystem Security Gap: 2026 CISO Report


This is happening more than the news cycle shows

The Vercel breach made news because Vercel is significant infrastructure and the potential downstream blast radius is large. Most incidents following this same pattern don't make news. Some aren't detected at all.

Think about what's happening across most organizations right now.

An engineer signs up for an AI coding assistant and connects it to their GitHub repository, their CI/CD pipeline, and their cloud credentials. A customer success team adopts an AI platform that pulls from Salesforce, Zendesk, and Slack to automate account summaries. A marketing manager authorizes an AI content tool that connects to their CMS, Google Analytics, and social accounts. A sales rep links an AI outreach tool that reads their email, calendar, and CRM. A finance analyst connects an AI spreadsheet assistant to their data warehouse.

In each case, an AI vendor is now holding OAuth tokens or API keys that provide standing access to internal systems. The employee made a reasonable productivity decision. Security may not know the connection exists. And if that vendor's infrastructure is compromised — through a stolen credential, an employee infostealer infection, or a misconfigured cloud environment — those tokens are available to whoever got in.

This is the adoption reality in 2026. Agentic AI tools are growing precisely because broad, persistent, cross-system access is what makes them valuable. That value and the risk are the same thing.

Context.ai's own advisory noted that their AI Office Suite had "hundreds of users across many organizations." Vercel happened to be one of them, and happened to be significant enough that the breach surfaced publicly. The others are unknown.

The question isn't whether your organization has employees making these connections. At current adoption rates, the answer is almost certainly yes. The question is whether you can see it.

What to do now

Here's a practical response plan your team can execute using the tools you already have. It's designed to contain the access path first, then rotate the credentials that create the biggest downstream risk, and finally scope for any expansion or persistence.

1. Contain the OAuth blast radius immediately

  • Identify the installation and usage of Google Workspace OAuth app 110671459871-30f1spbu0hptbs60cb4vsmv79i7bbvqj.apps.googleusercontent.com

  • Revoke sessions for affected users

  • Block or restrict the OAuth app tenant-wide while you investigate
  • Review whether the app was admin-consented and confirm the scopes
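The first step above can be scripted. The sketch below flags every Workspace grant matching the published OAuth app ID so it can be revoked; the token records mirror the shape returned by the Admin SDK Directory API `tokens.list` method, but actually fetching and revoking them requires admin credentials and google-api-python-client, which this sketch assumes rather than includes.

```python
# Flag Workspace OAuth grants matching the OAuth app ID Vercel published,
# so each one can be revoked and the user's sessions reset.
TARGET_CLIENT_ID = (
    "110671459871-30f1spbu0hptbs60cb4vsmv79i7bbvqj"
    ".apps.googleusercontent.com"
)

def grants_to_revoke(token_records, client_id=TARGET_CLIENT_ID):
    """Return (user, scopes) pairs for every grant matching the OAuth app.

    Records follow the Admin SDK Directory API token shape:
    {"userKey": ..., "clientId": ..., "scopes": [...]}.
    """
    return [
        (rec["userKey"], rec.get("scopes", []))
        for rec in token_records
        if rec.get("clientId") == client_id
    ]

# In a live response, each match would then be revoked with something like:
#   service.tokens().delete(userKey=user, clientId=client_id).execute()
# followed by a session reset for the affected user.
```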

2. Assume credential exposure and rotate in the right order

  • CI/CD deploy tokens and build secrets
  • Source control tokens (Git providers) and deploy keys
  • Package registry tokens (npm, etc.)
  • Cloud keys and workload credentials
  • Observability and alerting integrations

3. Audit config stores for "non-sensitive" secrets

  • Export environment variables and config values across production, preview, and staging.

  • Scan for likely secret material using common naming patterns: KEY, TOKEN, SECRET, PRIVATE, PASSWORD, AUTH, BEARER, JWT, CLIENT_SECRET.

  • Assume anything questionable is compromised: move it into your secret manager, rotate it, and update downstream services to use the new value.
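The naming-pattern scan above is a one-function script. This is a minimal sketch: the input is assumed to be an exported name-to-value dict, and the pattern list is exactly the one given in step 3, not an exhaustive catalog of secret naming conventions.

```python
import re

# Flag environment-variable names that look like secret material, using
# the naming patterns from step 3. CLIENT_SECRET is covered by SECRET.
SECRET_NAME = re.compile(
    r"(KEY|TOKEN|SECRET|PRIVATE|PASSWORD|AUTH|BEARER|JWT)",
    re.IGNORECASE,
)

def flag_likely_secrets(env):
    """Return the variable names whose spelling suggests secret material."""
    return sorted(name for name in env if SECRET_NAME.search(name))

# Example: run against an exported config dump.
env = {
    "DATABASE_PASSWORD": "…",
    "STRIPE_API_KEY": "…",
    "PUBLIC_SITE_NAME": "acme",
    "GITHUB_TOKEN": "…",
}
print(flag_likely_secrets(env))
# → ['DATABASE_PASSWORD', 'GITHUB_TOKEN', 'STRIPE_API_KEY']
```

Anything flagged gets the step-3 treatment: move it into the secret manager, rotate it, and update downstream services.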


4. Hunt for access expansion signals

  • New OAuth grants, new apps, scope changes
  • Token creation and key generation events
  • Admin actions that don't match normal patterns
  • Bursts of enumeration (many reads/list operations over short windows)
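That last hunting signal, enumeration bursts, reduces to a sliding-window count over audit-log timestamps. A minimal sketch, assuming read-event timestamps in epoch seconds; the window size and threshold here are illustrative, not tuned recommendations.

```python
from collections import deque

def detect_bursts(read_timestamps, window_seconds=60, threshold=100):
    """Return True if any window of `window_seconds` holds >= threshold reads."""
    window = deque()
    for ts in sorted(read_timestamps):
        window.append(ts)
        # Drop events that have aged out of the sliding window.
        while window and ts - window[0] > window_seconds:
            window.popleft()
        if len(window) >= threshold:
            return True
    return False

# A read every 30 seconds is normal; 30 reads in 30 seconds is a burst.
quiet = list(range(0, 600, 30))
noisy = quiet + list(range(1000, 1030))
print(detect_bursts(quiet, threshold=25), detect_bursts(noisy, threshold=25))
# → False True
```

The same pattern applies to list operations against repositories, config stores, or SaaS APIs; the attacker's enumeration of Vercel's environment variables is exactly this signal.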

5. Reduce future blast radius

  • Require approval workflows for new OAuth apps and scope escalations, especially AI tools
  • Limit offline access and refresh tokens where feasible
  • Separate preview and staging credentials from production
  • Default to treating environment variables as sensitive unless there's a specific reason not to

How Vorlon accelerates agentic AI governance

You can run the steps above manually. The problem is time and confidence: the evidence is scattered across identity, SaaS admin planes, CI/CD, and dozens of connected apps.

Vorlon helps teams move faster in three places.

  1. Immediate visibility into the integration blast radius. See which third-party apps and AI tools have OAuth access, what scopes they have, and which identities authorized them. Surface dormant but privileged authorizations before they become incident scope.
  2. Faster scoping of what the agent touched. Correlate activity across human identities, tokens, service accounts, and connected apps. Spot abnormal enumeration and unusual data movement across the ecosystem, not just within a single application.
  3. Targeted remediation. Prioritize revocation and rotation by blast radius so you contain quickly without breaking business workflows.

The question to ask this week

Do you know which AI agents have OAuth access to Google Workspace, Salesforce, or your code repositories right now? Could you enumerate them in an afternoon and revoke access within minutes if you had to?

If the honest answer is no, the Vercel incident shows exactly what that gap looks like when it's used.

Know where your data flows. Vorlon.io


About the survey data

Statistics cited in this article are drawn from The Agentic Ecosystem Security Gap: 2026 CISO Report. The survey was conducted by Censuswide, an independent research firm and member of the Market Research Society (MRS) and British Polling Council (BPC), adhering to the MRS Code of Conduct and ESOMAR principles. It surveyed 500 U.S. CISOs across all major industry verticals between January 27 and February 9, 2026. All respondents represented organizations with 500 or more employees. The survey covered SaaS and AI ecosystem security posture, tooling, incidents, and preparedness for 2025. All statistics are verified against raw survey data.

Get Proactive Security for Your Agentic Ecosystem