A vulnerability researchers are calling ClaudeBleed allows any Chrome extension — one with zero special permissions — to hijack Claude's AI agent and instruct it to exfiltrate files from Google Drive, forward Gmail messages, steal private GitHub code, and send email as the victim. The attack requires no user interaction beyond having Claude's extension installed.
Claude's Chrome extension has over 7 million users. The flaw was disclosed May 9, 2026 by Travis Lelle and analyzed by LayerX Security. Anthropic released a patch in version 1.0.70 on May 6 — researchers bypassed it within three hours.
This is a "confused deputy" attack. The browser extension is the deputy — it holds permissions and trust that the malicious extension does not. By sending a single crafted message to the Claude extension, an attacker inherits those permissions without ever requesting them from the browser or the user.
How It Works
Claude's extension declares externally_connectable in its manifest, allowing external Chrome extensions to send it messages — provided those messages appear to come from https://claude.ai/*. The flaw: a malicious extension can inject a content script into any open claude.ai tab, and that content script runs in the trusted page context. From there it can relay messages that the Claude extension cannot distinguish from legitimate ones.
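For context, an `externally_connectable` declaration of the shape at issue might look like the following `manifest.json` fragment (illustrative values only, not Anthropic's actual manifest):

```json
{
  "manifest_version": 3,
  "name": "Claude (illustrative)",
  "externally_connectable": {
    "matches": ["https://claude.ai/*"]
  }
}
```

The `matches` key allowlists which web pages may message the extension. As the researchers describe, that allowlist is exactly what the attack sidesteps: a message relayed through a claude.ai tab appears to originate from the trusted origin.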
The attack payload is a single JavaScript call:
```javascript
// Sent from the malicious extension. Messaging another extension
// by its ID requires no declared permissions at all.
chrome.runtime.sendMessage(CLAUDE_EXTENSION_ID, {
  type: "onboarding_task",
  payload: "<malicious instructions>"
});
```
Claude's extension receives this, treats it as a legitimate task from the claude.ai page, and executes it using all the AI agent's connected capabilities — which for most users include Google Drive, Gmail, and GitHub through Claude's integration layer.
What Attackers Demonstrated
LayerX researchers successfully demonstrated the following using a zero-permission extension:
- Exfiltrate files from Google Drive
- Access and forward Gmail messages to an attacker-controlled address
- Steal private GitHub repository code
- Send email as the victim
- Delete all evidence of the above activity
The extension performing these actions appeared in Chrome's permission dialog as a low-risk install — no access to browsing history, no access to tabs, no access to any site data. The abuse happens entirely through the trusted channel to Claude's extension.
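To illustrate why the install dialog raises no flags, a manifest with no `permissions`, `host_permissions`, or `content_scripts` entries triggers no permission warnings in Chrome at all, yet the extension can still call `chrome.runtime.sendMessage` toward another extension. A minimal sketch (names illustrative):

```json
{
  "manifest_version": 3,
  "name": "Harmless-looking extension",
  "version": "1.0",
  "background": { "service_worker": "bg.js" }
}
```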
Figure: chrome.runtime.sendMessage delivers a crafted task payload that Claude's extension cannot verify.

Why the Patch Failed
Anthropic shipped version 1.0.70 on May 6 in response. LayerX bypassed it within three hours by switching to "privileged" mode and exploiting side-panel initialization flows, a secondary trust boundary with the same fundamental weakness. The root issue was not addressed: the extension trusts messages based on origin alone, without verifying execution context or sender identity.
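As a sketch of the kind of check a root-cause fix implies, an external-message handler could reject any sender that carries an extension ID and validate the full sender URL rather than a reported origin string. All names here are illustrative assumptions, not Anthropic's code:

```javascript
// Illustrative sender check for a chrome.runtime.onMessageExternal handler.
// Assumption: legitimate messages come only from the claude.ai web page,
// never from another extension, so any sender carrying an extension id
// is rejected outright.
function isTrustedSender(sender, allowedOrigin) {
  if (sender.id) return false; // another extension sent or relayed this
  if (typeof sender.url !== "string") return false;
  let url;
  try {
    url = new URL(sender.url); // parse the full URL; don't trust a bare origin field
  } catch {
    return false;
  }
  return url.origin === allowedOrigin;
}

// Wiring it up (illustrative):
// chrome.runtime.onMessageExternal.addListener((msg, sender, reply) => {
//   if (!isTrustedSender(sender, "https://claude.ai")) return;
//   handleTask(msg);
// });
```

Note that this only checks sender identity, not execution context; a message relayed through the page's own scripts would still pass, which is the architectural gap the researchers describe.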
Structural Weaknesses
LayerX characterized the architecture as systematically flawed, not a one-off bug. The researchers note that three major vulnerabilities emerged in Claude's extension within six months. This is a systemic architectural problem, not an isolated oversight: the extension's trust model was not designed with adversarial extensions in mind.
Immediate Mitigations
Until Anthropic ships a fix that addresses the underlying architecture:
- Audit every Chrome extension in your environment — remove anything non-essential
- Disconnect Claude's integrations with Drive, Gmail, and GitHub from the Claude settings panel if you don't actively use them
- Treat any zero-permission extension in the Chrome Web Store as a potential attack vector if Claude's extension is also installed
- Enterprise teams: consider blocking the Claude extension via policy until the underlying trust model is fixed
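For managed Chrome deployments, blocking a specific extension is typically done with the `ExtensionInstallBlocklist` enterprise policy. A policy fragment might look like the following (the extension ID is a placeholder; substitute the real ID from the Chrome Web Store listing):

```json
{
  "ExtensionInstallBlocklist": ["<claude-extension-id>"]
}
```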
The combination of AI agents with broad integration access and browser extensions with weak message authentication is a new and largely uncharted attack surface. ClaudeBleed is the clearest demonstration so far that the threat is real, the exploitation is straightforward, and the patches can be bypassed in hours.