Check Point Research team found multiple vulnerabilities in Anthropic’s Claude Code AI coding assistant that could lead to remote code execution and API key theft. The vulnerabilities abuse features such as Hooks, MCP servers, and environment variables to run arbitrary shell commands and exfiltrate Anthropic API credentials when users clone and open untrusted repositories.
“Critical vulnerabilities, CVE-2025-59536 and CVE-2026-21852, in Anthropic’s Claude Code enabled remote code execution and API key theft through malicious repository-level configuration files, triggered simply by cloning and opening an untrusted project.” reads the report published by Check Point Research.
“Built-in mechanisms—including Hooks, MCP integrations, and environment variables—could be abused to bypass trust controls, execute hidden shell commands, and redirect authenticated API traffic before user consent”
Researchers found that Claude Code’s project-level configuration files can act as an execution layer, allowing the attackers to abuse a single malicious repository as an attack vector. Simply cloning and opening a crafted repo could trigger hidden commands, bypass consent safeguards, steal Anthropic API keys, and pivot from a developer’s workstation into shared enterprise cloud environments, without visible warning.
The risks include silent command execution via abused Hooks, consent bypass in the Model Context Protocol (CVE-2025-59536), and API key exfiltration before trust confirmation (CVE-2026-21852), potentially exposing broader AI-driven workflows.
Anthropic’s API Workspaces feature lets multiple API keys share access to cloud-stored project files. Since files belong to the entire workspace and not just one API key, stealing a single key could let attackers access, change, or delete shared data, upload harmful content, and create unexpected charges. This behavior puts the whole team at risk, not just one developer.
The flaws highlight a new AI supply chain threat: repository configuration files now act as execution logic, so simply opening an untrusted project can trigger abuse. Anthropic addressed the issues by tightening trust prompts, blocking external tool execution, and restricting API calls until user approval.
“AI-powered coding tools are rapidly becoming part of enterprise development workflows. Their productivity benefits are significant, but so is the need to reassess traditional security assumptions.
Configuration files are no longer passive settings. They can influence execution, networking, and permissions.” concludes the report. “As AI integration deepens, security controls must evolve to match the new trust boundaries.”
Follow me on Twitter: @securityaffairs and Facebook and Mastodon
(SecurityAffairs – hacking, Claude)