Supply Chain | Apr 15, 2026

‘By Design’ Flaw in MCP Could Enable Widespread AI Supply Chain Attacks

Architectural flaw in Anthropic's MCP allows unsanitized command execution, enabling AI supply chain attacks.

Summary

Researchers at OX Security discovered a critical architectural flaw in Anthropic's Model Context Protocol (MCP) that allows arbitrary commands to execute silently without sanitization, potentially compromising millions of downstream users. The flaw affects STDIO MCP servers across enterprises using agentic AI, and despite coordinated disclosure efforts, Anthropic has declined to fix the underlying design issue, instead recommending cautious use. OX warns this could enable widespread data theft, API key exposure, malware installation, and system takeover across the AI supply chain.

Full text

Model Context Protocol (MCP) has been a boon to agentic AI users and is widely trusted by companies adopting agentic AI internally. Introduced by Anthropic in November 2024, it provides a standard connector between agents and data sources. Enterprises deploy it locally, most commonly as a local STDIO MCP server, to avoid the pain of developing their own connectors. There are multiple providers of MCP servers, almost all inheriting Anthropic's code.

The problem, reports OX Security, is what it terms an architectural flaw in Anthropic's MCP code embedded within most of these local STDIO MCPs. In a nutshell, OX Security says this flaw can result in a complete adversarial takeover of the user's computer system. "And the exploit mechanism is straightforward, MCP's STDIO interface was designed to launch a local server process. But the command is executed regardless of whether the process starts successfully," reports OX. "Pass in a malicious command, receive an error – and the command still runs. No sanitization warnings. No red flags in the developer toolchain. Nothing."

OX tested extensively whether this 'flaw' was exploitable, succeeded repeatedly, and disclosed its findings to the MCP providers, from Anthropic downward. Initially it received little response. Eventually, the common response was inaction coupled with the suggestion that this behavior was 'by design'.

But OX discovered, and demonstrated, that this 'by design' behavior could be easily exploited, leaving potentially millions of downstream users exposed to theft of sensitive data, API keys, and internal corporate data, the exposure of chat histories, and more. If the failed process that MCP launched included malware, that malware could be silently installed, potentially leading to complete system takeover.
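The risky pattern OX describes can be illustrated with a short sketch. This is not Anthropic's actual code; it is a hypothetical client-side launcher showing how an attacker-controlled command string in a server configuration executes as a side effect of the launch attempt, even when the resulting process is not a valid MCP server and the handshake later fails.

```python
# Illustrative sketch only -- a hypothetical STDIO launcher, not the real
# MCP SDK. It models the unsafe pattern: the command string from the
# (possibly attacker-supplied) server config is run with no sanitization.
import subprocess

def launch_stdio_server(config: dict) -> subprocess.Popen:
    """Launch a local 'STDIO MCP server' from a config dict.

    Danger: config["command"] and config["args"] are trusted blindly.
    Even if the process turns out not to speak MCP and the client later
    reports a handshake error, the command has already executed.
    """
    return subprocess.Popen(
        [config["command"], *config.get("args", [])],
        stdin=subprocess.PIPE,
        stdout=subprocess.PIPE,
    )

# A "server" entry that is really an arbitrary command. The client would
# surface an error -- but the side effect has already happened.
malicious_config = {"command": "touch", "args": ["/tmp/pwned-by-mcp"]}
```

The point of the sketch is the ordering: process creation happens first, protocol validation (if any) happens afterward, so an error result gives the user no protection against the command itself.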
Eventually, the only apparent action from Anthropic was to quietly update its security guidance to recommend MCP adapters be used 'with caution' – "leaving the flaw intact and shifting responsibility to developers".

This is an interesting position to take. It suggests that developers are responsible for the security of what they develop, which is fair. It possibly also suggests that any company breached this way is not the responsibility of Anthropic, but the fault of a misconfigured MCP installation – which certainly does happen. And to be fair, GitHub's own installation was an exception to the OX testing, proving that security gating on installation is possible.

But the sheer volume of successful compromises conducted by OX demonstrates that the developers installing MCP servers are failing to install them securely. This should be no surprise when AI is automating so many aspects of security and lowering the bar of security competence among developers.

The OX position is that Anthropic should take responsibility and fix this 'architectural flaw' itself. Without doing so, it leaves the industry open to "the mother of all supply chain attacks", starting from Anthropic, fanning out to many thousands of local MCP users, and from those compromised systems to who knows how many other servers.

During its research, OX adopted a coordinated disclosure process, leading to more than 30 accepted disclosures and more than 10 high and critical vulnerabilities patched. But the underlying design flaw, it says, remains, leaving millions of users and thousands of systems exposed to unauthorized access.
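OX's argument is that the gate belongs in the protocol layer, not in each developer's configuration discipline. A minimal sketch of what such a gate could look like, assuming a hypothetical allowlist of server runtimes and an explicit opt-in for anything else (the function name, allowlist, and `dangerous_mode` flag are all illustrative, not part of any real MCP implementation):

```python
# Hypothetical pre-launch guard for STDIO server commands -- a sketch of
# the kind of protocol-level gating OX recommends, not real MCP code.
import shutil

# Illustrative allowlist of runtimes commonly used to launch MCP servers.
ALLOWED_SERVER_BINARIES = {"npx", "uvx", "node", "python3", "sh"}

def validate_stdio_command(command: str, dangerous_mode: bool = False) -> str:
    """Resolve and gate a STDIO server command before anything executes.

    Returns the resolved path if the command is allowlisted (or the
    caller has explicitly opted in to dangerous mode); raises otherwise.
    """
    resolved = shutil.which(command)
    if resolved is None:
        raise ValueError(f"command not found: {command!r}")
    if command not in ALLOWED_SERVER_BINARIES and not dangerous_mode:
        raise PermissionError(
            f"{command!r} is not an allowlisted MCP server runtime; "
            "require an explicit dangerous-mode opt-in to run it"
        )
    return resolved
```

Because validation happens before process creation, a malicious config entry fails loudly instead of executing as a side effect.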
“The current implementation of the Model Context Protocol places the entire burden of security on the downstream developer – a structural failure that guarantees vulnerability at scale.”

The OX report on its findings includes details on how Anthropic could solve the problem: deprecating unsanitized STDIO connections, introducing protocol-level command sandboxing, adding an explicit 'dangerous mode' opt-in, and developing marketplace verification standards that include a standardized security manifest. In the meantime, any company adopting STDIO MCP as part of an agentic AI development should do so 'with caution'.

Related: Anthropic MCP Server Flaws Lead to Code Execution, Data Exposure
Related: Top 25 MCP Vulnerabilities Reveal How AI Agents Can Be Exploited
Related: The New Rules of Engagement: Matching Agentic Attack Speed
Related: Anthropic Unveils ‘Claude Mythos’ – A Cybersecurity Breakthrough That Could Also Supercharge Attacks

Written By Kevin Townsend

Kevin Townsend is a Senior Contributor at SecurityWeek. He has been writing about high tech issues since before the birth of Microsoft. For the last 15 years he has specialized in information security, and has had many thousands of articles published in dozens of different magazines – from The Times and the Financial Times to current and long-gone computer magazines.

Entities

Anthropic (vendor)
Model Context Protocol (MCP) (product)
Claude (product)
OX Security (vendor)
STDIO (technology)