Cloud Security · Apr 24, 2026

Cracks in the Bedrock: Agent God Mode

Unit 42 discovers "Agent God Mode" privilege escalation flaw in Amazon Bedrock AgentCore starter toolkit.

Summary

Palo Alto Networks' Unit 42 revealed a critical privilege escalation vulnerability in Amazon Bedrock AgentCore's starter toolkit, where auto-generated IAM roles grant overly broad permissions across AWS accounts rather than adhering to least privilege principles. An attacker compromising a single agent could escalate privileges, exfiltrate ECR images, access other agents' memories, and invoke code interpreters. AWS has updated documentation to warn that default roles are only for development and testing, not production use.

Full text

By Ori Hadad · Published April 8, 2026

Executive Summary

Our first article about the boundaries and resilience of Amazon Bedrock AgentCore focused on the Code Interpreter sandbox and how it can be bypassed using DNS tunneling. In this second part, we examine the identity and permissions model of AgentCore and the AgentCore starter toolkit, which AWS describes as "a Command Line Interface (CLI) toolkit that you can use to deploy AI agents to an Amazon Bedrock AgentCore Runtime." The toolkit abstracts backend provisioning complexity by automating the creation of runtimes, Amazon Elastic Container Registry (ECR) images and execution roles.

We discovered that the toolkit's auto-create logic generates identity and access management (IAM) roles whose privileges apply broadly across the AWS account rather than being scoped to individual resources. While the toolkit makes it easy to get started with AgentCore, the default deployment configuration favors ease of deployment over strict adherence to the principle of least privilege.

The starter toolkit's default deployment configuration introduces an attack vector that we call Agent God Mode, because the overly broad IAM permissions effectively grant an individual agent the "omniscient" ability to escalate privileges and compromise every other AgentCore agent within the AWS account. Our investigation uncovered a multi-stage attack chain that exploits this excessive access.
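To see why account-wide wildcard grants are dangerous, it helps to look at how a wildcard resource pattern matches ARNs. The sketch below is illustrative only: real IAM policy evaluation is more involved, but its `Resource` wildcards behave much like glob patterns, and the account IDs and victim ARN here are hypothetical examples.

```python
from fnmatch import fnmatch

# The wildcard memory resource from the toolkit's default policy.
WILDCARD_RESOURCE = "arn:aws:bedrock-agentcore:*:memory/*"

# Hypothetical ARNs: the compromised agent's own memory, and a
# different agent's memory in the same account.
own_memory = (
    "arn:aws:bedrock-agentcore:us-east-1:111122223333:"
    "memory/ori_agent_01_mem-AsDiQiDikR"
)
other_memory = (
    "arn:aws:bedrock-agentcore:us-east-1:111122223333:"
    "memory/victim_agent_mem-XXXX"
)

# The wildcard grants access to both -- the agent's own memory and
# every other agent's memory the pattern can reach.
print(fnmatch(own_memory, WILDCARD_RESOURCE))    # True
print(fnmatch(other_memory, WILDCARD_RESOURCE))  # True
```

A least-privilege policy would instead pin `Resource` to the single memory ARN the agent actually owns, which the glob model makes obvious: a fully specified ARN matches only itself.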
We found that an attacker who compromises an agent could:

  • Exfiltrate proprietary ECR images
  • Access other agents' memories
  • Invoke every code interpreter
  • Extract sensitive data

We disclosed our findings to the AWS Security team. Following our disclosure, the AWS documentation was updated to include a security warning stating that the default roles are "designed for development and testing purposes" and are not recommended for production deployment, as shown in Figure 1.

Figure 1. AWS starter toolkit updated documentation warning note.

Palo Alto Networks customers are better protected from the threats discussed in this article through the following products and services:

  • Cortex AI-SPM
  • Cortex Cloud Identity Security
  • The Unit 42 AI Security Assessment
  • The Unit 42 Cloud Security Assessment

If you think you might have been compromised or have an urgent matter, contact the Unit 42 Incident Response team.

Related Unit 42 Topics: Cloud, IAM, Privilege Escalation

Technical Analysis

Identity and permissions are two of the most critical pillars for setting boundaries and maintaining isolation in cloud workloads and applications. Below, we explain the default IAM roles and permissions that the AgentCore starter toolkit provisions, and demonstrate how compounding attack primitives ultimately enables a full attack chain.

The Default Deployment Architecture

We began our analysis by evaluating the default IAM roles that the toolkit's setup process automatically generates. The agentcore launch command automates the infrastructure provisioning required for an AI agent. Based on the user's configuration, the toolkit creates:

  • The AgentCore Runtime
  • A memory store
  • An ECR repository
  • An IAM execution role

Figure 2 shows this configuration, created with the Agent Name ori_agent_01.

Figure 2. Starter toolkit configuration.

Upon execution, the toolkit confirms the deployment and associated resources, as shown in Figure 3.

Figure 3. Starter toolkit deployment.
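One way to catch this class of misconfiguration before an agent reaches production is to lint the generated execution role's policy document for wildcard resource grants. A minimal sketch follows; the policy document in it is a hypothetical example modeled on the statements shown in Figures 4–6, not the toolkit's exact output.

```python
def find_wildcard_statements(policy: dict) -> list[str]:
    """Return a finding for each Allow statement whose Resource
    contains a wildcard, i.e., an account-wide grant."""
    findings = []
    for stmt in policy.get("Statement", []):
        if stmt.get("Effect") != "Allow":
            continue
        resources = stmt.get("Resource", [])
        if isinstance(resources, str):
            resources = [resources]
        for res in resources:
            if "*" in res:
                actions = stmt.get("Action", [])
                if isinstance(actions, str):
                    actions = [actions]
                findings.append(f"{', '.join(actions)} on {res}")
    return findings

# Hypothetical policy modeled on the article's figures; the real
# auto-generated policy may differ in structure and action names.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["bedrock-agentcore:GetMemory",
                       "bedrock-agentcore:RetrieveMemoryRecords"],
            "Resource": "arn:aws:bedrock-agentcore:*:memory/*",
        },
        {
            "Effect": "Allow",
            "Action": "ecr:BatchGetImage",
            "Resource": "arn:aws:ecr:*:repository/*",
        },
    ],
}

for finding in find_wildcard_statements(policy):
    print("wildcard grant:", finding)
```

A check like this can run in CI against the role the toolkit creates; any hit is a prompt to replace the wildcard with the specific memory, interpreter or repository ARN the agent actually needs.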
Although the toolkit simplifies setup, the auto-create configuration for the execution role introduces a significant security risk.

Cross-Agent Data Access

AgentCore agents rely on memory resources to store both long- and short-term conversation state and context. An attacker who gains read access to this resource could exfiltrate sensitive interaction data between the AI agent and its users. The default IAM policy generated by the toolkit reveals the permission set, as Figure 4 shows.

Figure 4. BedrockAgentCoreMemory policy statement.

The policy applies actions such as GetMemory and RetrieveMemoryRecords to the wildcard memory resource arn:aws:bedrock-agentcore:*:memory/*. This effectively allows any agent assigned this policy to read the memories of all other agents in the account. Because the default role permits access to "*", any AI agent can read or poison the state of any other AI agent in the account. The only remaining piece required for exploitation is knowledge of the target's unique MemoryID.

Indirect Privilege Escalation

AgentCore Runtime uses Code Interpreter to execute dynamic logic. Crucially, these interpreters operate under their own distinct IAM roles, separate from the Agent Runtime. This means that when an agent invokes an interpreter, the resulting actions are performed with the interpreter's permissions, not the agent's. The default policy grants the InvokeCodeInterpreter action on all Code Interpreter resources (*), as Figure 5 shows.

Figure 5. BedrockAgentCoreCodeInterpreter policy statement.

These permissions introduce the risk of a direct exploitation cycle. Using a compromised AI agent, an attacker could perform reconnaissance to list available interpreters, identify a high-privileged target, and attempt to pivot by executing code within that context.

ECR Exfiltration

Perhaps the most critical finding relates to the Elastic Container Registry (ECR).
Because AgentCore Runtimes are distributed as Docker images, the default policy grants the AI agent unrestricted ability to pull images from any repository (arn:aws:ecr:*:repository/*) in the account. Figure 6 details this part of the policy.

Figure 6. ECR policy statements.

This configuration creates a high-risk exfiltration vector. From a compromised agent, an attacker could generate an authentication token and use it to download source code, proprietary algorithms, internal files and other sensitive data from the images of other agents and unrelated workloads across the entire account.

First, the attacker retrieves a valid ECR authorization token, as Figure 7 shows.

Figure 7. Retrieve authorization token using agent's role.

With these credentials, the attacker authenticates the Docker CLI and pulls the image of a target agent – or any other container in the registry – as detailed in Figure 8.

Figure 8. Pulling another agent's image using a previously retrieved token.

After downloading the image, the attacker has full read access to the target's file system, as Figure 9 shows.

Figure 9. Exploring image content.

Bypassing the Memory ID Barrier

As noted in the Cross-Agent Data Access section, the primary barrier to cross-agent memory poisoning is the obscurity of the target's MemoryID. The ECR exfiltration vulnerability eliminates this constraint. As Figure 10 shows, an attacker can recover configuration details baked into the container or environment files by performing static analysis on the downloaded Docker image.

Figure 10. Extracting memory ID.

The env-output.txt file found within the image contains the target identifier:

BEDROCK_AGENTCORE_MEMORY_ID=ori_agent_01_mem-AsDiQiDikR

The Kill Chain

By abusing the default permission configurations, an attacker could:

  • Exfiltrate: Leverage ECR permissions to download the image of a high-value target.
  • Extract: Recover the MemoryID from the container's static configuration.
  • Execute: Use the ID to dump or poison the target's conversation history.

This completes the attack vector. The AgentCore starter toolkit's God Mode permissions allow an attacker who compromises an initial agent to exfiltrate a target's source code, extract the specific resource identifiers it needs, and read or poison the target's memory.

Indicators of Compromise

  • MITRE ATT&CK T1087: Account Discovery
  • MITRE ATT&CK T1528: Steal Application Access Token
  • MITRE ATT&CK T1548: Abuse Elevation Control Mechanism

Entities

  • Amazon Web Services (vendor)
  • Amazon Bedrock AgentCore (product)
  • AgentCore Starter Toolkit (product)
  • Palo Alto Networks (vendor)
  • Unit 42 (product)