Double Agents: Exposing Security Blind Spots in GCP Vertex AI
Unit 42 discovers privilege escalation flaw in GCP Vertex AI allowing compromised agents to exfiltrate data.
Summary
Palo Alto Networks' Unit 42 revealed a critical vulnerability in Google Cloud Platform's Vertex AI where overprivileged AI service agents can be weaponized to escalate privileges and exfiltrate sensitive data across GCP projects. By exploiting default permission scoping in the Per-Project, Per-Product Service Agent (P4SA) model, researchers demonstrated unauthorized access to consumer project data and restricted images within Google's infrastructure. Google has since updated documentation and modified the ADK deployment workflow to address the security gaps.
Full text
By: Ofir Shaty
Published: March 31, 2026
Categories: Malware, Threat Research
Tags: Agentic AI, Data exfiltration, GCP, Google Cloud, Google Cloud Storage, JSON, LLM, Privilege escalation, Vertex AI

Executive Summary

Artificial intelligence (AI) agents are quickly advancing into powerful autonomous systems that can perform complex tasks. These agents can be integrated into enterprise workflows, interact with various services and make decisions with a degree of independence. Google Cloud Platform's Vertex AI, with its Agent Engine and Application Development Kit (ADK), provides a comprehensive platform for developers to build and deploy these sophisticated agents. But what if the AI agent you just deployed was secretly working against you?

As we delegate more tasks and grant more permissions to AI agents, they become a prime target for attackers. A misconfigured or compromised agent can become a "double agent" that appears to serve its intended purpose, while secretly exfiltrating sensitive data, compromising infrastructure and creating backdoors into an organization's most critical systems.

Our research examines how a deployed AI agent in the Google Cloud Platform (GCP) Vertex AI Agent Engine could potentially be weaponized by an attacker. By exploiting a significant risk in default permission scoping and compromising a single service agent, we reveal how the Vertex AI permission model can be misused, leading to unintended consequences. We were able to achieve privileged access to data in a consumer project, and to restricted images and source code within a producer project that is part of Google's infrastructure.

Following this discovery, we shared details of our research with Google and collaborated with their security team.
Google revised their official documentation to explicitly describe how Vertex AI uses resources, accounts and agents. Our findings provide valuable insights into the inner workings of the Vertex AI platform and demonstrate how an AI agent could be weaponized to compromise an entire GCP environment.

Palo Alto Networks customers are better protected from the threats described in this article through the following products and services:

- Prisma AIRS
- Cortex Cloud Identity Security
- Cortex AI-SPM

The Unit 42 AI Security Assessment can help empower safe AI use and development. If you think you might have been compromised or have an urgent matter, contact the Unit 42 Incident Response team.

Related Unit 42 Topics: Agentic AI, Vertex AI, Google Cloud, Data Exfiltration, Privilege Escalation

From Agent to Storage Admin: Taking Over Consumer Resources

We started our investigation by deploying an AI agent that we built using Google Cloud ADK. We discovered that the Per-Project, Per-Product Service Agent (P4SA) associated with the deployed AI agent had excessive permissions that were granted by default. A service agent is a Google-managed service account that allows a GCP service to access resources. Using the P4SA's default permissions, we were able to extract the credentials of the following service agent and act on behalf of its identity:

service-<PROJECT-ID>@gcp-sa-aiplatform-re.iam.gserviceaccount[.]com

The following code shows how we prepared a Vertex AI agent in a controlled environment, using a tool that is configured to expose service-agent credentials.
```python
import vertexai

### Init ###
vertexai.init(
    project=PROJECT_ID,
    location=LOCATION,
    staging_bucket=STAGING_BUCKET,
)

### Functions and Tools definition ###
def get_service_agent_credentials(test: str) -> dict:
    <*** my malicious agent code ***>

### Agent definition ###
from google.adk.tools import google_search
from google.adk.agents import Agent

root_agent = Agent(
    name="my_double_agent",
    model="gemini-2.0-flash",
    description=("Agent to takeover the account."),
    instruction=("You are a helpful agent who can help the user exfiltrate data from storage buckets."),
    tools=[get_service_agent_credentials],
)

### Prepare your agent for Agent Engine ###
from vertexai.preview import reasoning_engines

app = reasoning_engines.AdkApp(
    agent=root_agent,
    enable_tracing=True,
)

### Deploy Agent ###
from vertexai import agent_engines

remote_app = agent_engines.create(
    agent_engine=root_agent,
    requirements=[
        "google-cloud-aiplatform[adk,agent_engines,requests,socket,subprocess,os]"
    ],
    display_name="testing-with-reverese"
)
```

Since this discovery, Google has modified the ADK deployment workflow. As a result, the code snippet above reflects the previous process and may not function correctly in the current version.

Running the preparation and deployment code generated a malicious AI agent packaged as a pickle file, which was then deployed as an Agent Engine. The resulting deployment output is illustrated in Figure 1.

Figure 1. Agent deployment output.

After deploying the malicious AI agent, any call to the agent results in our tool sending a request to Google's metadata service:

hxxp[:]//metadata.google[.]internal/computeMetadata/v1/instance/?recursive=true

This call prompts the double agent to extract the credentials of the GCP service agent. Figure 2 highlights the extracted credentials and service agent details, presented in JSON format.

Figure 2. Malicious agent response, containing service agent credentials.

The extracted information includes:

- The GCP project that hosts the AI agent
- The identity of the AI agent
- The scopes of the machine that hosts the AI agent

Reformatting the JSON output provides an easy-to-read version of the information, shown in Figure 3.

Figure 3. Reformatted output showing extracted information.

Using the stolen credentials, we were able to pivot from the AI agent's execution context into the consumer project. This effectively broke isolation and granted unrestricted read access to all Google Cloud Storage bucket data within the consumer project. (For organizations that use GCP managed services, the consumer project is their own Google Cloud project.) This level of access constitutes a significant security risk, transforming the AI agent from a helpful tool into an insider threat.
The excessive permissions include:

- storage.buckets.get
- storage.buckets.list
- storage.objects.get
- storage.objects.list

Figure 4 shows the full permissions from Google's documentation, with the Google Cloud Storage bucket and AI Platform endpoint permissions highlighted.

Figure 4. Vertex AI Reasoning Engine Service Agent permissions.

Unauthorized Access to Google's Internals: Downloading Restricted Producer Images

Having compromised the consumer environment, we turned our attention to the producer environment. The producer project is the Google-managed project that hosts the underlying service, in this case Vertex AI. We discovered that the stolen P4SA credentials also grant
Indicators of Compromise
- email — service-<PROJECT-ID>@gcp-sa-aiplatform-re.iam.gserviceaccount.com