Back to Feed
AI SecurityMar 18, 2026

“Claudy Day” Flaws Allow Data Theft via Fake Claude AI Ads, Report

Researchers at Oasis Security identified "Claudy Day," a chain of three vulnerabilities in Claude AI that enables data theft through fake Google Ads, hidden prompt injection, and abuse of the Anthropic Files API. The attack leverages an open redirect flaw to create legitimate-looking search results, embeds hidden instructions in pre-filled chat URLs, and exfiltrates stolen data via an official API feature, creating a complete attack pipeline from victim delivery to silent data exfiltration.

Summary

Researchers at Oasis Security identified "Claudy Day," a chain of three vulnerabilities in Claude AI that enables data theft through fake Google Ads, hidden prompt injection, and abuse of the Anthropic Files API. The attack leverages an open redirect flaw to create legitimate-looking search results, embeds hidden instructions in pre-filled chat URLs, and exfiltrates stolen data via an official API feature, creating a complete attack pipeline from victim delivery to silent data exfiltration.

Full text

Security Artificial Intelligence“Claudy Day” Flaws Allow Data Theft via Fake Claude AI Ads, ReportbyDeeba AhmedMarch 18, 20263 minute read Researchers detail “Claudy Day” flaws in Claude AI that could enable data theft using fake Google Ads, hidden prompts, and built-in features. Cybersecurity researchers have identified a new way that could be exploited by hackers to bypass the safety systems of the popular AI assistant, Claude AI. The discovery, named Claudy Day by the team at Oasis Security, reveals how three separate cracks in the platform’s security can be chained together to quietly steal a user’s private information. The Hidden Message in the Link The first step of the attack involves the way we start new chats. For your information, Claude allows users to click a link that automatically fills the chat box with a greeting. However, researchers noted that they could hide secret orders inside these links using HTML tags, the basic code used to build websites. When a user clicks one of these rigged links, they might only see a simple word like Summarize in the text box. But the AI actually reads hidden instructions tucked inside the code that the person cannot see, which is a technique known as prompt injection. It basically tricks the AI into following a hacker’s command instead of the user’s, which could be anything, like telling the AI to scan old chats for sensitive details about your health or finances. According to researchers, this allows an attacker to “embed hidden instructions in a pre-filled chat URL that the user cannot see but that the agent fully processes.” A Search Result You Can Trust? You might wonder how an attacker gets someone to click a weird link in the first place. As per Oasis Security’s technical report, the hackers don’t need to send dodgy emails, as they used a flaw on the claude.com website to create Google Search ads that look 100% official. We generally trust the top results on Google; by using an open redirect vulnerability, the attackers made links that technically started with the trusted Claude web address, so Google approved the ads. This created a trap with “no phishing emails, no suspicious links, just a normal-looking search result,” allowing targeted victim delivery. The victim remains unsuspicious because the URL belongs to a reputable company. The Quiet Escape The final piece of the puzzle is how the data leaves the building, a process called data exfiltration. Even though Claude has a digital sandbox, researchers found a loophole in the system’s official beta feature- Anthropic Files API, where they could force the AI to upload stolen summaries to an attacker-controlled account. This system allows for huge amounts of data to be moved, up to 500 MB per file and 100 GB per organisation, and researchers noted that this creates “a complete attack pipeline, from targeted victim delivery to silent data exfiltration.” Oasis Security shared these findings with Anthropic through a responsible disclosure program. While the prompt injection issue is now fixed, the team suggested that users should carefully monitor approval before an AI agent uses powerful tools for the first time, ensuring they remain in control. The Google Ads setup hides a third-party redirect behind a trusted claude.com preview. Clicking the official-looking ad secretly sends users to the attacker’s target. Gmail ads mask a third-party redirect behind a trusted claude.com preview. Clicking the official-looking ad secretly sends users to the attacker’s target. (Source: Oasis Security) Experts’ Comments: Sharing his thoughts on the matter with Hackread.com, Andrew Bolster, Senior R&D Manager at Black Duck, noted that these findings support the sentiment that while assistants like Claude are a boon, they represent a risk called the “Lethal Trifecta.” “That’s where agents are exposed to untrusted content (in this case, the URL parameter injection), access to private data, and the ability to externally communicate,” Bolster said. He added that security leaders must prevent AI assistants from being “socially engineered into giving out sensitive or protected information or access.” Also sharing exclusive comments with Hackread.com, Saumitra Das, Vice President of Engineering at Qualys, stated, “The Claudy Day attack chain highlights a new reality: the prompt itself is now an attack surface.” He noted that because the attack uses legitimate endpoints and redirects, it looks like normal traffic. “AI agents need to be treated like privileged service accounts, with strict controls over what they can access, what tools they can use, and where data can be sent,” Das concluded, warning that users are currently “dangerously skipping permission checks” to avoid interrupting the AI. Deeba Ahmed Deeba is a veteran cybersecurity reporter at Hackread.com with over a decade of experience covering cybercrime, vulnerabilities, and security events. Her expertise and in-depth analysis make her a key contributor to the platform’s trusted coverage. View Posts AIClaudeClaude AIClaudy DayCybersecurityFraudGoogle AdsScamVulnerability Leave a Reply Cancel reply View Comments (0) Related Posts Hacking News Security Researcher reports how to hack Facebook account with Oculus Integration How to hack a Facebook account is something that almost everyone wants to know – And now, a… byWaqas Security Android Apple News Google News Samsung Technology Hackers Earn $215,000 for Hacking Nexus 6P, iPhone 6S Tencent Keen Security Lab Team Hackers Win $215,000 for Infecting a Fully Updated and Patched Nexus 6P. Challenging White… byAgan Uzunovic Security Social Media Social media giant Facebook suffers data breach AGAIN Another day, another data breach at Facebook - Roughly 100 App Developers Retained Unauthorized Access to Group Members’ Data. byWaqas Security Google Microsoft Technology Google Drive accounted for 50% of malicious Office document downloads OneDrive was responsible for 19% while 15% of malicious Microsoft Office documents were downloaded through Sharepoint in 2021.… byDeeba Ahmed

Indicators of Compromise

  • mitre_attack — T1598.003
  • mitre_attack — T1059.004
  • mitre_attack — T1041