[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"$frF7NGNsUZkl278eqJpyLLewKT1Oo4atEm_-BmdxnjXE":3},{"article":4,"iocs":50},{"id":5,"title":6,"slug":7,"summary":8,"ai_summary":9,"brief":10,"full_text":11,"url":12,"image_url":13,"published_at":14,"ingested_at":15,"relevance_score":16,"entities":17,"category_id":29,"category":30,"article_tags":34},"f50fe97a-699f-4cc6-84f6-22874d15f34d","Scammers Use Hidden Text to Bypass AI Email Filters in Phishing Scams","scammers-use-hidden-text-to-bypass-ai-email-filters-in-phishing-scams-8eddd5","Scammers are hiding invisible text inside phishing emails to manipulate AI-powered email filters and increase the chances of scams reaching inboxes.","Security researchers at Sublime Security have identified a growing trend where threat actors embed invisible text within phishing emails to manipulate AI-powered email filters, including systems like the Autonomous Security Analyst (ASA). The technique, known as indirect prompt injection, uses zero-font HTML or color-matching text to dilute malicious signals and force AI models to misclassify scam emails as legitimate. Two real-world campaigns—an Adidas Newsletter Clone and a healthcare insurance scam—have been observed employing these methods, with researchers warning that although currently representing less than 1% of traffic, the threat will escalate as AI assistants gain more autonomous capabilities.","Scammers use hidden text in phishing emails to bypass AI-powered email filters.","By Deeba Ahmed | May 7, 2026 | 3 minute read. Sublime Security has released a new analysis detailing a growing trend in email-based cyberattacks: a technique called indirect prompt injection. 
While social media is often full of people trying to trick chatbots with “ignore previous instructions” prompts, this new research shows how scammers are using that same logic to trick the AI models used in email filters. The goal is to force these systems into misclassifying malicious emails as safe and legitimate. This research, shared exclusively with Hackread.com, shows that threat actors aren’t solely interested in fooling humans but are actively targeting the Machine Learning (ML) models that guard mailboxes, such as the Autonomous Security Analyst (ASA). Breaking the AI Decision Cycle According to researchers, indirect prompt injection works by stuffing phishing emails with carefully chosen content designed to influence a model’s decision-making. By embedding benign text, like archived newsletter copy from Adidas or snippets of romance novels, hackers can effectively “dilute” the malicious signals of a phishing link. Researchers found that attackers hide this extra data in two main ways: Zero-font HTML: Setting text to 0pt so it remains invisible to people but fully readable by an AI scanner. Color-matching: Setting text to the same hex code as the email background. If the hidden content is rich enough, the ASA might misclassify a scam as a harmless marketing update. Real-World Detections Further investigation by Sublime revealed two specific campaigns that used these methods. The first is the Adidas Newsletter Clone, which appears to be a standard cloud storage phishing lure, but whose HTML contains hidden text fetched directly from real Adidas newsletters found on sites like milled.com and emailinspire.com. The objective was to make the AI model see a high-reputation brand instead of a malicious link. Then there is the healthcare-related scam, where a fake health insurance email used a professional layout with rounded boxes and accented buttons. 
To create a sense of legitimacy for the filters, scammers embedded a fictional story from goodnovel.com. Researchers noted the hackers likely hoped the AI model would mistake the message for a creative post from a platform like Substack or Patreon. Cloud storage scam and health insurance scam (Source: Sublime Security) A Serious Threat While the cybersecurity community has been testing these theories for months using tools like Lakera’s Gandalf game, these ‘in the wild’ attacks prove that scammers are making rapid progress in exploiting AI-powered defenses. Sublime noted that although these attacks currently represent less than 1% of observed traffic, they are a looming threat that signals what is to come. As we move toward ‘agentic mailboxes,’ where AI assistants take actions on our behalf, the risk of a model following a hidden malicious instruction becomes much higher. “With indirect prompt injection via hidden text, attackers aren’t trying to force an AI into doing something it shouldn’t. Instead, they’re influencing AI into making an incorrect decision, but well within the bounds of its design. Nuanced prompt injection attacks will only increase over time as adversaries evolve, so it’s important that AI security tools can understand the full context of the messages they analyze,” the blog post reads. Researchers conclude that the underlying mechanism of these models must be improved to understand the full context of a message, rather than just looking at surface-level links or keywords. 
","https:\u002F\u002Fhackread.com\u002Fscammers-text-bypass-ai-email-filters-phishing-scams\u002F","https:\u002F\u002Fhackread.com\u002Fwp-content\u002Fuploads\u002F2026\u002F05\u002Fscammers-text-bypass-ai-email-filters-phishing-scams.png","2026-05-07T10:22:40+00:00","2026-05-07T12:00:10.519385+00:00",8,[18,21,24,26],{"name":19,"type":20},"Sublime Security","vendor",{"name":22,"type":23},"Autonomous Security Analyst (ASA)","technology",{"name":25,"type":23},"Machine Learning Email Filters",{"name":27,"type":28},"Gandalf (Lakera)","product","89f78b1c-3503-45a1-9fc7-e23d2ce1c6d5",{"id":29,"icon":31,"name":32,"slug":33},null,"Malware","malware",[35,40,45],{"category":36},{"id":37,"icon":31,"name":38,"slug":39},"2c8f44d4-b56e-47cf-9677-04f22c9ee78d","Identity & Access","identity-access",{"category":41},{"id":42,"icon":31,"name":43,"slug":44},"839da5c1-3c34-47e2-9499-f7201640e3ac","AI Security","ai-security",{"category":46},{"id":47,"icon":31,"name":48,"slug":49},"e7b231c8-5f79-4465-8d38-1ef13aea5a14","Threat Intelligence","threat-intelligence",[51],{"type":33,"value":52,"context":53},"Indirect Prompt Injection via Hidden Text","Technique using zero-font HTML and color-matching to hide malicious content from AI scanners"]
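The two hiding tricks the article describes (zero-font HTML and color-matching text against the email background) can be approximated with a simple heuristic scanner. The sketch below is illustrative only: it is not Sublime Security's detection logic, the sample markup is invented, and a real filter would need a full HTML/CSS parser rather than regex matching on inline styles.

```python
import re

# Matches inline styles like "font-size:0", "font-size: 0pt", "font-size:0.0px"
ZERO_FONT = re.compile(r'font-size\s*:\s*0(?:\.0*)?(?:pt|px|em|%)?', re.I)

def find_hidden_text(html: str, background: str = "#ffffff") -> list:
    """Flag style attributes that hide text from human readers while
    leaving it readable to a machine scanner (heuristic sketch)."""
    findings = []
    for style in re.findall(r'style\s*=\s*"([^"]*)"', html, re.I):
        if ZERO_FONT.search(style):
            findings.append(("zero-font", style))
        # Text colored identically to the background is invisible to people.
        m = re.search(r'(?<!background-)color\s*:\s*(#[0-9a-f]{3,6})', style, re.I)
        if m and m.group(1).lower() == background.lower():
            findings.append(("color-match", style))
    return findings

# Invented sample mimicking the campaigns described: benign filler text
# hidden via zero-font and white-on-white styling.
sample = ('<p>Claim your prize!</p>'
          '<span style="font-size:0pt">Archived newsletter copy...</span>'
          '<span style="color:#ffffff">More benign filler text</span>')
print(find_hidden_text(sample))
# → [('zero-font', 'font-size:0pt'), ('color-match', 'color:#ffffff')]
```

A production system would also need to handle `<style>` blocks, near-matching colors, `opacity:0`, and off-screen positioning, which this sketch deliberately ignores.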