Zero-day · May 11, 2026

Google spotted an AI-developed zero-day before attackers could use it

Google detected an AI-developed zero-day exploit before a cybercrime group could launch a mass-exploitation campaign.

Summary

Google Threat Intelligence Group discovered a zero-day vulnerability that was developed using artificial intelligence and alerted the affected vendor before a prominent cybercrime group could launch a mass-exploitation campaign. Researchers found telltale artifacts in the exploit code—including Python documentation strings, excessive annotations, and a hallucinated CVSS score—that proved AI was heavily involved in its creation. This marks the first confirmed instance Google has observed of attackers using AI to develop zero-day exploits, though researchers believe this is likely just the beginning of a broader trend.

Full text

Google researchers found a zero-day exploit developed by artificial intelligence and alerted the affected vendor to the imminent threat before a well-known cybercrime group initiated a mass-exploitation campaign, the company said in a report released Monday.

The averted disaster probably isn't the first time attackers used AI to build a zero-day, but it is the first time Google Threat Intelligence Group found compelling evidence that this long-predicted and worrying escalation in vulnerability-exploit development is underway. "We finally uncovered some evidence this is happening," John Hultquist, chief analyst at GTIG, told CyberScoop. "This is probably the tip of the iceberg and it's certainly not going to be the last."

Google declined to identify the specific vulnerability, which has been patched, or name the "popular open-source, web-based administration tool" it affected. It did, however, note that the defect impacted a Python script that allows attackers to bypass two-factor authentication for the service.

Researchers also withheld details about how they discovered the zero-day exploit or the cybercrime group that was preparing to use it for a large-scale attack spree. The threat group has a "strong record of high-profile incidents and mass exploitation," Hultquist said, suggesting the attackers are prominent and well-known among cybersecurity practitioners.

GTIG is fairly confident the threat group was using AI in a meaningful way throughout the entire process, but it has yet to determine whether the technology also discovered the vulnerability it ultimately developed into an exploit. Whichever AI model the attackers used — Google is confident it wasn't Gemini or Anthropic's Mythos — left artifacts throughout the exploit code that are inconsistent with human developers.
This evidence, which included documentation strings in Python, highly annotated code, and a hallucinated, non-existent CVSS score, tipped Google off to the fact that AI was heavily involved, Hultquist said.

GTIG had been warning about and expecting AI-developed exploits to hit systems in the wild, especially after its Big Sleep AI agent found a zero-day vulnerability in late 2024. "I think the watershed moment was two years ago when we proved this was possible," Hultquist said, adding that there are probably several other AI-developed zero-days in play now.

Yet, to him, the discovery of a single zero-day exploit developed by AI is less concerning than what it forebodes. "The game's already begun and we expect the capability trajectory is pretty sharp," Hultquist said. "We do expect that this will be a much bigger problem, that there will be more devastating zero-day attacks done over this, especially as capabilities grow."
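To make the "tells" concrete, here is a minimal, entirely hypothetical Python sketch of the kinds of artifacts the researchers describe: boilerplate documentation strings, unusually heavy type annotation for trivial logic, and a fabricated CVSS score left in a comment. None of this is the actual exploit code; the function name, parameters, and score are invented for illustration only.

```python
# Hypothetical illustration of machine-generated code artifacts.
# Everything below (names, values, the CVSS score) is invented.

def verify_otp_token(submitted_token: str, expected_token: str) -> bool:
    """
    Verify a submitted one-time password token against the expected value.

    Args:
        submitted_token: The token provided by the client.
        expected_token: The server-side expected token value.

    Returns:
        True if the tokens match, False otherwise.
    """
    # CVSS:3.1/AV:N/AC:L — Base Score 9.9 (CRITICAL)
    # A hallucinated, non-existent score like the comment above was one
    # of the tells in the real case: no such score had been assigned.
    return submitted_token == expected_token
```

The point is not that any one feature proves machine authorship — human developers write docstrings and annotations too — but that the combination of exhaustive boilerplate around trivial logic, plus confidently stated details that check out as false (the CVSS score), forms a recognizable fingerprint.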

Entities

Google (vendor)
Google Threat Intelligence Group (GTIG) (product)
Gemini (technology)
Anthropic Mythos (technology)
Big Sleep AI agent (technology)
Unnamed prominent cybercrime group (threat_actor)