Zero-day · May 11, 2026

Google: Hackers used AI to develop zero-day exploit for web admin tool

Google reports hackers used AI to develop zero-day exploit for web admin tool.

Summary

Researchers at Google Threat Intelligence Group discovered a zero-day exploit targeting an unnamed open-source web administration tool that was likely generated using AI to bypass two-factor authentication. Analysis of the Python exploit code revealed characteristics typical of large language model output, including educational docstrings and a hallucinated CVSS score. The finding demonstrates that threat actors increasingly leverage AI for vulnerability discovery and exploitation, with additional evidence linking Chinese, North Korean, and Russian state-sponsored groups to AI-assisted attack development.

Full text

By Bill Toulas
May 11, 2026 09:02 AM

Researchers at Google Threat Intelligence Group (GTIG) say that a zero-day exploit targeting a popular open-source web administration tool was likely generated using AI. The exploit could be leveraged to bypass the two-factor authentication (2FA) protection in the tool, which remains unnamed.

Although the attack was foiled before the mass exploitation phase, the incident shows that threat actors are relying more heavily on AI assistance in their vulnerability discovery and exploitation efforts.

Based on the structure and content of the Python exploit code, Google assesses with high confidence that the adversary used an AI model to find and weaponize the vulnerability.

"For example, the script contains an abundance of educational docstrings, including a hallucinated CVSS score, and uses a structured, textbook Pythonic format highly characteristic of LLMs training data," GTIG says in a report today.

The large language model (LLM) used for the malicious task remains unclear, but Google rules out the possibility that Gemini was involved in the process.

Additional evidence pointing to the use of LLM tools in the discovery process is the nature of the flaw: a high-level semantic logic bug of the kind AI systems excel at identifying, rather than a memory-corruption or input-sanitization issue typically uncovered through fuzzing or static analysis.

Google notified the software developer about the threat, enabling timely action to disrupt the attack.

"For the first time, GTIG has identified a threat actor using a zero-day exploit that we believe was developed with AI," GTIG researchers say.
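The telltales GTIG cites (educational docstrings, a hallucinated CVSS score) lend themselves to simple static triage. Below is a minimal, hypothetical sketch of such a heuristic; the markers and thresholds are illustrative assumptions, not GTIG's actual methodology:

```python
import ast
import re

def llm_style_signals(source: str) -> dict:
    """Count stylistic markers that may suggest LLM-generated Python code.

    Heuristics (assumed, illustrative): docstring density and inline CVSS
    mentions, which hand-written exploit code rarely carries.
    """
    tree = ast.parse(source)
    nodes = [
        n for n in ast.walk(tree)
        if isinstance(n, (ast.Module, ast.ClassDef,
                          ast.FunctionDef, ast.AsyncFunctionDef))
    ]
    documented = sum(1 for n in nodes if ast.get_docstring(n))
    cvss_mentions = len(re.findall(r"CVSS[:v ]*\d\.\d", source, re.IGNORECASE))
    return {
        "docstring_ratio": documented / max(len(nodes), 1),
        "cvss_mentions": cvss_mentions,
    }

# Toy input mimicking the "educational docstring + CVSS score" pattern.
sample = '''
def bypass_check(token):
    """Validate the 2FA token.

    Severity: CVSS 9.8 (Critical).
    """
    return token == "000000"
'''

signals = llm_style_signals(sample)
print(signals)
```

Such signals are weak on their own; GTIG's attribution rested on the overall structure and content of the code, not any single marker.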
Apart from this case, Google notes that Chinese and North Korean hackers, including APT27, APT45, UNC2814, UNC5673, and UNC6201, have been using AI models for vulnerability discovery and exploit development, continuing the trend observed in the company's February report.

Russia-linked actors were also observed using AI-generated decoy code to obfuscate malware such as CANFAIL and LONGSTREAM.

CANFAIL code comments for the decoy logic (Source: Google)

Google has also highlighted a Russian operation codenamed "Overload," in which social-engineering threat actors used AI voice cloning to impersonate real journalists in fake videos promoting anti-Ukraine narratives.

The PromptSpy backdoor for Android, documented by ESET earlier this year, is also covered in Google's report for its integration with Gemini APIs for autonomous device interaction.

Google additionally found an autonomous agent module named "GeminiAutomationAgent" that uses a hardcoded prompt to let the malware interact with the device in an automated way. According to the researchers, the prompt assigns a benign persona to the model so the malware can bypass the LLM's safety features. The goal is to calculate the geometry of user-interface bounds, which PromptSpy can then use to interact with the device in multiple ways.

Furthermore, the malware uses AI-based capabilities to replay authentication on the device, whether in the form of a lock pattern or a PIN, Google researchers say.

The company warns that threat actors are now industrializing access to premium AI models using automated account creation, proxy relays, and account-pooling infrastructure.
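Hardcoded agent prompts like the one in "GeminiAutomationAgent" leave distinctive strings inside an APK. A minimal, hypothetical triage sketch is shown below; the marker phrases and the sample strings are assumptions for illustration, not indicators from Google's or ESET's reports:

```python
import re

# Hypothetical markers: phrasing commonly seen in embedded LLM agent prompts.
PROMPT_MARKERS = [
    re.compile(r"\byou are an? \w+", re.IGNORECASE),          # persona assignment
    re.compile(r"\brespond (only )?in json\b", re.IGNORECASE),
    re.compile(r"\bui (bounds|hierarchy|geometry)\b", re.IGNORECASE),
]

def find_prompt_strings(strings: list[str]) -> list[str]:
    """Return extracted strings that resemble embedded LLM agent prompts."""
    return [s for s in strings if any(p.search(s) for p in PROMPT_MARKERS)]

# Toy strings as if dumped from a suspicious APK.
dumped = [
    "Lcom/example/agent/GeminiAutomationAgent;",
    "You are a helpful accessibility assistant. Report the UI bounds of each element.",
    "android.permission.BIND_ACCESSIBILITY_SERVICE",
]
print(find_prompt_strings(dumped))
```

In practice such string scans would run over output from an APK decompiler, and a match would only be a lead for manual review.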

Indicators of Compromise

  • malware — PromptSpy
  • malware — CANFAIL
  • malware — LONGSTREAM
  • mitre_attack — T1556.004

Entities

  • vendor — Google
  • threat_actor — APT27
  • threat_actor — APT45
  • threat_actor — UNC2814
  • threat_actor — UNC5673
  • threat_actor — UNC6201