AI Security · Mar 27, 2026

Security leaders say the next two years are going to be ‘insane’

Security leaders warn AI will discover vulnerabilities exponentially faster than organizations can patch them over the next two to three years

Summary

At RSA Conference 2026, security executives Kevin Mandia, Morgan Adamski, and Alex Stamos warned that AI systems are discovering vulnerabilities at an exponential rate while exploit development and autonomous network penetration capabilities are advancing rapidly, creating an asymmetric advantage for attackers. The leaders predict a 2-3 year period of unprecedented disruption where AI-generated exploits become commodity tools and foundation model vulnerabilities flood the threat landscape faster than remediation can occur. Most organizations lack the speed and infrastructure to defend against this shift, rendering traditional security practices increasingly obsolete.

Full text

SAN FRANCISCO — Every RSA Conference has its buzzwords. Cloud. Ransomware. Zero trust. Plastered across the 87-acre Moscone Center complex on every booth, banner and bar. This year it was AI, with vendors pitching AI-powered solutions to every security problem imaginable. But 2026 stood out for a different reason: industry leaders spent the conference warning about disruption from the very technology everyone was selling.

In an exclusive discussion with CyberScoop at this year’s conference, Kevin Mandia, founder of AI security company Armadin; Morgan Adamski, former executive director of U.S. Cyber Command; and Alex Stamos, a researcher and former chief security officer at several major technology companies, said the industry is entering what they described as an unprecedented two- to three-year period of upheaval, driven by AI systems that are discovering vulnerabilities exponentially faster than defenders can respond, threatening to render decades of security practices obsolete.

“We are just at the inflection point that is going to be pretty insane, at least two to three years,” Stamos said, describing a near-term future in which AI systems flood the threat landscape with working exploits while organizations struggle to patch vulnerabilities faster than attackers can weaponize them. Mandia put the timeline more bluntly. “It’s a perfect storm for offense over the next year or two,” he said.

The core problem, according to the executives, is speed. AI has made vulnerability discovery almost trivial, while remediation takes time and effort, creating a widening gap that favors attackers across every stage of the kill chain. “Because of the asymmetry in the cyber domain, where one person on offense can create work for millions of defenders, speed leverages that asymmetry,” Mandia said. “In the near term, there’s an advantage to the attackers as they start to use models and agents to do a lot of the offense.”

Bug discovery goes exponential

The shift is already underway.
Stamos, who is currently chief security officer at Corridor, said foundation model companies are sitting on thousands of bugs discovered through AI-assisted analysis that they lack the capacity to verify or patch. “The exploit discovery has gone exponential,” Stamos said. “What we haven’t seen go exponential yet is plugging that into working shellcode that bypasses protections on modern processors. But maybe six months or a year from now” AI will be generating sophisticated exploits on demand.

He pointed to examples of AI systems discovering vulnerabilities in decades-old code that had been reviewed by thousands of developers and professional security researchers. In one case, he said, an AI system identified a flaw in foundational Linux kernel code that humans had overlooked for years. “This superintelligent system was able to figure out a way to manipulate the machine into a place that, when you look at the bug, I’m not sure how a human could have found that,” Stamos said.

The pace of discovery is creating what Stamos called “a massive collective action problem.” Each successive generation of AI models could surface hundreds of new vulnerabilities in the same foundational software. “It’s quite possible that all this development we’ve done in memory-unsafe languages, without formal methods, that none of that is actually secure in the presence of superintelligent bug-finding machines,” he said. “In which case we need to be massively rebuilding the base infrastructure we all work on. And nobody is doing that.”

The timeline for when those capabilities become widely accessible is measured in months. When Chinese open-source models, like DeepSeek or Alibaba’s Qwen, reach current American foundation model capability levels, Stamos said, “you’re going to have every 19-year-old in St. Petersburg with the same capability” as elite vulnerability researchers.
Models trained on existing shellcode are already “reasonably good” at generating exploit code, he said, and may be capable of producing EternalBlue-level exploits within a year. That NSA-developed exploit, leaked in 2017, was used in the WannaCry and NotPetya attacks and remained effective for years because of how difficult such capabilities were to develop. “Imagine when that becomes available on demand,” Stamos said.

Agents already operating beyond human scale

Mandia’s company Armadin has built AI agents capable of autonomous network penetration that he said would be devastating if deployed maliciously. Unlike human attackers who must manually type commands and wait for results, AI agents operate across hundreds of threads simultaneously, interpolating command outputs before they arrive and launching follow-on actions in microseconds.

“The scale and scope and total recall of an AI agent compromising you and swarming you is not humanly comprehensible,” said Mandia, who founded Mandiant and served as CEO from 2016 to 2024. “If the old way was a red team that would get in, there’s a human on a keyboard typing commands. That’s a joke compared to” what AI agents can do.

Those agents can evade endpoint detection and response systems in under an hour, he said, and can throttle themselves to human speed to slip past rate-limiting detection mechanisms. Once inside a network, an AI agent can analyze documentation, packet captures and technical manuals faster than humans can read them, designing attacks tailored to specific control systems on the fly. “When you build the offense, it scares the heck out of you,” Mandia said. “If we let the animal out of the cage today, nobody’s ready for it.”

He said Armadin recently tested a Fortune 150 company with a strong security team and found either remote code execution vulnerabilities or data leakage paths in every application tested. “Both of us were shocked,” he said. The shift changes the fundamental question boards ask after penetration tests.
Historically, directors wanted to know the probability a demonstrated attack would occur in the real world. “In the age of humans, you could never really answer,” Mandia said. “But with AI, it’s 100 percent. It’s coming and it’s going to get cheaper and more effective at the same time.”

Defenders face impossible timelines

The compression of attack timelines is colliding with organizational realities that are moving in the opposite direction. Adamski, who is now the U.S. lead for PwC’s Cyber, Data & Technology Risk business, said chief information security officers face pressure from boards to adopt AI rapidly, often with explicit goals of reducing headcount, even as compliance requirements remain unchanged and the threat landscape accelerates. “CISOs are getting squeezed in that they cannot stop adoption because of demand from the board, from the CEO,” Adamski said. “None of the SOC 2 requirements have changed. ISO 27000, anything that helps people get through from a compliance perspective, all those rules are exactly the same.”

Stamos said patch cycles illustrate the mismatch. Where previously only sophisticated adversaries could reverse-engineer Microsoft’s Patch Tuesday updates to develop exploits, AI will democratize that capability. “You’re going to be able to drop the patch into Ghidra, driven by an agent, and come up with [an exploit],” he said. “Patch Tuesday, exploit Wednesday.”

Many CISOs are trying to bolt AI capabilities onto existing security operations, an approach the executives said is insufficient. “They’re not stepping back and looking at the bigger picture, that we have a fundamental, much more holistic problem in terms of how to reimagine and redo an entire cyber defense ecosystem that is solely driven by AI machine to machine,” Adamski said.

Avoiding Pandora’s box

The national security implications compound the problem.
While other former government leaders at the conference spoke about what they saw as the United States slipping in offensive cyber capabilities, the three industry leaders addressed what they believe nation-states have already developed with AI.

Indicators of Compromise

  • malware — WannaCry
  • malware — NotPetya
  • exploit — EternalBlue