Nation-state · Apr 22, 2026

AI Tools Are Helping Mediocre North Korean Hackers Steal Millions

North Korean hackers used AI tools to steal as much as $12 million in crypto from more than 2,000 developers in three months.

Summary

Expel discovered HexagonalRodent, a North Korean state-sponsored cybercrime group, using AI tools from OpenAI, Cursor, and Anima to conduct a cryptocurrency theft campaign targeting developers. The group leveraged AI to write malware, create phishing websites, and automate tasks normally requiring advanced coding skills, stealing approximately $12 million in three months from over 2,000 victims. The AI-generated code contained telltale signs like emoji annotations and English comments, yet evaded detection in targets lacking endpoint security tools.

Full text

The advent of AI hacking tools has raised fears of a near future in which anyone can use automated tools to dig up exploitable vulnerabilities in any piece of software, like a kind of digital intrusion superpower. Here in the present, however, AI seems to be playing a more mundane, if still concerning, role in hackers’ toolkit: It’s helping mediocre hackers level up and carry out broad, effective malware campaigns. That includes one group of relatively unskilled North Korean cybercriminals who’ve been discovered using AI to carry out virtually every part of an operation that hacked thousands of victims to steal their cryptocurrency.

On Wednesday, cybersecurity firm Expel revealed what it describes as a North Korean state-sponsored cybercrime operation that installed credential-stealing malware on more than 2,000 computers, specifically targeting the machines of developers working on small cryptocurrency launches, NFT creation, and Web3 projects. By using the AI tools of US-based companies, including those of OpenAI, Cursor, and Anima, the hacker group—which Expel calls HexagonalRodent—“vibe coded” almost every part of its intrusion campaign, from writing its malware to building fake websites for the sham companies used in its phishing schemes. That AI-enabled hacking allowed the group to steal as much as $12 million in cryptocurrency from victims in three months.

What’s most striking about the HexagonalRodent hacking campaign isn’t its sophistication, says Marcus Hutchins, the security researcher who discovered the group, but rather how AI tools allowed an apparently unsophisticated group to carry out a profitable theft spree in the service of the North Korean state.

“These operators don't have the skills to write code. They don't have the skills to set up infrastructure. AI is actually enabling them to do things that they otherwise just would not be able to do,” says Hutchins, who became well-known in the cybersecurity community after disabling the WannaCry ransomware worm created by North Korean hackers.

Emoji-Littered, AI-Written Code

HexagonalRodent’s hacking operation focused on tricking crypto developers with fraudulent job offers at tech firms, going so far as to create full websites for the fake companies recruiting the victims, often built with AI web design tools. Eventually, the victim was told they’d have to download and complete a coding assignment as a test—which the hackers had infected with malware that infiltrated their machine and stole credentials, including some that could grant access to the keys controlling their crypto wallets.

Those parts of the hacking operation appear to have been well honed and effective, but the hackers were also clumsy enough to leave parts of their own infrastructure unsecured, leaking the prompts they used to write their malware with tools that included OpenAI’s ChatGPT and Cursor. They also exposed a database where they tracked victim wallets, which allowed Expel to estimate the total amount of cryptocurrency the hackers may have stolen. (While those wallets added up to $12 million in total contents, Hutchins says the company couldn’t confirm whether the entire sum had already been drained from every wallet; in some cases the hackers may still have needed to obtain the victims’ keys, given that some wallets may have been protected with hardware security tokens.)

Hutchins also analyzed samples of the hackers’ malware and found other clues that it was largely—perhaps entirely—created with AI. It was thoroughly annotated with comments throughout, in English, hardly the typical coding habit of North Koreans, despite the fact that some command-and-control servers for the malware tied them to known North Korean hacking operations.
The malware’s code was also littered with emojis, which Hutchins points out can, in some cases, serve as a clue that software was written by a large language model, given that programmers typing on a PC keyboard rather than a phone rarely take the time to insert emojis. “It's a pretty well-documented sign of AI-written code,” Hutchins says.

The AI-written code Hutchins analyzed ought to have been detectable with the typical “endpoint detection and response” security tools used in most companies and government agencies, he says, given that it followed standard patterns of behavior for malware. But HexagonalRodent’s decision to focus on individual victims meant many of its targets didn’t have those security tools installed. “They found a niche where you actually can get away with completely AI-generated malware,” says Hutchins.

Hutchins argues that the HexagonalRodent campaign shows how AI may be an especially useful tool for North Korea, which can easily recruit unskilled IT workers to join its hacker ranks—or, more commonly, to infiltrate tech companies while posing as citizens of other countries—but has a far more limited number of capable hackers, given the average North Korean’s lack of access to the internet or even computers. “They have hundreds of people being sent over the border to work in IT operations, and only a few of them really know what they're doing,” Hutchins says. “But then they're able to use generative AI to get a leg up and actually run fairly successful hacking campaigns.”

In fact, rather than reducing the number of people involved in the hacking campaign through automation, Hutchins says he has observed North Korean operations grow in size over time. Expel estimates that as many as 31 individual hackers were involved in HexagonalRodent. “They just keep adding more and more operators,” Hutchins says.
“Because they can just hand them access to an AI model, and they can now do things which they would have previously needed a development team to support.”

A Hermit Kingdom, Embracing AI

The HexagonalRodent activity observed by Hutchins makes up only a small part of North Korea’s sweeping hacking and cybercriminal activity, which spans vast cryptocurrency theft, ransomware, espionage, fraud, and the infiltration of Western organizations through its IT worker schemes. Security researchers have likened North Korea's cyber operations to a “state-sanctioned crime syndicate” that ultimately works to fund the nation’s nuclear weaponry, build the country’s infrastructure, and evade international sanctions.

Increasingly, and perhaps unsurprisingly, these state-backed programs have been adding generative AI to their hacking and fraud workflows to improve their overall efficiency. Within North Korea, these efforts have reportedly been supported by the creation of Research Center 227, an organization under the military’s Reconnaissance General Bureau that will partly focus on developing AI-focused hacking tooling. But day to day, North Korea’s cyber operators have repeatedly been caught using commercial, off-the-shelf AI tools.

“North Korea is using AI as a force multiplier, and it is helping with every aspect—building resumes, building websites, building exploits, testing vulnerabilities—and they're doing it at speed and scale,” says Michael “Barni” Barnhart, a researcher at security firm DTEX who has tracked the country’s hacking operations for years. North Korean cyber operators have been experimenting with AI, and widely using it, for multiple years, Barnhart says. “AI is helping them move faster so that they can weaponize exploits and even help build those exploits,” he explains. “You get little pieces of the puzzle from each of the groups, and then it kind of forms a whole picture of how they're using AI.”

For instance, members of North Korea’s IT worker programs have been using AI assistants and face-changing deepfakes to answer questions and change their appearance during fraudulent job interviews. Security researchers at Microsoft have spotted suspected North Korean operations using AI to create false IDs, research work tools, poli
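The emoji “tell” Hutchins describes can be sketched as a simple heuristic: count emoji codepoints in source text and flag files that exceed a small threshold. The sketch below is purely illustrative and is not Expel’s actual detection method; the codepoint ranges and the threshold are assumptions invented for this example, and real triage would weigh many more signals.

```python
# Illustrative toy heuristic: flag source text containing emoji annotations,
# one of the AI-authorship clues described in the article.
# The ranges and threshold below are assumptions, not a vetted detection rule.

# Rough Unicode blocks commonly used for emoji (assumed for this sketch)
EMOJI_RANGES = [
    (0x1F300, 0x1FAFF),  # symbols, pictographs, supplemental emoji
    (0x2600, 0x27BF),    # miscellaneous symbols and dingbats
]

def count_emoji(text: str) -> int:
    """Count characters that fall inside the emoji codepoint ranges above."""
    return sum(
        1 for ch in text
        if any(lo <= ord(ch) <= hi for lo, hi in EMOJI_RANGES)
    )

def looks_ai_annotated(source: str, threshold: int = 3) -> bool:
    """Rough signal: hand-typed code rarely contains emoji in comments."""
    return count_emoji(source) >= threshold

if __name__ == "__main__":
    sample = "# ✅ init module 🚀\nsend(data)  # 📤 upload\n"
    print(looks_ai_annotated(sample))  # → True (3 emoji meet the threshold)
```

A signal like this is weak on its own — plenty of legitimate projects use emoji in comments — which is consistent with Hutchins citing it only as one clue among several, alongside English-language annotations and known command-and-control infrastructure.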

Entities

HexagonalRodent (threat_actor)
OpenAI (vendor)
Cursor (vendor)
Anima (vendor)
Expel (vendor)
Generative AI / Large Language Models (technology)