North Korean Hackers Stole $12M Using AI Coding Tools

Key Takeaways

- Unskilled North Korean hackers used AI tools from OpenAI, Cursor, and Anima to build an entire malware campaign
- The group stole $12 million in cryptocurrency from over 2,000 victims in three months
- AI tools are lowering the barrier for cybercrime, enabling attacks that would otherwise require skilled developers
AI Tools Turn Amateur Hackers Into Effective Criminals
A group of North Korean hackers with limited coding skills managed to steal $12 million in cryptocurrency over three months. Their secret weapon: AI tools built by American companies.
Cybersecurity firm Expel revealed Wednesday that a state-sponsored group it calls HexagonalRodent used AI tools from OpenAI, Cursor, and Anima to build nearly every component of its attack. The hackers "vibe coded" their malware, phishing websites, and fake company infrastructure, and installed credential-stealing malware on more than 2,000 computers.
The discovery came from Marcus Hutchins, the security researcher who stopped the WannaCry ransomware attack in 2017, an attack also attributed to North Korean hackers. Hutchins now works at Expel.
“These operators don't have the skills to write code. They don't have the skills to set up infrastructure. AI is actually enabling them to do things that they otherwise just would not be able to do.”
— Marcus Hutchins, security researcher at Expel
How the Attack Worked
HexagonalRodent targeted developers working on small cryptocurrency launches, NFT creation, and Web3 projects. The group created fake tech companies with convincing websites, built using AI web design tools.
Victims received fraudulent job offers from these fake companies. As part of the interview process, they were asked to download and complete a coding assignment. The assignment was infected with malware that stole credentials, including keys to cryptocurrency wallets.
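The report does not specify the assignment's language or stack, but a common version of this trick relies on package-manager lifecycle hooks that run automatically on install. As a hedged sketch, assuming a Node.js assignment (the `package.json` contents and script names below are hypothetical, not from the source), a candidate could check for such hooks before installing anything:

```shell
# Sketch (assumed Node.js assignment): npm runs lifecycle scripts such as
# "postinstall" automatically during `npm install`, which is how a
# booby-trapped coding task can execute code the moment its dependencies
# are installed. Inspect the manifest before installing.

# Hypothetical stand-in for a downloaded assignment's manifest:
cat > package.json <<'EOF'
{
  "name": "interview-task",
  "scripts": {
    "postinstall": "node ./collect.js"
  }
}
EOF

# Flag any hooks that would run arbitrary code on install:
grep -nE '"(pre|post)?install"|"prepare"' package.json
```

A hit here is not proof of malware, but unexplained install hooks in an interview task are exactly the pattern worth refusing to run.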
The social engineering was effective. The technical execution was sloppy. The hackers left parts of their infrastructure unsecured, exposing the AI prompts they used to generate their malware. This gave Expel a clear view into how much the operation relied on ChatGPT and Cursor.

The Democratization of Cybercrime
Security researchers have long worried about AI tools automating vulnerability discovery, creating a future where anyone could find exploits in any software. That dystopia has not arrived. What has arrived is simpler and more immediate: AI tools are making bad hackers good enough.
HexagonalRodent did not discover zero-day vulnerabilities. They did not create sophisticated, hard-to-detect malware. They built functional attack infrastructure despite lacking the skills to do so manually. That was enough to steal $12 million.
The implications extend beyond state-sponsored groups. If North Korean operators with limited technical skills can run effective campaigns using consumer AI tools, so can anyone else with motivation and a target list.
US AI Companies in the Crosshairs
The attack highlights a policy tension. OpenAI, Cursor, and Anima are US-based companies. Their tools were used by a sanctioned foreign government to steal from American citizens and companies working in the crypto space.
AI providers have content policies prohibiting malicious use. Enforcement is reactive. By the time HexagonalRodent's prompts were discovered, the $12 million was already gone.
What Organizations Can Do
The attack vector is old: fake job offers with malicious attachments. The execution is new. Organizations working in crypto and Web3 should treat unsolicited job offers with heightened suspicion, especially when they involve downloading code.
- Verify companies independently before engaging with recruiters
- Run any code assignments in isolated virtual environments
- Treat credential theft as a given and implement hardware security keys
- Monitor for unauthorized access to cryptocurrency wallet keys
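The second item in the list above can be sketched concretely. Assuming Docker is available and the assignment sits in `./assignment` (the image name, paths, and `npm test` entry point are placeholders, not details from the source), a network-less, read-only container denies a credential stealer both the wallet files and an exfiltration channel:

```shell
# Sketch, assuming Docker: run untrusted interview code in a throwaway
# container with no network access (blocks exfiltration) and a read-only
# mount of the task (blocks tampering with the host copy). The image
# "node:20" and the "npm test" command are hypothetical placeholders.
SANDBOX_CMD='docker run --rm --network none --read-only --tmpfs /tmp \
  -v "$PWD/assignment:/work:ro" -w /work node:20 npm test'

# Review the command, then run it with: eval "$SANDBOX_CMD"
echo "$SANDBOX_CMD"
```

The `--network none` flag does most of the work: stolen credentials are worthless to an attacker if the code that harvests them has no outbound channel.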
The broader lesson is that AI tools have shifted the economics of cybercrime. Attacks that once required skilled developers can now be assembled by operators who understand targeting and social engineering but lack technical depth. Defenses need to account for this expanded threat surface.
Frequently Asked Questions
How did North Korean hackers use AI to steal cryptocurrency?
The group used ChatGPT, Cursor, and other AI tools to write malware, build fake company websites, and set up attack infrastructure. They targeted crypto developers with fake job offers that included malware-infected coding assignments.
How much did the North Korean hackers steal using AI tools?
The group stole $12 million in cryptocurrency over three months by compromising more than 2,000 computers.
Can AI tools be used to create malware?
Yes. While AI providers prohibit malicious use, the HexagonalRodent case shows that unskilled operators can use consumer AI tools to generate functional malware and attack infrastructure.
Who discovered the North Korean AI hacking campaign?
Marcus Hutchins, a security researcher at Expel who previously stopped the WannaCry ransomware attack, discovered the HexagonalRodent campaign.
What industries were targeted in this AI-assisted attack?
The attackers specifically targeted developers working on small cryptocurrency launches, NFT creation, and Web3 projects.
Source: Andy Greenberg, Artificial Intelligence Latest feed
Huma Shazia
Senior AI & Tech Writer