
North Korean Hackers Stole $12M Using AI Coding Tools

Huma Shazia · 22 April 2026 at 11:58 pm · 5 min read

Key Takeaways

  • Unskilled North Korean hackers used AI tools from OpenAI, Cursor, and Anima to build an entire malware campaign
  • The group stole $12 million in cryptocurrency from over 2,000 victims in three months
  • AI tools are lowering the barrier for cybercrime, enabling attacks that would otherwise require skilled developers

AI Tools Turn Amateur Hackers Into Effective Criminals

A group of North Korean hackers with limited coding skills managed to steal $12 million in cryptocurrency over three months. Their secret weapon: AI tools built by American companies.

Cybersecurity firm Expel revealed Wednesday that a state-sponsored group it calls HexagonalRodent used AI tools from OpenAI, Cursor, and Anima to build nearly every component of their attack. The hackers 'vibe coded' their malware, phishing websites, and fake company infrastructure. They installed credential-stealing malware on more than 2,000 computers.

$12 million
Amount stolen by AI-assisted North Korean hackers in just three months, targeting crypto developers

The discovery came from Marcus Hutchins, the security researcher who stopped the 2017 WannaCry ransomware attack, which was also the work of North Korean hackers. Hutchins now works at Expel.

These operators don't have the skills to write code. They don't have the skills to set up infrastructure. AI is actually enabling them to do things that they otherwise just would not be able to do.

— Marcus Hutchins, security researcher at Expel

How the Attack Worked

HexagonalRodent targeted developers working on small cryptocurrency launches, NFT creation, and Web3 projects. The group created fake tech companies with convincing websites, built using AI web design tools.

Victims received fraudulent job offers from these fake companies. As part of the interview process, they were asked to download and complete a coding assignment. The assignment was infected with malware that stole credentials, including keys to cryptocurrency wallets.

The social engineering was effective. The technical execution was sloppy. The hackers left parts of their infrastructure unsecured, exposing the AI prompts they used to generate their malware. This gave Expel a clear view into how much the operation relied on ChatGPT and Cursor.

The hackers exposed prompts showing how they used AI tools to generate malware code

The Democratization of Cybercrime

Security researchers have long worried about AI tools automating vulnerability discovery, creating a future where anyone could find exploits in any software. That dystopia has not arrived. What has arrived is simpler and more immediate: AI tools are making bad hackers good enough.

HexagonalRodent did not discover zero-day vulnerabilities. They did not create sophisticated, hard-to-detect malware. They built functional attack infrastructure despite lacking the skills to do so manually. That was enough to steal $12 million.

The implications extend beyond state-sponsored groups. If North Korean operators with limited technical skills can run effective campaigns using consumer AI tools, so can anyone else with motivation and a target list.

US AI Companies in the Crosshairs

The attack highlights a policy tension. OpenAI, Cursor, and Anima are US-based companies. Their tools were used by a sanctioned foreign government to steal from American citizens and companies working in the crypto space.

AI providers have content policies prohibiting malicious use. Enforcement is reactive. By the time HexagonalRodent's prompts were discovered, the $12 million was already gone.


What Organizations Can Do

The attack vector is old: fake job offers with malicious attachments. The execution is new. Organizations working in crypto and Web3 should treat unsolicited job offers with heightened suspicion, especially when they involve downloading code.

  • Verify companies independently before engaging with recruiters
  • Run any code assignments in isolated virtual environments
  • Treat credential theft as a given and implement hardware security keys
  • Monitor for unauthorized access to cryptocurrency wallet keys
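For the second item on that list, isolation can be as simple as a locked-down container. The sketch below is a hypothetical example, not guidance from the article: it assembles a Docker command that runs an untrusted assignment with no network access (so credential-stealing code cannot phone home), a read-only filesystem, and all Linux capabilities dropped. The image name, directory layout, and entry point (`python:3.12-slim`, `./assignment`, `main.py`) are illustrative assumptions; adapt them to the assignment you actually received.

```shell
#!/bin/sh
# Hypothetical sandbox sketch for reviewing an untrusted coding assignment.
# Assumes Docker is installed; the image, mount path, and entry point below
# are placeholders for illustration only.
SANDBOX_CMD='docker run --rm \
  --network none \
  --read-only \
  --cap-drop ALL \
  -v "$PWD/assignment:/work:ro" \
  -w /work \
  python:3.12-slim \
  python main.py'

# Print the command for inspection before running it.
echo "$SANDBOX_CMD"
```

The key flags are `--network none`, which blocks exfiltration even if the code does steal something from inside the container, and the read-only mounts, which keep the assignment from modifying anything it can see. Nothing on the host, including wallet keys, is mounted into the container at all.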

The broader lesson is that AI tools have shifted the economics of cybercrime. Attacks that once required skilled developers can now be assembled by operators who understand targeting and social engineering but lack technical depth. Defenses need to account for this expanded threat surface.


Frequently Asked Questions

How did North Korean hackers use AI to steal cryptocurrency?

The group used ChatGPT, Cursor, and other AI tools to write malware, build fake company websites, and set up attack infrastructure. They targeted crypto developers with fake job offers that included malware-infected coding assignments.

How much did the North Korean hackers steal using AI tools?

The group stole $12 million in cryptocurrency over three months by compromising more than 2,000 computers.

Can AI tools be used to create malware?

Yes. While AI providers prohibit malicious use, the HexagonalRodent case shows that unskilled operators can use consumer AI tools to generate functional malware and attack infrastructure.

Who discovered the North Korean AI hacking campaign?

Marcus Hutchins, a security researcher at Expel who previously stopped the WannaCry ransomware attack, discovered the HexagonalRodent campaign.

What industries were targeted in this AI-assisted attack?

The attackers specifically targeted developers working on small cryptocurrency launches, NFT creation, and Web3 projects.


Source: Feed: Artificial Intelligence Latest / Andy Greenberg


Huma Shazia

Senior AI & Tech Writer
