Google: Hackers Used AI to Build First Zero-Day Exploit

Key Takeaways

- Google identified the first known zero-day exploit likely developed using AI assistance
- The exploit targeted 2FA protection in an unnamed open-source web administration tool
- Chinese, North Korean, and Russian threat actors are increasingly using AI for vulnerability discovery
What Google Found
Researchers at Google Threat Intelligence Group (GTIG) have identified what they call the first zero-day exploit developed with AI assistance. The exploit targeted two-factor authentication in a popular open-source web administration tool. Google has not named the affected software.
The attack was stopped before reaching mass exploitation. But the finding confirms what security researchers have long worried about: threat actors are now using AI to find and weaponize vulnerabilities.
“For the first time, GTIG has identified a threat actor using a zero-day exploit that we believe was developed with AI.”
— Google Threat Intelligence Group
How Google Traced the AI Connection
Google's confidence in the AI attribution comes from analyzing the exploit's Python code. The researchers found several telltale signs of LLM-generated output.
The script contained an unusual number of educational docstrings, including a hallucinated CVSS score. It followed a textbook Pythonic format that matches patterns common in LLM training data.
“The script contains an abundance of educational docstrings, including a hallucinated CVSS score, and uses a structured, textbook Pythonic format highly characteristic of LLM training data.”
— GTIG report
The vulnerability itself also pointed to AI involvement. It was a high-level semantic logic bug, the type of flaw AI systems excel at identifying. Traditional discovery methods such as fuzzing or static analysis tend to surface memory corruption or input sanitization issues instead.
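Google has not published the exploit, so the snippet below is purely a hypothetical illustration of the stylistic markers GTIG describes: tutorial-style docstrings that over-explain trivial code, and a fabricated CVSS score cited for a vulnerability that does not exist. The function and its behavior are invented for this example.

```python
def check_session_token(token: str) -> bool:
    """
    Validate a session token.

    Note the tell-tale LLM style: a verbose, educational docstring that
    restates obvious behavior, a full Args/Returns section for a one-line
    function, and a CVSS score that corresponds to no real advisory
    (a "hallucinated" detail of the kind GTIG flagged).

    CVSS Score: 9.8 (Critical)  # fabricated -- no such advisory exists

    Args:
        token: The session token string to validate.

    Returns:
        bool: True if the token is non-empty and strictly alphanumeric.
    """
    return bool(token) and token.isalnum()
```

Individually, none of these traits prove machine authorship; GTIG's attribution rests on the combination of them across the whole script.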

Google ruled out Gemini as the model used. The specific LLM remains unknown.
State-Backed Groups Are Already Using AI
This case is not isolated. Google's report documents broader AI adoption among nation-state hackers.
Chinese groups APT27 and UNC5673, along with North Korean groups APT45, UNC2814, and UNC6201, have been using AI models for vulnerability discovery and exploit development. This continues a trend Google first documented in February.
Russian actors have taken a different approach. They use AI-generated decoy code to obfuscate malware strains called CANFAIL and LONGSTREAM. The AI writes plausible-looking code comments that disguise the malware's true purpose.

AI Voice Cloning and Autonomous Malware
Google also highlighted a Russian operation codenamed Overload. Social engineering actors used AI voice cloning to impersonate real journalists in fake videos promoting anti-Ukraine narratives.
On mobile, the PromptSpy Android backdoor integrates with Gemini APIs for autonomous device interaction. ESET documented this malware earlier in 2026.
Google found an autonomous agent module called GeminiAutomationAgent within the malware. It uses a hardcoded prompt to assign a benign persona. This lets the malware bypass the LLM's safety features and interact with infected devices automatically.
What This Means for Defenders
AI-assisted exploit development changes the economics of attacks. Finding zero-days has traditionally required deep expertise and significant time. AI lowers both barriers.
Semantic logic bugs are particularly concerning. These flaws hide in legitimate business logic rather than obvious code errors. They are hard to catch with automated scanning tools but straightforward for an AI that understands context.
Google notified the unnamed software developer in time to prevent mass exploitation. The incident shows that disclosure coordination remains critical. Vendors need to respond fast when AI can compress the timeline from vulnerability discovery to working exploit.
Frequently Asked Questions
Which AI model was used to create the zero-day exploit?
Google has not identified the specific AI model. The researchers ruled out Gemini, but the actual LLM used remains unknown.
What software was targeted by the AI-generated exploit?
Google has not named the affected software. It is described as a popular open-source web-based system administration tool.
Was anyone harmed by this attack?
No. Google says the attack was stopped before reaching mass exploitation. The software developer was notified in time to take action.
How did Google know the exploit was AI-generated?
The Python code contained educational docstrings, a hallucinated CVSS score, and textbook formatting patterns typical of LLM output. The vulnerability type was also characteristic of AI-discovered flaws.
Are other nation-states using AI for hacking?
Yes. Google's report documents Chinese groups APT27 and UNC5673, North Korean groups APT45, UNC2814, and UNC6201, and Russian actors all using AI for various attack phases.
Source: BleepingComputer
Huma Shazia
Senior AI & Tech Writer