AI Builds Working Exploits in 30 Minutes, Killing 90-Day Patch Window

Huma Shazia · 11 May 2026 at 7:48 pm · 5 min read
Key Takeaways

  • AI tools can reverse-engineer security patches into working exploits in 30 minutes
  • One vulnerability was reported by 11 different researchers in six weeks, suggesting AI-driven parallel discovery
  • The traditional 90-day disclosure window rests on four assumptions that AI has invalidated

The security industry's 90-day vulnerability disclosure window is based on a simple premise: give vendors time to fix bugs before attackers find them. A veteran researcher says that premise is dead.

Himanshu Anand, a Firewall Security Analyst at Cloudflare and former Symantec engineer, published a detailed analysis showing how AI language models have broken every assumption behind coordinated disclosure. His team, Water Paddlers, was a three-time consecutive finalist at the DEF CON hacking competition.

The problem isn't theoretical. Anand walked through three real-world cases where AI tools collapsed timelines that used to protect defenders.

The Four Assumptions That No Longer Hold

Google's Project Zero popularized the 90-day disclosure window. The model rests on four assumptions that Anand says AI has invalidated.

  1. The person who found the bug is probably the only one who spotted it.
  2. Even if other researchers discover the same flaw, they will take their own time to do so.
  3. The vendor has a comfortable head start on writing the patch.
  4. After a patch ships, attackers still need days or weeks to reverse-engineer a working exploit.

Each assumption gave defenders breathing room. AI has compressed that room to nearly zero.

Eleven Reporters, One Bug, Six Weeks

In April, Anand reported a critical flaw in an online store that let anyone complete purchases for zero dollars. The vendor's response was unexpected: he was the eleventh person to report the same bug in six weeks.

A triage staffer described the pattern to Anand: once someone discovers a flaw using an AI tool, waves of nearly identical reports roll in within days.

Anand's question cuts to the heart of the problem: if ten honest researchers find the same flaw, how many find it and stay quiet?

That one example kills the first two assumptions. Vulnerabilities are not exclusive discoveries anymore. Parallel finders do not need extra time when they are all using the same AI tools.

30 minutes
Time for Anand to build a working exploit from a React security patch using an AI language model

From Patch to Exploit in 30 Minutes

Anand's second example is more alarming. React, the widely used JavaScript UI library, released several security patches. Anand downloaded the source code diff and used a language model to help him build a working exploit.

It took 30 minutes.

Experienced reverse engineers used to need days for the same task. Sometimes weeks. That gap between patch release and exploit availability was supposed to give system administrators time to update their systems.

That gap no longer exists.


What Anand Recommends

Anand does not claim to have all the answers, but he offers three recommendations for different players in the security ecosystem.

  • Vendors should treat critical bugs as immediate emergencies, not 90-day projects.
  • Security researchers should shorten disclosure timelines to match the new reality.
  • System administrators should deploy patches instantly, not on scheduled maintenance windows.
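The last recommendation amounts to automating the decision of what to patch now instead of waiting for a maintenance window. A minimal sketch of that triage logic, using made-up package data rather than anything from the article:

```python
# Hypothetical sketch: flag dependencies whose installed version lags the
# latest patched release, so administrators can update them immediately.
# Package names and version numbers below are illustrative, not real audit data.

def parse_version(v):
    """Split a semver-style string like '18.2.0' into a comparable tuple."""
    return tuple(int(part) for part in v.split("."))

def needs_urgent_update(installed, latest_patched):
    """True when the installed version is older than the latest patched one."""
    return parse_version(installed) < parse_version(latest_patched)

# Illustrative inventory: (package, installed version, latest patched release)
inventory = [
    ("react", "18.2.0", "18.3.1"),
    ("left-pad", "1.3.0", "1.3.0"),
]

urgent = [pkg for pkg, inst, latest in inventory
          if needs_urgent_update(inst, latest)]
print(urgent)  # → ['react']
```

In a real deployment this list would come from a tool such as `npm audit` or an inventory database; the point is that the check runs continuously, not on a schedule.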

The common thread: speed. Every actor in the chain needs to move faster because attackers now can.

The Broader Implications

The 90-day disclosure window was a negotiated truce between security researchers and vendors. Researchers agreed to give vendors time to patch. Vendors agreed to actually patch instead of ignoring reports.

That truce assumed a world where time was on the defender's side. AI has shifted that balance. When multiple researchers find the same bug within weeks, and when patches can be weaponized in minutes, the 90-day window becomes a liability.

Anand is not calling for immediate public disclosure of all vulnerabilities. He is arguing that the industry needs to rethink timelines that were designed for a pre-AI world.

If ten honest researchers find the same flaw, how many find it and stay quiet?

— Himanshu Anand, Cloudflare Firewall Security Analyst

The question answers itself. In a world where AI tools democratize vulnerability discovery, the old assumption that bugs stay hidden until formal disclosure is wishful thinking.


Frequently Asked Questions

Why is the 90-day vulnerability disclosure window under threat?

AI tools allow multiple researchers to find the same vulnerabilities almost simultaneously and enable attackers to reverse-engineer patches into exploits in minutes instead of days.

How fast can AI create a working exploit from a security patch?

In one documented case, security researcher Himanshu Anand built a working exploit from a React security patch in 30 minutes using an AI language model.

What should companies do to protect against AI-accelerated exploits?

Vendors should treat critical bugs as emergencies, security researchers should shorten disclosure timelines, and system administrators should deploy patches immediately rather than waiting for scheduled maintenance windows.

Who is Himanshu Anand?

Anand is a Firewall Security Analyst at Cloudflare, former Symantec engineer, and member of Water Paddlers, a team that was a three-time consecutive finalist at the DEF CON hacking competition.


Source: The Decoder / Maximilian Schreiner


Huma Shazia

Senior AI & Tech Writer