
Trump Admin Weighs Mandatory AI Model Vetting Before Release

Manaal Khan · 5 May 2026 at 3:33 am · 5 min read

Key Takeaways

  • The proposed executive order would create an AI working group of tech executives and government officials to develop oversight procedures
  • Anthropic's Mythos model, described as too dangerous for public release, appears to have triggered the policy reversal
  • The system would grant government early access to models without blocking their release

A Sharp Reversal on AI Regulation

The Trump administration is considering an executive order that would establish government review of new AI models before they reach the public, according to a New York Times report citing unnamed U.S. officials. The proposed order would create an "AI working group" composed of tech executives and government officials to develop oversight procedures.

White House staff briefed leaders from Anthropic, Google, and OpenAI on the plans last week. If enacted, the policy would mark a dramatic shift from the administration's earlier position. On his first day back in office, Trump revoked a Biden-era executive order that addressed AI risks.

The timing isn't coincidental. David Sacks, who led the administration's AI deregulation push as AI czar, left the role in March. Since then, White House Chief of Staff Susie Wiles and Treasury Secretary Scott Bessent have taken more active roles in shaping AI policy.

How the Review System Would Work

The proposed approach resembles the UK's AI Security Institute model, where government bodies evaluate frontier models against safety benchmarks before and after deployment. According to officials who spoke with The New York Times, three agencies could oversee the review process: the NSA, the Office of the National Cyber Director, and the Office of the Director of National Intelligence.

One important detail: the system would grant the government early access to models without blocking their release. This isn't a gatekeeper model where companies need approval before launch. It's a preview system that gives federal agencies time to assess risks.


Anthropic's Mythos: The Catalyst

The catalyst for this policy reversal appears to be Anthropic's Mythos model. The company's marketing described Mythos as capable of finding thousands of critical software vulnerabilities, and Anthropic deemed it too dangerous for public release. That claim drew immediate government attention.

The Mythos situation comes at a tense moment in Anthropic's relationship with the federal government. The Pentagon designated Anthropic a supply chain risk after the company refused to remove guardrails on autonomous weapons and mass surveillance applications. A federal judge later called that designation "Orwellian."

The dispute stems from a collapsed $200 million Pentagon contract. Despite that conflict, the NSA has already used Mythos to assess vulnerabilities in government Microsoft software deployments. Other agencies remain cut off from Anthropic's tools.

Microsoft data center in Mount Pleasant, Wisconsin. The NSA has used Anthropic's Mythos to assess vulnerabilities in government Microsoft software.

The Bigger Picture for AI Companies

For OpenAI, Google, and Anthropic, pre-release government review creates a new compliance layer. Companies would need processes for sharing unreleased models with federal reviewers. They'd need legal frameworks for handling government feedback. And they'd need to decide how to respond if an agency raises objections.

The proposal also raises questions about smaller AI labs. Would the review requirement apply to all models, or only those above certain capability thresholds? The UK's approach focuses on "frontier" models, those at the cutting edge of capability. A similar scope limitation would spare most AI companies while targeting the handful building the most powerful systems.

Some analysts have questioned whether Mythos's capabilities match Anthropic's marketing claims. That skepticism matters here. If a company's self-assessment of danger triggers mandatory review, companies have an incentive to downplay capabilities to avoid scrutiny; if danger claims double as marketing, they have an incentive to exaggerate. Either way, the government would need independent evaluation methods, not just company-provided documentation.

What Happens Next

The discussions are ongoing, and no executive order has been signed. The AI working group concept suggests the administration wants industry input before finalizing requirements. But the direction is clear: a government that six months ago revoked AI safety rules is now building a pre-release review system.

The shift reflects a broader pattern. Frontier AI capabilities are advancing faster than anyone predicted. Models that can find software vulnerabilities, generate persuasive content, or assist with scientific research also create risks that cross into national security territory. The question was never whether government oversight would come, but what form it would take.

Frequently Asked Questions

Would the government be able to block AI model releases?

No. According to the current discussions, the system would grant government early access to models without blocking their release. This is a preview system, not an approval process.

Which agencies would review AI models?

Officials told The New York Times that the NSA, the Office of the National Cyber Director, and the Office of the Director of National Intelligence could oversee the review process.

Why did the Trump administration change its position on AI regulation?

The shift appears linked to Anthropic's Mythos model, which the company described as capable of finding thousands of software vulnerabilities and too dangerous for public release. Leadership changes also played a role, with AI czar David Sacks departing in March.

What is Anthropic's Mythos model?

Mythos is an AI model that Anthropic described as capable of finding thousands of critical software vulnerabilities. The company deemed it too dangerous for public release. The NSA has already used it to assess vulnerabilities in government Microsoft software deployments.

How does this compare to AI regulation in other countries?

The proposed approach resembles the UK's AI Security Institute model, where government bodies evaluate frontier models against safety benchmarks before and after deployment.


Source: Tom's Hardware


Manaal Khan

Tech & Innovation Writer
