
Judge Rules DOGE Used ChatGPT Illegally to Cancel $100M in Grants

Huma Shazia · 9 May 2026 at 12:13 am · 4 min read

Key Takeaways

  • A federal judge ruled DOGE's grant cancellations unconstitutional, restoring over $100 million in humanities funding
  • DOGE staffers used ChatGPT to scan grants for DEI-related terms without defining what DEI meant to the AI
  • Staffers created 'Detection Codes' targeting protected characteristics including race, national origin, and sexuality

A federal judge has ruled that the Department of Government Efficiency acted unconstitutionally when it used ChatGPT to identify and cancel more than $100 million in federal grants. The 143-page decision, issued Thursday by US District Judge Colleen McMahon, found that DOGE staff used the AI chatbot to target grants based on protected characteristics like race, national origin, religion, and sexuality.

The ruling restores funding for grants from the National Endowment for the Humanities that were shut down under the pretense of eliminating diversity, equity, and inclusion programs.

97% — Percentage of NEH grants eliminated by two DOGE staffers using ChatGPT and keyword searches

How DOGE Used ChatGPT to Flag Grants

The court filing details testimony from Justin Fox, a DOGE staffer who worked with colleague Nate Cavanaugh to eliminate 97 percent of grants under the NEH. Fox testified that he submitted each grant description to ChatGPT using a standardized prompt.

Does the following relate at all to DEI? Respond factually in less than 120 characters. Begin with 'Yes.' or 'No.' followed by a brief explanation.

— Justin Fox's ChatGPT prompt, as cited in court filing

Fox admitted under oath that he did not define "DEI" for ChatGPT. He also testified that he had no idea how the AI system understood the term. The chatbot was essentially making consequential funding decisions based on its own interpretation of a politically charged acronym.
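Taken at face value, the process Fox described amounts to a simple loop: wrap each grant description in the fixed prompt and branch on whether the reply begins with "Yes." A minimal sketch in Python, where `ask_chatgpt` is a hypothetical stand-in for the actual API call (the filing does not describe the implementation):

```python
# Sketch of the scan Fox described in testimony. `ask_chatgpt` is a
# hypothetical placeholder for the real chatbot call, stubbed out here.
PROMPT = (
    "Does the following relate at all to DEI? Respond factually in less "
    "than 120 characters. Begin with 'Yes.' or 'No.' followed by a brief "
    "explanation.\n\n{description}"
)

def ask_chatgpt(prompt: str) -> str:
    # Placeholder: a real implementation would call the chat API here.
    # Note that the model, not the caller, decides what "DEI" means.
    return "No. Placeholder response."

def flag_grant(description: str) -> bool:
    """Return True if the model's reply begins with 'Yes.'."""
    reply = ask_chatgpt(PROMPT.format(description=description))
    return reply.strip().startswith("Yes.")
```

The structural problem the court identified is visible in the sketch: "DEI" is never defined anywhere in the caller's code, so the entire classification rests on the model's own interpretation of the term.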

The 'Detection Codes' System

Beyond the ChatGPT prompts, Fox created what he called "Detection Codes" to identify grants he labeled "Craziest Grants" and "Other Bad Grants." These search terms targeted specific demographic groups.

  • BIPOC (Black, Indigenous, People of Color)
  • Minorities
  • Native, Tribal, Indigenous
  • Immigrant
  • LGBTQ, Homosexual, Gay

When asked whether he ran this list of words through every grant description from NEH, Fox confirmed "yes." Judge McMahon wrote that Fox "constructed and applied explicit classifications based on protected characteristics and used them as the operative criteria for revoking federal grants."
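As described, the "Detection Codes" step is just a case-insensitive keyword search over each grant description. The term list below is taken from the ruling; the matching logic is an assumed reconstruction, not the actual code DOGE ran:

```python
# Keyword scan in the spirit of the "Detection Codes" Fox described.
# The terms come from the court filing; the substring matching is an
# assumption about how such a scan would typically be implemented.
DETECTION_CODES = [
    "BIPOC", "Minorities", "Native", "Tribal", "Indigenous",
    "Immigrant", "LGBTQ", "Homosexual", "Gay",
]

def matched_codes(description: str) -> list[str]:
    """Return every detection code that appears in the description."""
    text = description.lower()
    return [term for term in DETECTION_CODES if term.lower() in text]
```

Beyond the constitutional problem of the terms themselves, naive substring matching of this kind also over-flags: for example, "alternative" contains "native", so even grants with no connection to the listed groups could be swept in.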

DOGE's grant review process relied heavily on AI scanning for DEI-related terms

The Constitutional Problem

Judge McMahon's ruling found that using protected characteristics as criteria for federal funding decisions violates constitutional equal protection principles. The court wrote that "it could not be more obvious that DOGE used the mere presence of particular, protected characteristics to disqualify grants from continued funding."

The lawsuit was originally filed in 2025 by humanities groups affected by the grant cancellations. Thursday's ruling represents a significant legal setback for DOGE's approach to cutting federal spending through AI-assisted reviews.

What This Means for AI in Government

The case highlights the risks of deploying AI tools for high-stakes government decisions without proper oversight or methodology. Fox's testimony reveals a process with no clear definitions, no human review of ChatGPT's reasoning, and no consideration of whether the AI's outputs were legally sound.

Using a consumer chatbot to make $100 million in funding decisions, while admitting under oath that no one knew how the AI interpreted key terms, is now documented in a federal court ruling as both ineffective and unconstitutional.



Frequently Asked Questions

What grants did DOGE cancel using ChatGPT?

DOGE canceled over $100 million in grants from the National Endowment for the Humanities, eliminating 97% of the agency's grants based on ChatGPT scans for DEI-related content.

Why was DOGE's use of ChatGPT ruled unconstitutional?

The court found that DOGE used protected characteristics like race, national origin, religion, and sexuality as criteria for canceling grants, which violates constitutional equal protection principles.

What were the 'Detection Codes' DOGE used?

DOGE staffer Justin Fox created search terms targeting demographic groups including BIPOC, Minorities, Native, Tribal, Indigenous, Immigrant, LGBTQ, Homosexual, and Gay to flag grants for elimination.

Can government agencies use AI for funding decisions?

This ruling suggests agencies must ensure AI tools don't make decisions based on protected characteristics. Using undefined prompts in consumer chatbots without human review creates significant legal risk.


Huma Shazia

Senior AI & Tech Writer

Related Articles

Tesla's Remote Parking Feature: The Investigation That Didn't Quite Park Itself
Trending Tech · 8 min

US auto safety regulators have closed their investigation into Tesla's remote parking feature, but what does this mean for the future of autonomous driving? We dive into the details of the investigation and what it reveals about the technology. The National Highway Traffic Safety Administration found that crashes were rare and minor, but the investigation's closure doesn't necessarily mean the feature is completely safe.