Someone Just Firebombed Sam Altman's House—And Then Tried to Burn Down OpenAI HQ

In a shocking escalation of anti-AI sentiment, a 20-year-old man was arrested after allegedly throwing a Molotov cocktail at OpenAI CEO Sam Altman's $65 million San Francisco mansion, then traveling across the city and threatening to torch the company's headquarters. The attack comes amid mounting criticism of Altman's leadership, following a devastating New Yorker investigation that painted him as a manipulative leader.
Key Takeaways
- A suspect threw an incendiary device at Altman's Russian Hill home around 3:40 AM, setting the exterior gate ablaze before security extinguished it
- The same individual allegedly traveled to OpenAI headquarters and threatened to burn the building down, leading to his arrest
- The attack follows a scathing New Yorker investigation alleging a pattern of deception by Altman over two decades
- Daniel Alejandro Moreno-Gama, 20, faces charges including attempted murder and arson
- Both the FBI and SFPD are investigating the incident as tensions around AI leadership intensify
In This Article
- A Pre-Dawn Attack That Shocked Silicon Valley
- From Firebombing to Threatening OpenAI's Offices
- OpenAI Breaks Silence as Altman Makes Rare Personal Appeal
- The Perfect Storm: Why Tensions Are Running High
- The Safety Debate That Won't Go Away
- A Watershed Moment for AI Industry Security?
A Pre-Dawn Attack That Shocked Silicon Valley
It was still dark in San Francisco's upscale Russian Hill neighborhood when things went terribly wrong. Around 3:40 AM on Friday, April 10th, someone approached the metal gate of a 5,400-square-foot mansion on Chestnut Street—a property that OpenAI's chief executive had purchased for a reported $65 million in early 2025—and hurled a homemade firebomb at it.
- The improvised incendiary device ignited the exterior gate, but security personnel on the property acted quickly to extinguish the flames before they could spread to the main structure
- No one inside the home was injured during the incident, though the psychological impact on Altman's family—including his husband and young son—remains unclear
- The suspect fled on foot after the attack, but his night of alleged destruction was far from over
View our latest statement regarding an incident that occurred early this morning at a North Beach residence. Officers have made an arrest, and no injuries were reported as a result of this incident.
— San Francisco Police (@SFPD) April 10, 2026
From Firebombing to Threatening OpenAI's Offices
Here's where this story takes an even stranger turn. Less than an hour after the attack on Altman's residence, San Francisco police received another alarming call—this time from OpenAI's headquarters on Third Street in the Mission Bay district. Someone was outside threatening to burn the building down.
- When officers arrived at the scene, they quickly realized they were dealing with the same individual captured on surveillance footage at Altman's home earlier that morning
- The suspect, identified as Daniel Alejandro Moreno-Gama, was immediately detained and later booked on charges including attempted murder, arson, and possession of an incendiary device
- SFPD's Special Investigations and Arson Units are leading the case, while the FBI has confirmed it's monitoring the situation and coordinating with local authorities
OpenAI Breaks Silence as Altman Makes Rare Personal Appeal
Both OpenAI and its CEO moved quickly to address the incident publicly. In a statement provided to reporters, the company expressed gratitude for law enforcement's swift action while confirming the basic details of what had transpired.
- OpenAI praised the SFPD's rapid response and emphasized that no employees were harmed during the threatening incident at their headquarters
- In an unusually personal move, Altman published a blog post featuring a photograph of his family, writing that he hoped sharing such a private moment might make future attackers think twice
- The San Francisco District Attorney's Office indicated that decisions about whether to pursue the case at the local or federal level could come as early as next week
The Perfect Storm: Why Tensions Are Running High
This attack didn't happen in a vacuum. It comes at a moment when criticism of both Altman personally and AI technology broadly has reached a fever pitch. Understanding this context is crucial to grasping why someone might feel driven to such extreme action—even if that action remains completely unjustifiable.
- A recent investigation by The New Yorker painted a damning portrait of Altman's leadership style, drawing on over 100 interviews and internal documents to allege a two-decade pattern of manipulation
- The report cited internal memos compiled by former co-founder Ilya Sutskever, documents that reportedly describe systematic deception within OpenAI's leadership
- Former head of research Dario Amodei, who left to found competing AI company Anthropic, was quoted saying that the core problem at OpenAI is Altman himself
- Public anxiety about AI's impact on employment has grown substantially, fueled partly by Altman's own frank warnings about how the technology could disrupt labor markets
The Safety Debate That Won't Go Away
Beyond personality clashes and leadership allegations, there's a deeper current of concern running through the AI community: whether OpenAI is moving too fast and prioritizing flashy products over genuine safety research.
- Former team lead Jan Leike departed the company with a pointed message that safety culture had taken a backseat to commercial considerations
- Reports indicate that OpenAI's superalignment team received only one to two percent of the computing resources it was promised, far below the twenty percent initially committed
- Elon Musk, an original OpenAI co-founder who has become one of its fiercest critics, recently amplified claims that Altman cannot be trusted with powerful AI systems
- As AI tools become more embedded in daily life—from customer service to creative industries—public backlash against the technology's encroachment has intensified
A Watershed Moment for AI Industry Security?
This incident may force tech leaders to reckon with an uncomfortable reality: as AI reshapes society, the people building these systems could become targets for those who feel threatened by the changes underway.
- The attack raises serious questions about security protocols for high-profile executives in an industry that has historically prided itself on accessibility
- Prosecutors must decide whether to treat this as a local crime or escalate to federal charges given the apparent targeting of critical technology infrastructure
- The incident could accelerate calls for better protection of AI company facilities and personnel as the technology becomes increasingly contentious
- It also underscores the urgent need for more constructive channels through which concerns about AI development can be voiced and addressed
“Early this morning, someone threw a Molotov cocktail at Sam Altman's home and also made threats at our San Francisco headquarters. Thankfully, no one was hurt.”
— OpenAI spokesperson, statement to Wired
“We deeply appreciate how quickly SFPD responded and the support from the city in helping keep our employees safe. The individual is in custody, and we're assisting law enforcement with their investigation.”
— OpenAI spokesperson
“Here is a photo of my family. I love them more than anything. Normally we try to be pretty private, but in this case I am sharing a photo in the hopes that it might dissuade the next person from throwing a Molotov cocktail at our house, no matter what they think about me.”
— Sam Altman, personal blog post
“Safety culture and processes have taken a backseat to shiny products.”
— Jan Leike, former OpenAI superalignment team lead
Final Thoughts
Whatever one thinks about Sam Altman, OpenAI, or the trajectory of artificial intelligence, violence is never the answer. This incident serves as a stark reminder that the debates surrounding AI are no longer abstract academic discussions—they're touching raw nerves in ways that demand serious attention. As the investigation continues and prosecutors weigh their options, the tech industry must grapple with how to maintain the openness that has defined Silicon Valley while protecting the people working to shape our technological future. The conversation about AI's role in society needs to happen, but it needs to happen through dialogue, not destruction.
Sources & Credits
Originally reported by Engadget, a web magazine with obsessive daily coverage of everything new in gadgets and consumer electronics.
Huma Shazia
Senior AI & Tech Writer
