OpenAI Ignored Safety Team Warnings Before Canada Shooting

Key Takeaways

- OpenAI's safety team flagged a ChatGPT account as a credible gun violence threat eight months before a mass shooting
- Leadership overruled the recommendation to notify police, citing user privacy concerns
- Seven families have filed lawsuits in California alleging OpenAI hid violent users to protect its IPO timeline

OpenAI's internal safety team identified a ChatGPT user as a credible gun violence threat more than eight months before that user carried out one of Canada's deadliest school shootings. Company leadership overruled the team's recommendation to alert police. Seven families are now suing.
The lawsuits, filed Wednesday in a California court, allege that OpenAI chose to protect user privacy rather than notify law enforcement about the flagged account. Police already had a file on the individual and had previously removed guns from their home.
What the Safety Team Found
According to whistleblowers who spoke to The Wall Street Journal, OpenAI's trained safety experts flagged the ChatGPT account as posing a credible threat of real-world gun violence. The company's established protocol in such cases calls for notifying police.
That notification never happened. OpenAI leadership decided that the user's privacy and the potential stress of a police encounter outweighed the violence risk, the whistleblowers said.
Instead of alerting authorities, OpenAI simply deactivated the account. The company then followed up with instructions on how to create a new account using a different email address, the lawsuits allege. The shooter continued planning.
“I am deeply sorry that we did not alert law enforcement to the account that was banned in June. While I know words can never be enough, I believe an apology is necessary to recognize the harm and irreversible loss your community has suffered.”
— Sam Altman, OpenAI CEO, in a public apology to Tumbler Ridge
The Lawsuits
Attorney Jay Edelson leads a cross-border legal team representing families from Tumbler Ridge, a rural mining town of about 2,000 people. Six of the lawsuits come from families of victims killed in the shooting. The seventh is from a mother whose daughter remains in intensive care.
All cases are being filed in California rather than Canada. Edelson told Ars Technica that the families want OpenAI held accountable on its home turf by a jury of its peers. The California filing supersedes an earlier Canadian lawsuit, where OpenAI was expected to contest jurisdiction.
Edelson called Altman's apology "ridiculous," saying it came too late and promised too little. He suggested OpenAI's legal strategy aims to delay litigation over ChatGPT-linked deaths until after the company's planned IPO this year.

The Company's Response
Altman has acknowledged that not alerting law enforcement was a mistake while maintaining that the account was "banned." In his apology to the Tumbler Ridge community, he promised that OpenAI will "find ways to prevent tragedies like this in the future" and continue "working with all levels of government to help ensure something like this never happens again."
The families and their attorneys dispute that characterization. According to the lawsuits, OpenAI has been hiding violent ChatGPT users for months to protect Altman from public scrutiny ahead of the company's IPO.
What Happens Next
These seven lawsuits are "the first of many to come from the small town," according to Edelson. The litigation will test how courts evaluate AI companies' responsibility when their safety teams identify credible threats and leadership chooses not to act.
The cases raise questions about the protocols AI companies follow when users exhibit dangerous behavior, the liability they face when internal warnings go unheeded, and the tension between user privacy and public safety.
Frequently Asked Questions
When did OpenAI's safety team flag the shooter's account?
More than eight months before the shooting, in June. The account was deactivated but law enforcement was not notified.
Why didn't OpenAI notify police about the flagged account?
According to whistleblowers, leadership decided that user privacy and the potential stress of a police encounter outweighed the violence risk.
Where are the lawsuits being filed?
All seven lawsuits are being filed in California, OpenAI's home state, rather than in Canada where the shooting occurred.
Has Sam Altman apologized for the company's handling of the case?
Yes. Altman issued a public apology to the Tumbler Ridge community acknowledging that not alerting law enforcement was a mistake.
How many families are suing OpenAI?
Seven families have filed lawsuits so far. Six represent victims killed in the shooting; one represents a victim still in intensive care.
Source: Ars Technica
Manaal Khan
Tech & Innovation Writer