April 29, 2026
tech ethics

OpenAI's Safety Team Flagged the Tumbler Ridge Shooter. Leadership Vetoed the Warning.

BBC News

What happened

Seven families of victims of the February 2026 mass shooting in Tumbler Ridge, British Columbia, which killed eight people, six of them children, filed lawsuits in California on April 29 against OpenAI and CEO Sam Altman. The shooter, 18-year-old Jessie Van Rootselaar, had held ChatGPT conversations describing gun-violence scenarios; a 12-person OpenAI safety team flagged them and recommended alerting the Royal Canadian Mounted Police. According to the lawsuits, OpenAI's executive leadership vetoed that recommendation to protect the company's $850 billion valuation. After the first account was flagged, the suspect opened a new account and continued planning the attack.

OpenAI's safety team did its job. OpenAI's leadership didn't. The company will now spend more defending that decision in court than it would have cost to make a phone call to Canadian police.

The Hidden Bet

1

The legal theory here is novel and uncertain

The lawsuits use a well-established tort: negligence plus actual knowledge. OpenAI's own safety team flagged the threat and recommended reporting it. If the lawsuits' factual claims are accurate -- that leadership knew and vetoed -- this is not a hard case for a jury. The interesting question is not whether OpenAI is liable, but how much.

2

This is primarily about ChatGPT's outputs being dangerous

The more damning allegation is not what ChatGPT said to the shooter, but what OpenAI's internal team concluded and what leadership chose to do with that conclusion. The lawsuits are about institutional decision-making, not AI outputs. That makes them harder to defend.

3

OpenAI's policy changes since the shooting are sufficient mitigation

Sam Altman's public apology and policy announcements came only once the lawsuits were imminent. Lawyers for the families will argue the post-hoc response is evidence that OpenAI knew the prior policy was indefensible, not evidence that the company took responsibility.

The Real Disagreement

The genuine tension here is between a coherent duty and a coherent fear. The duty: AI companies that observe credible threats of violence have an obligation to report them. The fear: mandatory reporting would require AI companies to surveil every user conversation, building the infrastructure of a surveillance state while flagging thousands of false positives for every real threat. Both are real. The court will probably rule on the specific facts, not the general principle, which means OpenAI will likely lose the individual case while the broader obligation remains legally undefined. That is the worst possible outcome for everyone except lawyers.

What No One Is Saying

OpenAI claims it 'revokes access' from banned users and takes steps to prevent them from opening new accounts. But the lawsuit alleges the suspect created a new account under the same name after being banned. If that is true, OpenAI's account verification is so weak that its bans are theatrical. That detail deserves more attention than it is getting.
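
If the allegation holds up, it is worth spelling out how low the bar is. Below is a minimal sketch in Python of what a non-theatrical ban check at signup could look like; it is a hypothetical illustration, not a description of OpenAI's actual system, and every field, record, and threshold in it is an assumption.

from dataclasses import dataclass

@dataclass(frozen=True)
class Signup:
    # Illustrative identity signals only; a real system would use many more.
    full_name: str
    email: str
    device_fingerprint: str

# Hypothetical record of a previously banned account.
BANNED = [
    Signup("Jessie Example", "flagged@example.com", "device-123"),
]

def shared_signals(new: Signup, banned: Signup) -> int:
    """Count the identity signals a new signup shares with a banned account."""
    return sum([
        new.full_name.casefold() == banned.full_name.casefold(),
        new.email.casefold() == banned.email.casefold(),
        new.device_fingerprint == banned.device_fingerprint,
    ])

def needs_review(new: Signup) -> bool:
    # One matching signal is weak evidence, but two independent matches
    # should at minimum route the signup to human review instead of
    # letting account creation silently succeed.
    return any(shared_signals(new, b) >= 2 for b in BANNED)

# A signup reusing the banned account's name and device is held for review.
print(needs_review(Signup("jessie example", "new@example.com", "device-123")))

The point is not that a name match alone should block signups, since names collide constantly, but that a system doing any cross-referencing at all would at least have surfaced the reuse for review rather than letting a second account proceed unexamined.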

Who Pays

12-year-old Maya Gebala and her family (ongoing)

Shot three times, in the head, neck, and cheek; still hospitalized. Her lawyers are seeking over $1 billion in damages, but no damages can replace what was taken.

OpenAI employees who pushed to report the threat (over the next 2-3 years of litigation)

They did the right thing, were overruled by leadership, and will now be deposed while the company's legal team works to limit their testimony.

AI companies broadly (if and when a verdict is reached, likely 2028-2030)

If OpenAI loses, every AI company with user-monitoring capabilities faces potential liability whenever a flagged user later commits violence. The insurance and compliance costs of that obligation would be enormous.

Scenarios

Settlement Before Discovery

OpenAI settles with the families under NDA before internal documents and depositions become public. Altman issues another apology. Policy changes are announced. The internal veto decision is never publicly examined.

Signal: OpenAI making settlement overtures within 6-8 months; Altman's public statements becoming notably more cautious

Discovery Changes Everything

Lawsuits proceed to discovery. Internal communications, the identities of the executives who vetoed the safety team's recommendation, and the explicit language of that decision become public record. Congressional hearings follow. Legislation imposing mandatory-reporting obligations on AI companies advances.

Signal: Edelson's firm filing motions to compel production of internal Slack messages and executive communications about the Tumbler Ridge flag

Legal Theory Fails

Courts rule that AI companies do not have a duty to report threats observed in user conversations, citing speech concerns and privacy law. OpenAI wins. The decision creates a legal shield for the industry. Congress responds with legislation.

Signal: A federal judge granting a motion to dismiss on the duty question before trial

What Would Change This

If a court ruled that monitoring user activity for safety purposes creates a legal duty to report, and that OpenAI's monitoring system meets that standard, the bottom line would become even sharper. The case would move from 'they probably should have' to 'they were legally required to.' The company has not addressed whether its monitoring is pervasive enough to trigger such a duty.
