OpenAI didn't contact police despite employees flagging mass shooter's concerning chatbot interactions: report

Months before a mass school shooting in British Columbia, OpenAI employees raised an alarm about the shooter's violent chatbot prompts and debated calling police.

A new report from The Wall Street Journal revealed that employees at OpenAI, the artificial intelligence company known for creating ChatGPT, raised alarm about transgender Canadian mass shooter Jesse Van Rootselaar's interactions with its chatbot but did not alert authorities. 

Around a dozen employees reportedly were aware of the concerning interactions months before Van Rootselaar killed multiple family members and school-aged kids in Tumbler Ridge, British Columbia. 

The interactions, first flagged by an automated review system, included violent scenarios involving gun violence over the course of multiple days, people familiar with the matter indicated to The Wall Street Journal. 

OpenAI's policy is to alert law enforcement only if there is an imminent threat of real-world harm or violence. Some of the employees reportedly wanted to go to the police, but in the end the company opted not to contact authorities.


On Feb. 10, Van Rootselaar, 18, gunned down his mother and stepbrother at their home in British Columbia before heading to Tumbler Ridge Secondary School, where the teen shot and killed five students and a teacher before turning the gun on himself. Twenty-five others were reportedly injured.

Authorities later revealed Van Rootselaar, who had dropped out of the school he attacked, was a biological male who had been identifying as female since he was 6.

Police were aware of Van Rootselaar's mental health struggles and had reportedly visited the home on multiple occasions in the past over various incidents.


The teen killer was found to have had an obsession with death and was an avid poster on a website that hosts videos of people being murdered, according to the New York Post. Van Rootselaar's social media footprint included images of him with guns and content about hallucinogenic drugs. His mother had expressed alarm at his behavior in a Facebook parents group in 2015, the New York Post also reported.

An OpenAI spokesperson told Fox News Digital the company banned Van Rootselaar's account in June 2025 for violating its usage policies but determined the activity did not rise to a level that warranted alerting law enforcement. The spokesperson noted the company must weigh privacy concerns, adding that being too liberal with police referrals can create unintended harm.

OpenAI's chatbot model is designed to discourage real-world harm when it detects dangerous situations, Fox News Digital was told.

The company reached out to the Royal Canadian Mounted Police after the incident and is supporting the investigation with information on Van Rootselaar's chatbot activity, the spokesperson indicated. 

"Our thoughts are with everyone affected by the Tumbler Ridge tragedy," the company said in a statement. "We proactively reached out to the Royal Canadian Mounted Police with information on the individual and their use of ChatGPT, and we’ll continue to support their investigation."
