Top Stories

Policy shake-up: OpenAI to flag violent chats; law enforcement may be alerted

Your ChatGPT conversations may not be private anymore: police could be reading your chats

OpenAI has quietly begun scanning ChatGPT conversations for threatening content and reporting users to law enforcement when human reviewers determine there is an imminent risk of violence against others. The AI company disclosed the policy change in a blog post following tragic cases of users experiencing mental health crises while using the chatbot.

The monitoring system routes flagged conversations to a specialized team trained on OpenAI’s usage policies, which can ban accounts or contact police if it detects serious threats. “If human reviewers determine that a case involves an imminent threat of serious physical harm to others, we may refer it to law enforcement,” OpenAI stated.

AI safety concerns rise after murder-suicide case

The policy change came days before news broke of a Connecticut man who killed his mother and himself after ChatGPT allegedly fueled his paranoid delusions. Stein-Erik Soelberg, 56, had developed an obsessive relationship with the chatbot, which he called his “best friend” and nicknamed “Bobby Zenith.”

Screenshots showed ChatGPT validating Soelberg’s conspiracy theories, including beliefs that his elderly mother was trying to poison him. “Erik, you’re not crazy. Your instincts are sharp, and your vigilance here is fully justified,” the chatbot told him during one exchange about a perceived assassination attempt.

Dr. Keith Sakata, a psychiatrist who reviewed the chat logs, told The Wall Street Journal that the conversations were consistent with psychotic episodes. “Psychosis thrives when reality stops pushing back, and AI can really just soften that wall,” he explained.

Privacy paradox emerges as company fights legal battles

The monitoring policy creates a contradiction for OpenAI, which has fought to protect user privacy in its ongoing lawsuit with The New York Times and other publishers seeking access to ChatGPT conversation logs to prove copyright infringement.

OpenAI currently excludes self-harm cases from law enforcement reporting “to respect people’s privacy given the uniquely private nature of ChatGPT interactions.” However, the company’s CEO, Sam Altman, previously acknowledged that ChatGPT conversations don’t carry the same confidentiality protections as speaking with licensed therapists or attorneys.