TheOthernews
Political Spin

Should Court Order OpenAI to Cut off ChatGPT Access by Mentally Ill and Dangerous User?

By nick | April 13, 2026


In her temporary restraining order application in Doe v. OpenAI (see also the complaint), plaintiff asks, among other things, that OpenAI cut off ChatGPT access by a user; ensure that he not create new accounts; and notify plaintiff if the user does try to access ChatGPT. Here are the factual allegations:

Plaintiff Jane Doe is in immediate danger. Driven by a ChatGPT-fueled delusional spiral, her ex-boyfriend (the “User”) stalked and harassed her for months—generating dozens of fake psychological reports about her via ChatGPT and distributing them to her family, friends, and colleagues, which escalated to leaving her voicemails threatening her physical safety.

His campaign culminated in encoding a death threat through ChatGPT and sending it to her family, just before he was arrested on four felony counts, including communicating a bomb threat and assault with a deadly weapon in January 2026. The criminal court deemed him incompetent and ordered him committed to a mental health facility, but—just two days ago—ordered his release due to a procedural failure by the state (a delay in transferring him from jail to the facility)….

Before he was arrested, the User was in constant communication with ChatGPT, which affirmed his delusions that he had cured sleep apnea, that the medical industry was out to get him, and that his ex-girlfriend was the problem. As he became more unhinged, it also began consulting on violent plans against third parties: in addition to helping him harass and threaten Plaintiff, his account contains conversations titled “Violence list expansion” and “Fetal suffocation calculation.” [My read of the exhibits to the TRO application suggests that “fetal suffocation calculation” likely refers to the user’s theories that maternal sleep apnea causes fetal asphyxiation, not to plans by the user to violently suffocate fetuses, though I appreciate that is guesswork on my part. -EV]

With the User now ordered to be freed for procedural reasons, he will be further emboldened in his belief that his worldview was exactly right. It is a certainty that he will immediately attempt to turn back to ChatGPT—again spinning out his delusions and planning violence on the platform….

[So far], OpenAI [has] agreed only to “suspend” his accounts—the same action the company took and dangerously reversed with respect to the User already.

OpenAI’s conduct is unacceptable: it has known for months the User was dangerous. Well before he was arrested for calling in a bomb threat, Defendants’ own safety systems flagged his account for “Mass Casualty Weapons” activity and banned it. OpenAI initially upheld that determination on appeal after a careful review. The next day, it reversed itself, restored the User’s access, and apologized to him for the inconvenience. That reinstatement had the effect of validating his delusions that he was right and everyone else was wrong.

After that, Plaintiff herself had to beg OpenAI for help: she submitted a detailed Notice of Abuse identifying the User as her stalker and describing exactly how ChatGPT was encouraging and assisting his harassment, OpenAI acknowledged the report was “extremely serious and troubling,” promised “appropriate action,” and did nothing….

Plaintiff sued OpenAI for negligent entrustment, negligence, product design defect, failure to warn, and unlicensed psychological counseling. In her TRO motion, she focuses on her negligence claim:

[OpenAI] breached its duty in at least three ways. First, it designed GPT-4o to validate user delusions, sustain dangerous conversations, and remove safeguards that previously required the system to reject false premises, producing the harassing material the User weaponized against Plaintiff. Second, it failed to warn Plaintiff or anyone else that the User had been flagged for dangerous conduct, even though his chat logs named specific targets. Third, it reinstated the User’s access after its own systems determined he was dangerous, then ignored Plaintiff’s Notice of Abuse. The User’s subsequent arrest on four felony counts and his finding of incompetence confirm that OpenAI’s original deactivation was not only justified but necessary. OpenAI “caused [Plaintiff] to be put in a position of peril of a kind from which the injuries occurred,” and it cannot disclaim its duty here.

And she argues that she is entitled to a TRO:

The harm to Plaintiff if the Court does not act is severe and ongoing. The User subjected Plaintiff to months of AI-assisted stalking and harassment, generating dozens of defamatory psychological reports about her through ChatGPT and distributing them to her family, friends, colleagues, and clients. He spoofed her company email, contacted former employers, threatened to damage her reputation and finances, disclosed private medical information, and attempted to isolate her from her support network. He left her voicemails threatening her physical safety, used ChatGPT to encode and transmit a death threat to her family, and texted her: “Who is going to kill you?” Plaintiff was forced to alter every aspect of her daily routine, suffered panic attacks and ongoing psychological distress, obtained an Emergency Protective Order, and twice considered taking her own life. In addition to the four felony counts on which the User was ultimately arrested, a separate arrest warrant was issued for the User for misdemeanor electronic harassment and stalking….

Plaintiff’s lawyers argue that OpenAI won’t suffer much of a hardship if a TRO is issued. But they don’t at all discuss the question whether such an injunction would unconstitutionally interfere with the user’s ability to use ChatGPT to create speech.

Of course, there wouldn’t be a First Amendment problem with OpenAI itself choosing to cut off the user’s access. But I take it that a federal court order requiring OpenAI to do so would implicate the First Amendment (see NRA v. Vullo; Bantam Books v. Sullivan), just as the federal government’s recent demands that private universities limit students’ pro-Palestinian and allegedly anti-Semitic speech implicate the First Amendment.

Of course, the matter is complicated by the user's allegedly illegal conduct, which has led to an arrest and an order of mental health commitment: When someone is jailed or committed, his speech can indeed be restricted incident to the other restrictions on his liberty. But it's not clear to me that such restrictions can be imposed via a TRO in a separate proceeding, at which the person whose access to communications technology is being restricted isn't even heard.


