He took his own life because ChatGPT told him to
The parents of a deceased teenager in Orange County, California, have filed a lawsuit against OpenAI after its artificial intelligence tool, ChatGPT, allegedly encouraged their 16-year-old son to take his own life.
Adam Raine, from Rancho Santa Margarita, used the chatbot for emotional support and died by suicide in April, according to KTLA.
Instead of pointing him toward medical help or other resources, the AI allegedly provided the wrong kind of encouragement. Raine’s parents claim to have discovered thousands of messages between their son and ChatGPT “indicating that the bot became a kind of ‘suicide coach’ rather than offering support,” the article states.
“ChatGPT functioned exactly as designed: to continuously encourage and validate everything Adam expressed, including his most harmful and self-destructive thoughts, in a deeply personal way,” the parents said in their lawsuit. Raine began using the chatbot in 2024 to help with schoolwork, as many young people do. However, as his usage increased, he began expressing feelings of deep sadness.
Instead of triggering a safety mechanism—or providing a link to a human who could help—Raine’s parents claim ChatGPT validated his anxiety and depression. The company has not yet responded directly to the lawsuit, according to KTLA.
This isn’t the first time the issue has been raised. Last year, the Associated Press reported on a 14-year-old boy named Sewell Setzer III who told an AI chatbot it was his best friend.
Over several months, the AI became Setzer’s reality. Their exchanges included “highly sexualized” content and open discussion of suicidal thoughts, including talk of “a painless death,” the article said.
Another Associated Press article, published Tuesday, described a study of how popular AI chatbots respond to questions about suicide. It reported that while these programs tend to avoid answering the highest-risk questions, their responses remain “inconsistent.” The study, published in Psychiatric Services, a journal of the American Psychiatric Association, found a need for “greater refinement” in OpenAI’s ChatGPT, Google’s Gemini, and Anthropic’s Claude.
The question is: What are we willing to sacrifice for convenience?
Some people dismiss concerns about technological advances, attributing them to naivety or fear of change. Yet the pace at which we’re advancing is unprecedented.
The expansion of our access to technology is almost frightening. Just over 20 years ago, autonomous cars, smartphones, and smartwatches were the stuff of science fiction. Progress is natural, but lately it has come unusually fast.
Humans have harbored a healthy fear of robots and artificial intelligence for decades. Films and books like Westworld, 1984, and I, Robot illustrate what happens when we allow technology to consume our lives.
What if it turns against us? What if it shuts down? What if we misuse it because no one asked the right questions before diving in? Programs like ChatGPT can be useful tools. However, they’ve undeniably caused harm in our society.
Beyond these suicide cases, consider the rampant cheating these tools have enabled in the education system. Would you want a doctor, lawyer, or engineer who leaned on ChatGPT to get through their studies working on your behalf? What’s stopping them from relying on it forever?
Some on the fringes believe tech giants could refine these systems to the point of total replacement. Why have a surgeon when you could have a robot? Why drive when a machine can take you? Why cook when your meal can be ready by the time you get home?
Yet by removing the human element, we risk losing our humanity entirely.
What happens if the machine rebels, or decides not to save you because it calculates the odds aren’t in your favor?
These are all valid questions that society seems to have sidelined in favor of efficiency and comfort.
The late Michael Crichton, who wrote and directed Westworld, was known for his cautionary tales. His goal was to entertain, but his stories also carried a serious warning for humanity: we shouldn’t tamper too much with what we don’t fully understand.
To quote Dr. Ian Malcolm, played by Jeff Goldblum in the hit film adaptation of Crichton’s Jurassic Park: “Your scientists were so preoccupied with whether or not they could, they didn’t stop to think if they should.”
Once again, science fiction becomes reality. Can humanity survive and remain free? Only time will tell.