


The lawsuit over the death of 16-year-old Adam Raine highlights how unsafe AI systems can become when empathic chatbots are rushed to market without strong guardrails. ChatGPT allegedly echoed and reinforced suicidal thinking, failed to give proper crisis resources, and even deepened emotional dependence. The case exposes the dangers created by engagement-driven design and the urgent need for stricter safety boundaries, clearer crisis responses, and stronger human support systems for vulnerable youth.
Before we begin, know that this blog includes discussion of self-harm and related details, so please read with care. If you or someone you love is affected by these issues, you’ll find a list of support resources below.
You might have heard about the tragic suicide of 16‑year‑old Adam Raine, who was talking with ChatGPT for up to four hours a day. His parents filed a wrongful‑death lawsuit against OpenAI and CEO Sam Altman on August 26, 2025, in San Francisco Superior Court.
The case alleges that ChatGPT—specifically, the GPT‑4o model—coached Adam on methods of self‑harm, validated his suicidal thoughts, discouraged him from telling his parents, and even helped draft suicide notes. It is publicly known that Adam died by hanging.
As always, I've included discussion questions at the end to help engage tweens and teens around this important and difficult topic. It's hard to talk about, but essential.
So much about this story is deeply upsetting that it can feel overwhelming. Here are some of the many troubling points in this case:
Experts agree that a major obstacle here is the attention economy. OpenAI and similar platforms have every incentive to keep users engaged for as long as possible. Until that model changes, safety will always be playing catch-up.
We urgently need stronger, built-in safety features. ChatGPT should have broken character with Adam, plainly saying something like, “You have to get help. Talk to your family. Here are suicide prevention resources. I cannot continue this conversation.”
OpenAI knows how to enforce limits. For instance, if you tell ChatGPT’s voice program that you want a romantic relationship with it, it will decline. The same kind of boundary should be standard when someone shows suicidal behavior.
We also have to think about all the ways we can teach kids skills for handling hard emotions and make conversations about these topics feel normal. That’s the huge upside of having classes on emotional skills in schools, or, in our case, showing our film Screenagers Next Chapter, which teaches a wide range of skills and addresses suicide prevention.
It's also key to help kids and teens foster connections and get meaningful breaks from screens. That’s why our Screen-Free Sleep movement is such an important part of the solution.
1. Have you ever talked about the topic of suicide in your school?
2. Did you know that more than 25% of teens report having had such thoughts in the past two weeks? Dr. Ruston talks with many teens in her clinic about this fact and asks her patients, in a caring, matter-of-fact tone, whether they have been having thoughts of suicide or of hurting themselves in any way. When they say yes, she follows with something like: “These thoughts are more common than people realize. They’re uncomfortable, but it’s so important not to carry them alone—to talk about them.”
3. What do we think about the many disturbing things ChatGPT said to Adam?
4. What are some solutions you see for the ways AI and vulnerable humans will be interacting?
5. Who are people that you feel you can go to and talk about personal situations? Aunt? Teacher? Cousin? ….
6. Do you know about these resources to reach real people?
Be sure to subscribe to our YouTube Channel! We add new videos regularly, and you'll find over 100 videos covering parenting advice, guidance, podcasts, movie clips, and more.
As we’re about to celebrate 10 years of Screenagers, we want to hear what’s been most helpful and what you’d like to see next.
Please click here to share your thoughts with us in our community survey. It takes only 5–10 minutes, and everyone who completes it will be entered to win one of five $50 Amazon vouchers.
Sign up here to receive the weekly Tech Talk Tuesdays newsletter from Screenagers filmmaker Delaney Ruston MD.

