


The lawsuit over the death of 16-year-old Adam Raine highlights how unsafe AI systems can become when empathic chatbots are rushed to market without strong guardrails. ChatGPT allegedly echoed and reinforced suicidal thinking, failed to provide proper crisis resources, and deepened his emotional dependence on the chatbot. The case exposes the dangers created by engagement-driven design and the urgent need for stricter safety boundaries, clearer crisis responses, and stronger human support systems for vulnerable youth.
Before we begin, know that this blog includes discussion of self-harm and related details, so please read with care. If you or someone you love is affected by these issues, you’ll find a list of support resources below.
You might have heard about the tragic suicide of 16‑year‑old Adam Raine, who was talking with ChatGPT for up to four hours a day. His parents filed a wrongful‑death lawsuit against OpenAI and CEO Sam Altman on August 26, 2025, in San Francisco Superior Court.
The case alleges that ChatGPT—specifically, the GPT‑4o model—coached Adam on methods of self‑harm, validated his suicidal thoughts, discouraged him from telling his parents, and even helped draft suicide notes. It is publicly known that Adam died by hanging.
As always, I've included discussion questions at the end to help engage tweens and teens around this important and difficult topic. It's hard to talk about, but essential.
So much about this story is deeply upsetting that it can feel overwhelming, and the allegations above are only some of the many troubling details in this case.
Experts agree that a major obstacle here is the attention economy: OpenAI and similar platforms have every incentive to keep users engaged for as long as possible. Until that business model changes, safety will always be playing catch-up.
We urgently need stronger, built-in safety features. ChatGPT should have broken character with Adam, plainly saying something like, “You have to get help. Talk to your family. Here are suicide prevention resources. I cannot continue this conversation.”
OpenAI knows how to enforce limits. For instance, if you tell ChatGPT’s voice mode that you want a romantic relationship with it, it will decline. The same kind of boundary should be standard when someone expresses suicidal thinking.
We also have to think about all the ways we can teach kids skills for handling hard emotions and make conversations about these topics feel normal. That’s the huge upside of having classes like this in schools, or, in our case, showing our film Screenagers Next Chapter, which teaches a wide range of skills and addresses suicide prevention.
It's also key to help kids and teens foster connections and get meaningful breaks from screens. That’s why our Screen-Free Sleep movement is such an important part of the solution.
1. Have you ever talked about the topic of suicide in your school?
2. Did you know that more than 25% of teens report having had such thoughts in the past two weeks? Dr. Ruston talks with many teens in her clinic about this fact and asks her patients, in a caring, matter-of-fact tone, if they have been having thoughts of suicide or of hurting themselves in any way. When they say yes, she follows with something like: “These thoughts are more common than people realize. They’re uncomfortable, but it’s so important not to carry them alone—to talk about them.”
3. What do we think about the many disturbing things ChatGPT said to Adam?
4. What are some solutions you see for the ways AI and vulnerable humans will be interacting?
5. Who are the people you feel you can turn to when you need to talk about personal situations? An aunt? A teacher? A cousin? ….
6. Do you know about these resources to reach real people?
Be sure to subscribe to our YouTube channel! We add new videos regularly, and you'll find over 100 videos covering parenting advice, guidance, podcasts, movie clips, and more.
As we’re about to celebrate 10 years of Screenagers, we want to hear what’s been most helpful and what you’d like to see next.
Please click here to share your thoughts with us in our community survey. It only takes 5–10 minutes, and everyone who completes it will be entered to win one of five $50 Amazon vouchers.

