Our latest episode of Parenting in the Screen Age, released yesterday, features interviews with college students about how they’re navigating the explosion of AI tools like ChatGPT in their academic lives. I wanted to dive into the ethical lines they’re drawing: what they will and won’t use ChatGPT for.
The rise of AI is undeniably one of the most transformative forces of our time, and its impact on young people’s education is a topic of critical importance.
Speaking personally, if large language models had existed when I was a student, I would have found it incredibly complicated and uncomfortable to decide when it was okay to use them for schoolwork and when it wasn’t. Humans are influenced by incentives, and let’s be honest: there are plenty of incentives to take shortcuts with assignments.
I have deep empathy for students today. This new reality can really mess with their minds as they try to figure out when and how to use these tools, and when not to.
In today’s blog, I’m focusing on just one of the students I interviewed on the podcast. This college student shared a particularly challenging experience: getting flagged for AI cheating on a paper she had written entirely on her own.
But first…
ChatGPT launched in November 2022, and shortly after, a wave of AI-detection tools emerged to assess whether a student’s work was AI-generated. Examples include Originality.AI, Turnitin’s AI writing detection, and GPTZero, which college student Edward Tian created during his undergraduate winter break… that’s a productive “break”!
But what do we even call what these tools are detecting? Are we talking about plagiarism? That’s part of a big, ongoing debate.
Many schools do classify it as plagiarism when students submit AI-generated content and claim it as their own.
Plagiarism is typically defined as copying someone else’s work without credit. So, is using sentences created by a large language model plagiarism?
What if a student uses AI for editing and it creates new sentences? Is that plagiarism?
Another term being used is “AI misconduct,” which involves misrepresenting something generated by a large language model as one’s original work.
The Council of Writing Program Administrators has established guidelines on plagiarism, but it’s clear they must now determine whether AI-related misrepresentation warrants its own distinct category.
It is an interesting and important discussion to have with our kids.
Now, on to the student who had the scare in this week’s Parenting in the Screen Age podcast episode.
“I submitted a 20-page research paper I’d worked so hard on,” she told me. “Then I got this email saying it had been flagged for inappropriate AI usage. My heart just dropped.”
The irony? She hadn’t used ChatGPT to write the paper. The only thing she mentioned having recently used it for was asking for help understanding a complex topic, flow cytometry, from a set of confusing slides.
“I showed up at my teacher’s office freaking out. I was like, ‘Am I going to get kicked out of school?’ It was honestly terrifying.”
Her teacher explained that an AI detection tool had flagged the paper, but with low confidence. Still, that was enough to trigger an investigation. What ultimately cleared her name? The version history in Google Docs.
“We went through the doc together, and my teacher could see all the edits, the rewording, the sentence changes. She finally said, ‘Okay, I can clearly tell you wrote this.’”
The emotional toll, however, was real.
“It was probably the scariest experience I’ve had in school. I know I didn’t cheat, but suddenly I felt like I was on trial. Ever since, I’ve backed off from using any AI at all.”
Her story reminds us that while AI can be a powerful tool, its presence in education also raises serious questions about fairness, trust, and proof of authorship. It shows just how important it is for educators, parents, and students to have open conversations about how these tools are being used and misused.
Here are some conversation starters to try with your kids:
1. What term do we think is best for using ChatGPT in an unauthorized way in school?
2. Have you or a friend been flagged for “suspected AI use”?
3. What did you think of the student's story and how fearful she felt?
4. In what ways are you thinking about using, and not using, a program like ChatGPT for school?
You can listen to the episode wherever you get your podcasts, including these links:
Be sure to subscribe to our YouTube Channel! We add new videos regularly and you'll find over 100 videos covering parenting advice, guidance, podcasts, movie clips and more. Here's our most recent:
We want our kids to be motivated to learn, face challenges, and generate their own ideas. However, school often assigns work that doesn't inspire interest, and now AI provides an easy shortcut. Instead of struggling through it, students can simply ask a chatbot for answers or even complete assignments. In today’s blog, I share five ways parents can help kids stay engaged in learning.
READ MORE >

You might have heard about the tragic suicide of 16-year-old Adam Raine, who was talking with ChatGPT for up to four hours a day. His parents filed a wrongful-death lawsuit against OpenAI and CEO Sam Altman on August 26, 2025, in San Francisco Superior Court. In this blog we talk about the immediate safeguards needed to fix these horrific risks of AI, and we offer parents suggestions for how to talk with their kids about these risks and about dealing with strong emotions.

READ MORE >

Have you talked out loud to ChatGPT and heard it talk back? Many people still haven’t. But that’s going to change very soon, and the implications are, to put it mildly, concerning. ChatGPT with voice uses text-to-speech, but it’s more than just TTS; it includes conversational AI, voice recognition, and dynamic interaction. This means it doesn’t just speak; it listens, responds, and carries on fluid conversations in real time.

READ MORE >

For more like this: Dr. Delaney Ruston’s new book, Parenting in the Screen Age, is the definitive guide for today’s parents. With insights on screen time from researchers and input from kids and teens, this book is packed with solutions for how to start and sustain productive family talks about technology and its impact on our mental wellbeing.