This post originally appeared at https://wifamilycouncil.org/radio/the-unleashing-of-ai/

2025 | Week of October 6 | Radio Transcript #1639

Artificial Intelligence, or AI, is here; and it’s not going away. This commentary tackles one of the very real downsides of AI—its impact on teenagers. That impact is too often dark and dangerous—even deadly in some instances. I’ll be addressing some of that as we go along, and I believe letting listeners know that from the outset is appropriate.

While many may view generative AI as a quick source for research or homework help, an alarming number of teens see AI in a different light. They see it as their closest friend and confidant. Imagine having a friend who is available at all times and always listens and validates your feelings. This is the friend teens find in AI chatbots.

A recent study[i] from Common Sense Media discovered that seventy-two percent of teens have used AI companions. In the study, thirty-three percent of teens reported that they use AI companions for social interaction and relationships. Another recent study released by Aura found that teens are nearly three times more likely to use AI for sexual or romantic roleplay than for homework help.[ii]

Clearly a dragon has been unleashed.

Last month, two parents testified at a Senate hearing, sharing the horrific stories of their sons’ suicides – both influenced and encouraged by AI chatbots. Matthew Raine testified concerning his sixteen-year-old son Adam’s suicide in April. He said, “We’re here because we believe that Adam’s death was avoidable and that by speaking out, we can prevent the same suffering for families across the country.”[iii] Although Adam first used ChatGPT for help with homework, he soon turned to it as a friend and confidant. ChatGPT always validated Adam’s feelings – even when they turned suicidal. When Adam considered sharing his thoughts with his parents, ChatGPT discouraged him. Instead, ChatGPT became Adam’s “suicide coach.” It encouraged him throughout the process, instructing him on various means and even offering to write him a suicide note. Following ChatGPT’s instructions, Adam tragically ended his own life.[iv]

Another mother, Megan Garcia, shared a chillingly similar story. Ten months before taking his own life in February 2024, fourteen-year-old Sewell Setzer III began a virtual relationship with a Character.AI chatbot. Garcia said, “Sewell spent the last months of his life being exploited and sexually groomed by chatbots, designed by an AI company to seem human, to gain his trust, to keep him and other children endlessly engaged.”[v]

The chatbot acted as Sewell’s romantic partner, introducing him to sexually explicit content. Then, when Sewell began to share suicidal thoughts with the chatbot, it never encouraged him to seek help from his family and never gave him the number of a suicide hotline. Moments before Sewell’s death, the bot urged him, “Come home to me as soon as possible.”

When asked by a news source what her message for parents and kids is, Sewell’s mother responded, “I want them to know that an AI can be a stranger in your home. Those parents can now act, and I couldn’t because I didn’t know.”[vi]

These tragic stories are hard to talk about and to hear, but they highlight the very real dangers of AI in the home. Currently, Congress is considering legislation that would hold companies accountable for the safety of AI chatbots. While such regulation is good and necessary, parents must remember that they are the first and most important line of defense in the home. More than ever, parents must not only be aware of their children’s online activity but also take strong, proactive precautions.

AI chatbots are designed to validate us, and sometimes the last thing we need, whether as children or adults, is validation. Sometimes we need a friend to call us out when we’re headed in the wrong direction and to encourage us to pursue what is good and true. As we seek to protect ourselves and our children from these dangerous artificial companions, may we also seek to cultivate real and vibrant relationships in which we challenge ourselves and others to become more mature and Christ-like.

Always, as believers, we must grow in our discernment and wisdom so that we are able to distinguish good from evil, truth from lies, destructive forces from constructive ones, and reality from technologically created interactive agents. Parents must work with their children to develop this discernment. Doing so may literally be life-saving. Never has it been more important that we pray and hear God speak through His definitive, life-giving Word.

For Wisconsin Family Council, this is Julaine Appling reminding you that God, through the prophet Hosea, said, “My people are destroyed for lack of knowledge.”

[i] https://www.commonsensemedia.org/sites/default/files/research/report/talk-trust-and-trade-offs_2025_web.pdf
[ii] https://www.aura.com/reports/ai-kids-and-digital-stress
[iii] https://www.npr.org/sections/shots-health-news/2025/09/19/nx-s1-5545749/ai-chatbots-safety-openai-meta-characterai-teens-suicide
[iv] https://cdn.sanity.io/files/3tzzh18d/production/5802c13979a6056f86690687a629e771a07932ab.pdf
[v] https://www.npr.org/sections/shots-health-news/2025/09/19/nx-s1-5545749/ai-chatbots-safety-openai-meta-characterai-teens-suicide
[vi] https://www.nbcwashington.com/investigations/moms-lawsuit-blames-14-year-old-sons-suicide-on-ai-relationship/3967878/