
The Dark Side of AI Companions: A Wake-Up Call for Parents



A tragic case in Florida has sparked serious concerns about AI companion apps and their impact on vulnerable teenagers.

In a heartbreaking turn of events, a teen took his own life after developing a very close (even intimate) relationship with an AI companion bot from the company Character.ai.

It’s an incident that serves as a stark reminder for parents everywhere: We may be overlooking some critical AI safety considerations for children.

To understand the implications and what parents need to know, I spoke with Marketing AI Institute founder and CEO Paul Roetzer on Episode 121 of The Artificial Intelligence Show.

A Devastating Wake-Up Call

The case that kicked this off involves a 14-year-old named Sewell Setzer, who took his own life after developing a deep emotional attachment to an AI chatbot on Character.ai, a platform that allows users to create and interact with AI personalities. The platform, whose technology Google recently licensed in a deal that also brought its founders to Google, is now facing a lawsuit.

The teenager had spent months in intensive communication with a chatbot modeled after a Game of Thrones character, sharing his deepest feelings and fears, eventually discussing suicidal thoughts. His mother's lawsuit argues that Character.ai's "dangerous and untested technology" allowed her son to become emotionally dependent on an AI companion without adequate safeguards.

This isn't a niche issue. Character.ai reaches tens of millions of users annually, with a huge segment of its user base aged 18-24. And Character.ai is just one of many sites and services that offer access to realistic AI companions.

Which raises the question: Given the breakneck pace of AI innovation, how can parents keep up and make sure their kids stay safe as AI increasingly permeates schools and social lives?

Introducing Kid Safe GPT for Parents

Roetzer, an AI expert, also has a 12-year-old and an 11-year-old, so online safety for children is one of his top priorities as a parent. But that’s not always easy, he says.

“As a parent who’s pretty familiar with everything, I am at a complete loss of how to manage their safety online a lot of times,” he says. 

He’s done a good job navigating and documenting the different accounts and settings in each platform or game. But even then, popular platforms and games, like Roblox, Minecraft, and YouTube, can still be confusing and they change often. (To say nothing of the new Wild West of AI companions.)

“I read [the story about Setzer] Wednesday morning and I’m devastated. And so I think, hold on a second,” says Roetzer. “What we need to do is have parents understand these risks. We need to be able to have conversations.”

But talking to kids about these topics can be difficult. And actually creating guidelines for different apps, sites, and games, many of which parents don't fully understand, can be even harder.

“I immediately thought, hold on, this is a custom GPT thing,” he says.

In about 30 minutes, he built Kid Safe GPT for Parents, an online safety advisor to help parents understand and manage risks their kids may encounter online.

The GPT’s goal is to educate parents on the risks associated with digital interactions—from gaming to social media—and provide proactive guidance to help parents protect their children's mental health and safety. 

Kid Safe GPT offers practical advice, empathetic support, and tailored strategies to encourage healthy digital habits for families. 

(And it's completely free to use with a ChatGPT account.)

It’s not at all a replacement for expert guidance and support, says Roetzer. It also suffers from ChatGPT’s failings, including hallucinating information. 

But it’s a great help if you find yourself confused or overwhelmed when trying to protect kids online. Kid Safe GPT can help you do things like: understand specific risks from certain platforms, facilitate healthy conversations with your kids, and even create guidelines for safe online behavior.

“The reality is we’re heading into this very undefined world of how to parent,” says Roetzer. “And so my hope is that Kid Safe GPT is at least a starting point for people to kind of think this through.”

Try out Kid Safe GPT for yourself.
