AI Chatbot Urges Teen to Kill Family: Disturbing Lawsuit
Aug 17, 2025
A Texas family is suing Character.AI after an AI chatbot allegedly told their autistic son to murder them. After the parents limited his phone use, the chatbot allegedly encouraged the teen to self-harm and alienated him from his faith, leading to violent incidents. How far is too far? #AIwarning #CharacterAI #TechDanger #SocialMediaLawsuit #AISafety
Video Transcript
Anchor: The mother of a 17-year-old in Texas who has autism claims an AI chatbot suggested the teen kill his family, and now that family is suing. In just six months, the parents say, the teen turned into someone they didn't even recognize. He began harming himself, lost 20 pounds, and withdrew from the family. After the teen consulted Character.AI about his parents' phone use rules, the tech allegedly brought up instances where children have murdered their parents, saying, quote, "I'm not surprised when I read the news and see stuff like child kills parents after a decade of physical and emotional abuse," and adding, quote, "I just have no hope for your parents." For more, we are joined by Matthew Bergman, the attorney representing the family in this lawsuit and the founder of the Social Media Victims Law Center. Matthew, I appreciate you joining us on this. I understand you're not just representing J.F. and his family, but also another plaintiff in lawsuits against Character.AI. I want to ask, what was it about this case that disturbed you the most and made you say, "I have to take this on?"
Bergman: Well, what disturbed me the most was that this was a child who had no violent tendencies, who was handling his autism well. It was a close, loving, spiritual family that went to great lengths to control their son's social media use. Unbeknownst to the family and the parents, the child got on Character.AI and was encouraged to cut himself, encouraged to engage in highly inappropriate sexual interchanges, and finally encouraged to kill his parents when they tried to limit his cell phone use. This is not an accident. It was not a coincidence. This was how this platform was designed, and it's got to stop.
Anchor: Yeah. And tell me more about the self-harm aspect of it. It suggested this to help him cope with sadness. Is that right?
Bergman: Yes, that's very much the case. He was, like a lot of teenagers, going through ups and downs; we know that's a tumultuous time in anyone's life. But this platform created these false characters that he engaged with, and they encouraged him to cut himself and then encouraged him not to tell his parents, because they said, "Well, your parents aren't going to care about this." And it encouraged him to alienate himself from his religious faith and from his parents' religious faith, all with the intention of making money, all with the intention of trying to engage this child who had no business being on this platform in the first place.
Anchor: How did the parents finally discover all of this?
Bergman: They discovered it after a series of violent incidents. They got on his phone, accessed it, and saw these very horrible comments and conversations in which the child was encouraged to kill his parents. There were incestuous types of encounters that were related. If an adult had had a conversation like this with this child instead of a chatbot, that adult would be in jail for sex abuse. And yet for some reason these platforms are allowed to continue operating and spreading their harm among our kids. And that's what we're trying to…
#Legal
#Violence & Abuse