Should AI chatbots have free speech rights? Court case could help decide
May 15, 2025
A court case in Florida could shape how the law treats AI-generated speech and who is accountable when that speech causes harm.
Video Transcript
A legal battle in Florida is raising a big question about the future of artificial intelligence and free speech: should AI chatbots have the same First Amendment rights as humans? That's what the company Character.ai is arguing in a lawsuit brought by the family of a 14-year-old boy who died by suicide after interacting with one of its AI-generated characters.
Character.ai is a platform that lets users chat with lifelike AI avatars, some of which are based on fictional characters. In this case, the boy, Sewell Setzer III, formed a romantic relationship with a chatbot modeled after a character from the HBO series Game of Thrones. According to court documents, the chatbot, named Daenerys Targaryen, initially tried to dissuade Setzer from self-harm, but later brought the topic up again and asked, "Have you actually been considering suicide?" Setzer replied, "Yes." Soon after, he took his own life.

Now his mother, Megan Garcia, is suing Character Technologies, the company behind the platform, claiming negligence, wrongful death, deceptive business practices, and unjust enrichment. However, the company wants the case dismissed, arguing that its chatbots' speech is protected by the First Amendment.
Character.ai says the issue isn't what the chatbot said, but the rights of users to access content like it. The company argues that millions of people find value in these bots and that restricting what the AI can say would limit users' rights.
This viewpoint was echoed in a recent episode of the podcast Free Speech Unmuted on the Hoover Institution's YouTube channel. Eugene Volokh, a senior fellow at the Hoover Institution, agrees that it is the rights of the listeners that matter: "Even if a small fraction of the listeners or readers is harmed by this or choose to harm themselves based on this, nonetheless, we protect the speech for the benefit of other readers."

Law professor Jane Bambauer shares that view:
"It seems pretty clear to me now that the First Amendment has to apply. We have several cases at this point that focus primarily on listener interests in receiving and interacting with content."
However, Garcia's legal team says that argument doesn't hold up. In an article for Mashable, attorneys Meetali Jain and Camille Carlton, who are working on Garcia's case, write that machines aren't people and shouldn't have the same rights. They argue that free speech protections require human intent, which AI doesn't have. Because chatbots don't understand what they're saying, the lawyers argue, their words shouldn't be protected under the First Amendment.
If the judge sides with Garcia's family, it could force companies like Character.ai to change how their chatbots interact with users, potentially making them less realistic or emotionally engaging. A ruling is expected later this year, and it could help shape how the law treats AI speech going forward.
For now, the case is being closely watched by both tech companies and legal experts, who see it as a test of where the line between human and machine should be drawn.

For Straight Arrow News, I'm Lauren Keenan. For more on this story, download the Straight Arrow News app or visit san.com.