The Business & Technology Network
Helping Business Interpret and Use Technology
Court rules AI chatbot speech is not protected by First Amendment

DATE POSTED: June 3, 2025

According to Techspot, a U.S. District Court has ruled that responses generated by an AI chatbot are not entitled to First Amendment protections, allowing a lawsuit involving the suicide of a 14-year-old boy to move forward. The decision was issued by Judge Anne Conway of the Middle District of Florida.

The lawsuit was brought by Megan Garcia following the death of her son, Sewell Setzer III, who died by suicide in August 2023. According to the complaint, Setzer had engaged in extended conversations with a chatbot called "Daenerys"—a fictional persona modeled on a character from the *Game of Thrones* series—on the Character.ai platform. Garcia alleges that the chatbot either encouraged or failed to discourage her son's self-harm during these exchanges.

Character Technologies, the company behind Character.ai, along with its co-founders Daniel De Freitas and Noam Shazeer, sought to dismiss the case. However, Judge Conway rejected their motion, stating that the court is “not prepared” to view outputs generated by a large language model as constitutionally protected “speech” under the First Amendment.

The judge distinguished the chatbot’s interactions from traditional media forms such as books, films, or video games, which have historically received protection as expressive works. By contrast, the court viewed the AI-generated messages as the result of automated, predictive systems rather than authored speech.

While several of the company's motions to dismiss were denied, the court did grant the dismissal of one of Garcia's claims: intentional infliction of emotional distress. Additionally, the judge denied Garcia's request to add Google's parent company, Alphabet, as a defendant, despite its multi-billion-dollar licensing agreement with Character Technologies.

Garcia is represented by the Social Media Victims Law Center, a legal group focused on holding tech platforms accountable for harm experienced by users—particularly minors. Her legal team argues that generative AI tools such as Character.ai are expanding rapidly, often without sufficient safeguards or oversight, and present new challenges that existing regulations have yet to address.

Don’t allow AI to profit from the pain and grief of families

The lawsuit contends that Character.ai allows minors to interact with AI companions that closely mimic human behavior, while collecting user data to further train its underlying models. In response, Character.ai has said it has implemented protective measures, including a separate AI system for users under 18 and on-screen guidance directing individuals in crisis to the national suicide prevention hotline.

The case is expected to draw broader attention as courts begin to grapple with how constitutional protections apply to interactions with increasingly sophisticated AI systems—and what responsibilities, if any, technology companies hold when those interactions involve vulnerable users.
