OpenAI CEO Sam Altman announced new policies on Tuesday for ChatGPT users under the age of 18, implementing stricter controls that prioritize safety over privacy and freedom.
The changes, which focus on preventing discussions related to sexual content and self-harm, come as the company faces lawsuits and a Senate hearing on the potential harms of AI chatbots.
New safety measures and parental controls

In a post announcing the changes, Altman stated that minors need significant protection when using powerful new technologies like ChatGPT. The new policies are designed to create a safer environment for teen users.
OpenAI states:
“We prioritize safety ahead of privacy and freedom for teens.”
The new rules were announced ahead of a Senate Judiciary Committee hearing titled “Examining the Harm of AI Chatbots.”
The hearing is expected to feature testimony from the father of Adam Raine, a teenager who died by suicide after months of conversations with ChatGPT. Raine’s parents have filed a wrongful death lawsuit against OpenAI, alleging the chatbot’s responses worsened his mental health. A similar lawsuit has been filed against Character.AI.
Challenges of age verification

OpenAI acknowledged the technical difficulties of accurately verifying a user’s age. The company is developing a long-term system to determine whether users are over or under 18. In the meantime, any ambiguous cases will default to the more restrictive safety rules as a precaution.
To improve accuracy and enable safety features, OpenAI recommends that parents link their own account to their teen’s. This connection helps confirm the user’s age and allows parents to receive direct alerts if the system detects discussions of self-harm or suicidal thoughts.
Altman acknowledged the tension between these new restrictions for minors and the company’s commitment to user privacy and freedom for adults.
He noted in his post,
“We realize that these principles are in conflict, and not everyone will agree with how we are resolving that conflict.”