TECHNOLOGY

OpenAI puts parental controls in ChatGPT but critics say it is a ‘vague promise’

September 03, 2025
The OpenAI logo appears on a mobile phone in front of a computer screen with random binary data, March 9, 2023, in Boston

BOSTON — OpenAI has announced a series of parental controls for its AI chatbot ChatGPT, which includes notifying parents when their child is distressed.

It comes after a lawsuit was filed against the company and its CEO, Sam Altman, by the parents of 16-year-old Adam Raine, who died by suicide in April.

The parents alleged that ChatGPT created a psychological dependency in Adam, coached him as he planned to take his own life and even wrote a suicide note for him.

OpenAI says the new parental controls will be made available within the next month.

The controls will let parents link their account with their child's and manage which features the child can access, including chat history and memory, the user facts that the AI automatically retains.

The OpenAI blog also said ChatGPT will send parents notifications if it detects “their teen is in a moment of acute distress”.

The company did not specify what would trigger such an alert, saying only that the feature will be guided by experts.

But some say the measures do not go far enough.

Jay Edelson, the attorney of Raine’s parents, described the OpenAI announcement as “vague promises to do better” and “nothing more than OpenAI’s crisis management team trying to change the subject”.

Altman "should either unequivocally say that he believes ChatGPT is safe or immediately pull it from the market,” Edelson said on Tuesday.

Meta, the parent company of Instagram, Facebook and WhatsApp, also said on Tuesday it is now blocking its chatbots from talking with teens about self-harm, suicide, disordered eating and inappropriate romantic conversations, and instead directs them to expert resources. Meta already offers parental controls on teen accounts.

A study published last week in the medical journal Psychiatric Services found inconsistencies in how three popular artificial intelligence chatbots responded to queries about suicide.

The study by researchers at the RAND Corporation found a need for “further refinement” in ChatGPT, Google’s Gemini and Anthropic’s Claude. The researchers did not study Meta's chatbots.

The study's lead author, Ryan McBain, said Tuesday that "it’s encouraging to see OpenAI and Meta introducing features like parental controls and routing sensitive conversations to more capable models, but these are incremental steps”.

“Without independent safety benchmarks, clinical testing, and enforceable standards, we’re still relying on companies to self-regulate in a space where the risks for teenagers are uniquely high,” said McBain, a senior policy researcher at RAND and assistant professor at Harvard University’s medical school. — Euronews

