OpenAI adds parental controls to ChatGPT after suicide of US teenager
Artificial intelligence (AI) company OpenAI is launching parental controls for the web version and mobile app of ChatGPT following a lawsuit from the parents of Adam Raine, a 16-year-old from California who died by suicide after the chatbot allegedly taught him ways to harm himself. This was reported by the Reuters news agency on September 29.
"OpenAI is launching parental controls for ChatGPT in web and mobile applications following a lawsuit by the parents of a teenager who committed suicide after an artificial intelligence startup's chatbot allegedly taught him self-harm techniques," the publication says.
According to the agency, the new features will let parents and teenagers opt into stronger protections by linking their accounts: one party must send an invitation, and parental controls are activated only if the other party accepts it.
With these measures, parents will be able to limit content exposure, manage ChatGPT's memory of past chats, and decide whether conversations can be used to train OpenAI's models.
Parents will also be able to set a "silence mode" that restricts access at certain times and to turn off voice mode. However, according to OpenAI, parents will not have access to transcripts of their teenagers' chats.
On August 27, NBC News reported that Raine's parents in the United States blamed their son's death on OpenAI's ChatGPT. According to the outlet, the family believes the chatbot's developer has not taken sufficient safety measures, particularly for minors.