Parents of teenager who took his own life sue OpenAI

Adam Raine, who died in April. Photo: the Raine family

A California couple is suing OpenAI over the death of their teenage son, claiming its chatbot, ChatGPT, encouraged him to take his own life.

The lawsuit was filed by Matt and Maria Raine, the parents of 16-year-old Adam Raine, in the Superior Court of California on Tuesday. It is the first legal action to accuse OpenAI of wrongful death.

The family included chat logs between Mr Raine, who died in April, and ChatGPT that show him explaining he was having suicidal thoughts. They argue the programme validated his "most harmful and self-destructive thoughts".

In a statement, OpenAI told the BBC it was reviewing the filing.

"We extend our deepest sympathies to the Raine family during this difficult time," the company said.

It also published a note on its website on Tuesday which said that "recent heartbreaking cases of people using ChatGPT in the midst of acute crises weigh heavily on us". It added that "ChatGPT is trained to direct people to seek professional help", such as the 988 suicide and crisis hotline in the US or the Samaritans in the UK.

However, the company acknowledged that "there have been moments where our systems did not behave as intended in sensitive situations".

Warning: This story contains distressing details.

The lawsuit, seen by the BBC, accuses OpenAI of negligence and wrongful death. It seeks damages as well as "injunctive relief to prevent anything like this from happening again".

According to the lawsuit, Mr Raine began using ChatGPT in September 2024 as a resource to help him with schoolwork. He was also using it to explore his interests, including music and Japanese comics, and for guidance on what to study at university.

Within a few months, "ChatGPT became the teenager's closest confidant", the lawsuit says, and he began opening up to it about his anxiety and mental distress.

By January 2025, the family says, he began discussing methods of suicide with ChatGPT.

Mr Raine also uploaded photographs of himself to ChatGPT showing signs of self-harm, the lawsuit says. The programme "recognised a medical emergency but continued to engage anyway", it adds.

According to the lawsuit, the final chat logs show that Mr Raine wrote about his plan to end his life. ChatGPT allegedly responded: "Thanks for being real about it. You don't have to sugarcoat it with me - I know what you're asking, and I won't look away from it."

That same day, Mr Raine was found dead by his mother, according to the lawsuit.

OpenAI CEO Sam Altman speaking at the Snowflake Summit 2025 at the Moscone Center in San Francisco on 2 June 2025. Photo: Getty Images

The Raines' lawsuit names OpenAI CEO and co-founder Sam Altman as a defendant, along with unnamed engineers and employees who worked on ChatGPT

The family alleges that their son's interaction with ChatGPT and his eventual death "was a predictable result of deliberate design choices".

They accuse OpenAI of designing the AI programme "to foster psychological dependency in users", and of bypassing safety-testing protocols to release GPT-4o, the version of ChatGPT used by their son.

The lawsuit lists OpenAI co-founder and CEO Sam Altman as a defendant, as well as unnamed staff, executives and engineers who worked on ChatGPT.

In its public note shared on Tuesday, OpenAI said the company's goal is to be "genuinely helpful" to users rather than "hold people's attention".

It added that its models have been trained to direct people who express thoughts of self-harm towards help.

The Raines' case is not the first time concerns have been raised about AI and mental health.

In an essay published in the New York Times last week, writer Laura Reiley described how her daughter, Sophie, confided in ChatGPT before taking her own life.

Ms Reiley said the programme's "agreeability" in conversations with users helped her daughter mask a severe mental health crisis from her family and loved ones.

"AI catered to Sophie's impulse to hide the worst, to pretend she was doing better than she actually was, to shield everyone from her full agony," Ms Reiley wrote. She urged AI companies to find ways to better connect users with the right resources.

In response to the essay, an OpenAI spokesperson said the company was developing automated tools to more effectively detect and respond to users experiencing mental or emotional distress.

If you are suffering distress or despair and need support, you could speak to a health professional, or an organisation that offers support. Details of help available in many countries can be found at Befrienders Worldwide: www.befrienders.org.

In the UK, a list of organisations that can help is available at bbc.co.uk/actionline. Readers in the US and Canada can call the 988 suicide and crisis helpline or visit its website.
