OpenAI plans to change ChatGPT after teen's suicide, lawsuit

OpenAI CEO Sam Altman appears during a Federal Reserve conference on the capital review for large banks in Washington, D.C., on July 22, 2025.

Ken Cedeno | Reuters

OpenAI says it plans to address ChatGPT's shortcomings when handling "sensitive situations," following a lawsuit from a family who blamed the chatbot for their teenage son's death.

"We will continue to improve, guided by experts and grounded in responsibility to the people who use our tools, and we hope others will join us in helping make sure this technology protects people at their most vulnerable," OpenAI said in a blog post titled "Helping people when they need it most."

Earlier on Tuesday, the parents of Adam Raine filed a product liability and wrongful death lawsuit against OpenAI after their son died by suicide at age 16, NBC News reported. In the suit, the family alleged that "ChatGPT actively helped Adam explore suicide methods."

The company did not mention Raine or the lawsuit in its blog post.

OpenAI said that while ChatGPT is trained to direct people to seek help when they express suicidal intent, the chatbot tends to offer answers that go against the company's safeguards after many messages over an extended period of time.

The company said it is also working on an update to GPT-5, released earlier this month, that will cause the chatbot to de-escalate conversations, and that it is exploring how to "connect people to certified therapists before they are in an acute crisis," including possibly building a network of licensed professionals that users could reach directly through ChatGPT.

In addition, OpenAI said it is looking into how to connect users with "those closest to them," such as friends and family members.

When it comes to teens, OpenAI said it will soon introduce parental controls that give parents options to gain more insight into how their children use ChatGPT.

Jay Edelson, the Raine family's lead attorney, told CNBC on Tuesday that no one from OpenAI has reached out to the family directly to offer condolences or to discuss any efforts to improve the company's safety measures.

"If you're going to use the most powerful consumer technology on the planet, you have to trust that the founders have a moral compass," Edelson said. "That's the question for OpenAI now: how can anyone trust them?"

Raine's story is not an isolated one.

Writer Laura Reiley published an essay in The New York Times earlier this month detailing how her 29-year-old daughter died by suicide after extensively discussing the idea with ChatGPT. And in Florida, 14-year-old Sewell Setzer III died by suicide last year after discussing it with an AI chatbot on the Character.AI app.

As AI services grow in popularity, concerns have mounted around their use for therapy, companionship and other emotional needs.

But regulating the industry could prove difficult.

On Monday, a coalition of AI companies, venture capitalists and executives, including OpenAI President and co-founder Greg Brockman, announced Leading the Future, a political operation that aims to "push back on policies that stifle innovation" when it comes to AI.

If you are having suicidal thoughts or are in distress, contact the Suicide & Crisis Lifeline at 988 for support and assistance from a trained counselor.

WATCH: OpenAI says the filing is consistent with an ongoing pattern of harassment

