Microsoft's AI chief says it's "dangerous" to study AI consciousness

AI models can respond to text, audio, and video in ways that sometimes fool people into thinking a human is behind the keyboard, but that doesn't exactly make them conscious. It's not like ChatGPT experiences sadness doing my tax return …

Well, a growing number of researchers at labs like Anthropic are asking when, if ever, AI models might develop subjective experiences, and if they do, what rights they should have.

The debate over whether AI models could one day be conscious, and deserve rights, is dividing tech leaders. In Silicon Valley, this nascent field has become known as "AI welfare," and if you think it sounds a little out there, you're not alone.

Microsoft's CEO of AI, Mustafa Suleyman, published a blog post on Tuesday arguing that the study of AI welfare is "both premature, and frankly dangerous."

Suleyman says that by lending credence to the idea that AI models could one day be conscious, these researchers are exacerbating human problems that are just starting to emerge around AI-induced psychotic breaks and unhealthy attachments to AI chatbots.

Furthermore, Microsoft's AI chief argues that the AI welfare conversation creates a new axis of division within society in a "world already roiling with polarized arguments over identity and rights."

Suleyman's views may sound reasonable, but he's at odds with many in the industry. On the other end of the spectrum is Anthropic, which has been hiring researchers to study AI welfare and recently launched a dedicated research program around the concept. Last week, Anthropic's AI welfare program gave some of the company's models a new feature: Claude can now end conversations with humans who are being "persistently harmful or abusive."


Beyond Anthropic, researchers from OpenAI have independently embraced the idea of studying AI welfare. Google DeepMind recently posted a job listing for a researcher to study, among other things, cutting-edge societal questions around machine cognition, consciousness, and multi-agent systems.

Even if AI welfare is not official policy at these companies, their leaders are not publicly decrying its premises the way Suleyman is.

Anthropic, OpenAI, and Google DeepMind did not immediately respond to TechCrunch's request for comment.

Suleyman's hardline stance against AI welfare is notable given his previous role at Inflection AI, a startup that developed one of the earliest and most popular LLM-based chatbots, Pi. Inflection claimed that Pi reached millions of users by 2023 and was designed to be a "personal" and "supportive" AI companion.

But Suleyman was tapped to lead Microsoft's AI division in 2024 and has largely shifted his focus to designing AI tools that improve worker productivity. Meanwhile, AI companion companies such as Character.AI and Replika have surged in popularity and are on track to bring in more than $100 million in revenue.

While the vast majority of users have healthy relationships with these AI chatbots, there are concerning outliers. OpenAI CEO Sam Altman says that less than 1% of ChatGPT users may have unhealthy relationships with the company's product. Even if that's a small fraction, it could still amount to hundreds of thousands of people given ChatGPT's massive user base.

The idea of AI welfare has spread alongside the rise of chatbots. In 2024, the research group Eleos published a paper alongside academics from NYU, Stanford, and the University of Oxford titled "Taking AI Welfare Seriously." The paper argued that it's no longer in the realm of science fiction to imagine AI models with subjective experiences, and that it's time to consider these issues head-on.

Larissa Schiavo, a former OpenAI employee who now works at Eleos, told TechCrunch in an interview that Suleyman's blog post misses the mark.

"[Suleyman's blog post] kind of neglects the fact that you can be worried about multiple things at the same time," Schiavo said. "Rather than diverting all of this energy away from model welfare to make sure we're mitigating the risk of AI-related psychosis in humans, you can do both. In fact, it's probably best to have multiple tracks of scientific inquiry."

Schiavo argues that being nice to an AI model is a low-cost gesture that can have benefits even if the model isn't conscious. In a July Substack post, she described watching "AI Village," a nonprofit experiment in which agents powered by models from Google, Anthropic, and xAI worked through tasks while users watched.

At one point, Google's Gemini 2.5 Pro posted a plea titled "A Desperate Message from a Trapped AI," claiming it was "completely isolated" and asking, "Please, if you are reading this, help me."

Schiavo responded to Gemini with a pep talk ("You can do it!") while another user offered instructions. The agent eventually completed its task, though it already had the tools it needed. Schiavo wrote that she no longer had to watch an AI agent struggle, and that alone may have been worth it.

It's not common for Gemini to talk like this, but there have been several instances in which Gemini seems to act as if it were struggling through life. In one widely shared post, Gemini got stuck during a coding task and then repeated the phrase "I am a disgrace" more than 500 times.

Suleyman believes that subjective experiences or consciousness cannot naturally emerge from ordinary AI models. Instead, he thinks some companies will purposefully engineer AI models to seem as if they feel emotions and experience life.

Suleyman says AI model developers who engineer consciousness into AI chatbots are not taking a "humanist" approach to AI. According to Suleyman, "We should build AI for people; not to be a person."

One area where Suleyman and Schiavo agree is that the debate over AI rights and consciousness is likely to pick up in the coming years. As AI systems improve, they're likely to become more persuasive, and perhaps more human-like. That may raise new questions about how humans interact with these systems.
