Anthropic is launching a new program to study AI “model welfare”

Could future AIs be “conscious,” and experience the world in a way similar to humans? There is no strong evidence that they will, but Anthropic isn’t ruling out the possibility.

On Thursday, the AI lab announced that it has started a research program to investigate — and prepare to navigate — what it calls “model welfare.” As part of the effort, Anthropic says it will explore questions such as how to determine whether a model’s “welfare” deserves moral consideration, and possible “low-cost” interventions.

There is major disagreement within the AI community about which human characteristics, if any, models exhibit, and how we should “treat” them.

Many academics believe that today’s AI can’t approximate consciousness or human experience, and won’t necessarily be able to in the future. AI as we know it is a statistical prediction engine. It isn’t really “thinking” or “feeling” as those concepts are traditionally understood. Trained on countless examples of text, images, and so on, AI learns patterns and sometimes useful ways of extrapolating to solve tasks.

As Mike Cook, a research fellow at King’s College London specializing in AI, said in an interview, a model can’t “oppose” a change in its “values” because models don’t have values. To suggest otherwise is to project onto the system.

“Anyone anthropomorphizing AI systems to this degree is either playing for attention or seriously misunderstanding their relationship with AI,” Cook said. “Is an AI system optimizing for its goals, or is it ‘acquiring its own values’? It’s a matter of how you describe it, and how flowery the language you want to use about it is.”

Another researcher, Stephen Casper, a doctoral student at MIT, said he sees AI as an “imitator” that engages in “all sorts of confabulation” and says “all sorts of frivolous things.”

Yet other scientists insist that AI does have values and other human-like components of moral decision-making. A study from the Center for AI Safety, an AI research organization, implies that AI has value systems that drive its priorities.

Anthropic has been laying the groundwork for its model welfare initiative for some time. Last year, the company hired its first dedicated “AI welfare” researcher, Kyle Fish, to develop guidelines for how Anthropic and other companies should approach the issue. (Fish, who leads the new model welfare research program, told The New York Times that he thinks there is a real chance Claude or another AI is conscious today.)

In a blog post Thursday, Anthropic acknowledged that there is no scientific consensus on whether current or future AI systems could be conscious or could have experiences that warrant ethical consideration.

“In light of this, we’re approaching the topic with humility and with as few assumptions as possible,” the company said. “We recognize that we’ll need to regularly revise our ideas as the field develops.”
