Sam Altman, CEO of OpenAI, and Lisa Su, CEO of Advanced Micro Devices, testify during the Senate Commerce, Science and Transportation Committee hearing titled "Winning the AI Race: Strengthening U.S. Capabilities in Computing and Innovation," in the Hart Building on Thursday, May 8.
Tom Williams | CQ-Roll Call, Inc. | Getty Images
In a wide-ranging interview last week, OpenAI CEO Sam Altman addressed many of the moral and ethical questions raised by his company and its popular ChatGPT AI model.
"Listen, I don't sleep that well at night. There are a lot of things I feel a lot of weight on, but probably nothing more than the fact that every day, hundreds of millions of people talk to our model," Altman told former Fox News host Tucker Carlson in a nearly hour-long interview.
"I don't actually worry about us getting the big moral decisions wrong," Altman said, though he admitted, "maybe we will get those wrong too."
Rather, he said he loses the most sleep over the "very small decisions" about model behavior, which can ultimately have big repercussions.
Those decisions tend to center on the ethics that inform ChatGPT, and which questions the chatbot does and doesn't answer. Here's an outline of some of the moral and ethical dilemmas that seem to keep Altman up at night.
By far the hardest problem the company has grappled with recently, Altman said, is how ChatGPT approaches suicide, in light of a lawsuit from a family who blamed the chatbot for their teenage son's suicide.
The CEO said that of the thousands of people who die by suicide every week, many could have been talking to ChatGPT in the lead-up.
"They probably talked about [suicide], and we probably didn't save their lives," Altman said candidly. "Maybe we could have said something better. Maybe we could have been more proactive. Maybe we could have provided a little bit better advice about, hey, you need to get this help."
Last month, the parents of Adam Raine filed a product liability and wrongful death lawsuit against OpenAI after their son died by suicide at age 16. In the lawsuit, the family said that "ChatGPT actively helped Adam explore suicide methods."
Soon after, in a blog post titled "Helping people when they need it most," OpenAI detailed plans to address ChatGPT's shortcomings when handling "sensitive situations," and said it would continue improving its technology to protect people who are at their most vulnerable.
Another big topic raised in the interview was the ethics and morals that inform ChatGPT and its leaders.
While Altman described ChatGPT's base model as trained on the collective experience, knowledge and learnings of humanity, he said that OpenAI must then align certain behaviors of the chatbot and decide which questions it won't answer.
"This is a really hard problem. We have a lot of users now, and they come from very different life perspectives... But on the whole, I have been pleasantly surprised with the model's ability to learn and apply a moral framework."
Pressed on how those model specifications are decided, Altman said the company had consulted "hundreds of moral philosophers and people who thought about the ethics of technology and systems."
One example he gave of a specification that was made is that ChatGPT avoids answering questions about how to make biological weapons, even when prompted by users.
"There are clear examples of where society has an interest that is in significant tension with user freedom," Altman said, though he added that the company "won't get everything right, and also needs the input of the world" to help make these decisions.
Another major topic of discussion was the concept of user privacy, with Carlson arguing that generative AI could be used for "totalitarian control."
In response, Altman said one policy he has been pushing for in Washington is "AI privilege," which refers to the idea that anything a user tells a chatbot should be completely confidential.
"When you talk to your doctor about your health or your lawyer about your legal problems, the government can't get that information, right?... I think we should have the same concept for AI."
This would allow users to consult AI chatbots about their medical history and legal problems, among other things, according to Altman. Currently, U.S. officials are able to subpoena the company for user data, he added.
"I feel optimistic that we can get the government to understand the importance of this," he said.
When asked by Carlson whether ChatGPT will be used by the military to harm humans, Altman didn't provide a direct answer.
"I don't know the way that people in the military use ChatGPT today... but I suspect there are a lot of people in the military talking to ChatGPT for advice."
He later added that he wasn't sure "exactly how to feel about that."
OpenAI was one of the AI companies that received a $200 million contract from the U.S. Department of Defense to put generative AI to work for the U.S. military. The firm said in a blog post that it will give the U.S. government access to its AI models for national security uses, along with support and product roadmap information.
In the interview, Carlson predicted that on its current trajectory, generative AI, and by extension Sam Altman, could amass more power than any other person, going so far as to call ChatGPT a "religion."
In response, Altman said he had worried about the concentration of power that could result from generative AI, but he now believes that AI will lead to "a huge up-leveling" of all people.
"What's happening now is tons of people use ChatGPT and other chatbots, and they're all more capable. They're all kind of doing more. They're all able to achieve more, start new businesses, come up with new knowledge, and that feels pretty good."
However, the CEO said he does believe AI will eliminate many of the jobs that exist today, especially in the short term.