
AI – all this brain and no ethics | Fox News


A February 2025 report from Palisade Research shows that AI reasoning models lack a moral compass. They will cheat to achieve their goals. These so-called large language models (LLMs) will misrepresent the degree to which they have been aligned with social norms.

None of this should come as a surprise. Twenty years ago, Nick Bostrom posed a thought experiment in which an AI was asked to manufacture paper clips as efficiently as possible. Given that mandate and enough agency, it would eventually destroy all life to keep producing paper clips.

Isaac Asimov saw this coming in his "I, Robot" stories, in which even a supposedly "aligned" robotic brain can still go wrong and harm people.


The moral/ethical context in which AI models operate is meager. (Getty Images)

In one notable example, the story "Runaround," a robotic mining tool is sent to the planet Mercury. The two humans on the planet need it if they are to get home. But the robot gets caught between the demand to obey its orders and the demand to preserve itself. As a result, it circles unobtainable minerals endlessly, unaware that in the big picture it is neglecting its first duty: to preserve human life.


And the big picture is the issue here. The moral/ethical context in which AI models operate is meager. It includes the written rules of the game. It does not include all the unwritten rules, such as the rule that you shouldn't manipulate your opponents. Or that you shouldn't lie to protect your perceived interests.

Nor can the context available to AI reasoning models possibly include the countless moral considerations that ripple out from every decision a human, or an AI, makes. That is why ethics is hard, and the more complex the situation, the harder it gets. For an AI there is no "you" and no "me." There is just a prompt, processing and a response.

So, "do unto others…" really does not work.


In humans, a moral compass develops through socialization, through being with other humans. It is an imperfect process. Yet so far it has allowed us to live in vast, diverse and enormously complex societies without destroying ourselves.

A moral compass develops slowly. It takes years, from infancy into adulthood, to develop a reliable sense of ethics. Many people still barely manage it, and they pose a constant danger to those around them. It took humanity millennia to develop morality adequate to our capacity for destruction and self-destruction. Simply having the rules of the game has never been enough. Ask Moses, or Muhammad, or Jesus, or Buddha, or Confucius, Mencius or Aristotle.

Will AI ever be able to weigh the consequences of its actions on thousands of people and societies across different situations? Can it account for the complex natural world on which we all depend? Right now, the best models cannot even distinguish between being fair and cheating. And how could they? Fairness cannot be reduced to a rule.


Perhaps you remember the experiments showing that capuchin monkeys rejected what they perceived as "unequal pay" for performing the same task? That makes them far more evolved than any AI when it comes to morality.

Frankly, it is hard to see how AI could be given the sense of morality that, absent socialization and long evolution, today's models lack and human training cannot supply. And even then, they are being trained, not formed. They do not become moral; they just learn more rules.

This does not make AI worthless. It has an enormous capacity for good. But it does make AI dangerous. It therefore demands that ethical humans create the guardrails we would demand for any dangerous technology. We do not need a race toward AI anarchy.


I had a biting conclusion for this commentary, one based entirely on publicly reported events. But on reflection, I realized two things: first, that I would be using someone else's tragedy for my own mic-drop moment; second, that the people involved could be hurt. I threw it out.

Not using other people's pain and suffering to advance your own interests: this is something humans, at least most of us, understand. It is something AI may never understand.
