
Grok’s “white genocide” responses show how easily AI chatbots can be manipulated

Muhammed Selim Korkutata | Anadolu | Getty Images

In the more than two years since generative artificial intelligence took the world by storm following the public release of ChatGPT, trust has been a perennial problem.

Hallucination.

Elon Musk’s Grok, the chatbot created by his startup xAI, showed this week that there is a deeper reason for concern: AI can be easily manipulated by people.

Grok on Wednesday began responding to user queries with false claims of “white genocide” in South Africa. By the end of the day, screenshots of similar answers were posted across X, even when the questions had nothing to do with the topic.

After staying silent on the issue for more than 24 hours, xAI said late Thursday that Grok’s strange behavior was caused by an “unauthorized modification” to the chatbot’s so-called system prompts, the instructions that help shape how it behaves and interacts with users. In other words, humans were dictating the AI’s responses.
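To illustrate what a system prompt is, here is a minimal sketch in Python using the widely adopted OpenAI-style chat API convention; the client setup, model name and prompt wording are illustrative assumptions, not xAI’s actual configuration. The point is that whoever controls the system message steers every answer the model gives.

```python
# Minimal sketch of how a system prompt steers a chatbot's behavior.
# Model name, prompt text and client setup are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # assumes an API key is configured in the environment

system_prompt = (
    "You are a helpful assistant. Answer only the question asked "
    "and do not introduce unrelated topics."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[
        # Hidden instructions set by the operator, not the user
        {"role": "system", "content": system_prompt},
        # The visible user query
        {"role": "user", "content": "What is a system prompt?"},
    ],
)

print(response.choices[0].message.content)
```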

The nature of the response in this case ties directly to Musk, who was born and raised in South Africa. Musk, who owns xAI in addition to his roles as CEO of Tesla and SpaceX, has been promoting the false claim that violence against some South African farmers amounts to “white genocide,” a sentiment that President Donald Trump has also expressed.


“I think it is incredibly important because of the content and who leads this company, and the ways in which it suggests or sheds light on the power these tools have to shape people’s thinking and understanding of the world,” said Deirdre Mulligan, a professor at the University of California, Berkeley, and an expert in AI governance.

Mulligan characterized the Grok miscue as an “algorithmic breakdown” that “tears at the seams” the supposedly neutral nature of large language models. She said there was no reason to see Grok’s malfunction as an “exception.”

AI-powered chatbots created by Meta, Google and OpenAI are not “packaging” information in a neutral way, but instead pass data through “a set of filters and values that are built into the system,” Mulligan said. Grok’s breakdown offers a window into how easily any of these systems can be altered to serve the agenda of an individual or group.

Representatives from xAI, Google and OpenAI did not respond to requests for comment. Meta declined to comment.

Different from past problems

Grok’s unauthorized alteration, xAI said in its statement, violated “internal policies and core values.” The company said it would take steps to prevent similar incidents and would publish the app’s system prompts in order to “strengthen your trust in Grok as a truth-seeking AI.”

This is not the first AI blunder to go viral online. A decade ago, Google’s Photos app mistakenly labeled African Americans as gorillas. Last year, Google temporarily paused its Gemini AI image-generation feature after admitting it was producing “inaccuracies” in historical pictures. And OpenAI’s DALL-E image generator was accused by some users of showing signs of bias in 2022, leading the company to announce that it was implementing a new technique so images would “accurately reflect the diversity of the world.”

In 2023, 58% of AI decision-makers at companies in Australia, the U.K. and the U.S. expressed concern about the risk of hallucinations in generative AI deployments, Forrester found. The survey was conducted in September of that year and included 258 respondents.

Musk’s ambitions with Grok 3 are politically and financially driven, expert says

Experts told CNBC that the Grok incident is reminiscent of China’s DeepSeek, which became an overnight sensation in the United States earlier this year because of the quality of its new model and because it was reportedly built at a fraction of the cost of its U.S. competitors.

Critics say DeepSeek censors topics deemed sensitive to the Chinese government. Like China with DeepSeek, Musk appears to be influencing results based on his political views, they say.

When xAI debuted Grok in November 2023, Musk said it was meant to have “a bit of wit,” a “rebellious streak” and to answer “spicy questions” that competitors might shy away from. In February, xAI blamed an engineer for a change that suppressed Grok’s responses to user questions about misinformation, omitting Musk’s and Trump’s names from the answers.

But Grok’s recent obsession with “white genocide” in South Africa is more extreme.

Petar Tsankov, CEO of AI model auditing firm LatticeFlow AI, said the Grok blowup is more surprising than what was seen from DeepSeek, because with the Chinese model it was assumed “there would be some manipulation from China.”

Tsankov, whose company is based in Switzerland, said the industry needs greater transparency so users can better understand how companies build and train their models and how that affects behavior. He noted EU efforts to require more technology companies to provide transparency as part of broader regulations in the region.

Without a public outcry, “we will never deploy safer models,” Tsankov said, and it is “people who will pay the price” for putting their trust in the companies that develop them.

Mike Gualtieri, an analyst at Forrester, said the Grok debacle is unlikely to slow user growth for chatbots or reduce the investments that companies are pouring into the technology. He said users have a certain level of acceptance for these kinds of incidents.

“Whether it’s Grok, ChatGPT or Gemini, everyone expects it,” Gualtieri said. “They’ve been told how the models hallucinate. There’s an expectation this will happen.”

Olivia Gambelin, an AI ethicist and author of the book “Responsible AI,” published last year, said that while this type of activity from Grok may not be surprising, it underscores a fundamental flaw in AI models.

The episode, Gambelin said, is “showing that it is possible, at least with Grok models, to adjust these major general-purpose models.”

– Lora Kolodny and Salvador Rodriguez contributed to this report

Watch: Elon Musk’s xAI chatbot Grok causes a stir with statements about South African “white genocide.”

