
Jaap Arjens | Nurphoto | Getty Images
An artificial intelligence feature on iPhones is generating fake news alerts, raising concerns about the technology’s potential to spread misinformation.
Last week, Apple’s recently launched feature that summarizes users’ notifications using artificial intelligence pushed out an inaccurately summarized BBC News notification about the broadcaster’s story on the PDC World Darts Championship semi-final, falsely claiming that British darts player Luke Littler had won the championship.
The incident happened a day before the tournament’s actual final, which Littler did go on to win.
Then, hours after that incident, a separate notification created by Apple Intelligence, the tech giant’s AI system, falsely claimed that tennis legend Rafael Nadal had come out as gay.
The BBC has been trying to get Apple to fix the problem for about a month. The British public broadcaster complained to Apple in December after its AI feature generated a false headline suggesting that Luigi Mangione, the man arrested following the murder of UnitedHealthcare CEO Brian Thompson in New York, had shot himself, something that never happened.
Apple was not immediately available for comment when contacted by CNBC. On Monday, Apple told the BBC it was working on an update to address the problem, which would add a clarification showing when Apple Intelligence is responsible for the text displayed in notifications. Currently, AI-generated news alerts appear as though they come directly from the source.
“Apple Intelligence features are in beta and we are continuously making improvements with the help of user feedback,” the company said in a statement shared with the BBC. Apple added that it encourages users to report a concern if they see an “unexpected notification summary.”
The BBC is not the only news organization affected by Apple Intelligence inaccurately summarizing news alerts. In November, the feature sent an AI-summarized notification falsely claiming that Israeli Prime Minister Benjamin Netanyahu had been arrested.
Ken Schwencke, a senior editor at investigative journalism site ProPublica, flagged the error on the social media app Bluesky.
CNBC reached out to the BBC and ProPublica for comment on Apple’s proposed solution to the misinformation problem surrounding its AI feature.
Apple pitches its AI-generated notification summaries as an efficient way to group and rewrite previews of news app notifications into a single alert on a user’s lock screen.
Apple says the feature is designed to help users scan their notifications for key details and cut down on the overwhelming barrage of updates many smartphone users are familiar with.
However, this has led to what AI experts call “hallucinations” — AI-generated responses that contain false or misleading information.
“I suspect that Apple will not be alone in facing problems with AI-generated content. We’ve already seen numerous examples of AI services confidently telling mistruths, so-called ‘hallucinations,’” Ben Wood, chief analyst at tech market research firm CCS Insight, told CNBC.
In Apple’s case, because the AI tries to combine notifications and condense them into a basic summary of information, it has strung words together in a way that inaccurately characterizes events, while confidently presenting them as facts.
“Apple had the added complexity of trying to condense content into very short summaries, which ended up delivering the wrong message,” Wood added. “Apple will no doubt be looking to resolve this issue as quickly as possible, and I’m sure competitors will be watching closely to see how it responds.”
Generative AI works by trying to figure out the best possible answer to a question or prompt inserted by a user, drawing on the vast quantities of data its underlying large language models are trained on.
Sometimes the AI might not know the answer to a question. But because it has been programmed to always present a response to user prompts, this can result in cases where the AI effectively lies.
It’s unclear exactly when Apple will fix the bug in the notification summary feature. The iPhone maker says it expects the update to arrive in the “coming weeks.”