
Deepfakes barely affected the 2024 elections because they are not very good, research finds


Even as the internet increasingly drowns in fake images, we can apparently take some comfort in humanity's ability to sniff out BS when it matters. A batch of recent research suggests that AI-generated disinformation had no material impact on this year's elections around the globe because it is not very good.

There has been concern for years that increasingly realistic but synthetic content could manipulate audiences in harmful ways. The rise of generative AI has revived those fears, since the technology makes it much easier for anyone to produce fake audio and visual media that appear real. Back in January, a political consultant used AI to clone the voice of President Biden for a robocall telling voters in New Hampshire to stay home during the state's Democratic primary.

Tools like ElevenLabs make it possible to upload a short clip of someone speaking and then duplicate their voice to say whatever the user wants. Although many commercial AI tools include guardrails to prevent this use, open-source models are freely available.

Despite these advances, the Financial Times, in a new story looking back at the year, found that very little synthetic political content went viral anywhere in the world.

The story cites a report from the Alan Turing Institute which found that only 27 pieces of AI-generated content went viral during this summer's European elections. The report concluded there was no evidence the elections were affected by AI disinformation because “most exposure was concentrated among a minority of users with political beliefs already aligned with the ideological narratives embedded in such content”. In other words, among the few who saw the content (before it was presumably flagged) and were inclined to believe it, it merely reinforced existing beliefs about a candidate, even when those exposed knew the content itself was AI-generated. The report cited the example of AI-generated images showing Kamala Harris addressing a rally in front of Soviet flags.

In the United States, the News Literacy Project identified more than 1,000 examples of misinformation about the presidential election, but only 6% were made with AI. On X, mentions of “deepfake” or “AI-generated” in Community Notes typically spiked with the release of new image-generation models, not around election time.

Interestingly, social media users were more likely to misidentify real images as AI-generated than the other way around, but in general, users showed a healthy dose of skepticism. And fake media can still be debunked through official communication channels, or through other means like Google’s reverse image search.
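As a rough illustration of that last point, here is a minimal Python sketch that builds a reverse image search link for a suspect image. It assumes the image is hosted at a public URL and uses Google's legacy searchbyimage endpoint (which current browsers typically redirect to Google Lens); the example URL and function name are hypothetical and purely for illustration.

```python
import urllib.parse
import webbrowser


def reverse_image_search_url(image_url: str) -> str:
    """Build a Google reverse image search URL for a publicly hosted image.

    Assumes the legacy 'searchbyimage' endpoint; the image must be reachable
    at a public URL for Google to fetch and match it.
    """
    return (
        "https://www.google.com/searchbyimage?image_url="
        + urllib.parse.quote(image_url, safe="")
    )


if __name__ == "__main__":
    # Hypothetical image URL used purely for illustration.
    suspect = "https://example.com/viral-rally-photo.jpg"
    # Open the search in the default browser to look for earlier or original copies.
    webbrowser.open(reverse_image_search_url(suspect))
```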

If these findings are accurate, it would make a lot of sense. AI imagery is everywhere these days, but AI-generated images still have an uncanny quality to them, exhibiting telltale signs of being fake. An arm might be unusually long, or a face might not reflect properly on a mirrored surface; there are many small cues that give away a synthetic image. Photoshop can be used to create far more convincing fakes, but doing so requires skill.

Proponents of AI will not necessarily welcome this news, since it means generated imagery still has a way to go. Anyone who has tried OpenAI’s Sora model knows the video it produces is just not very good; it looks almost like something created by a video game graphics engine (one speculation is that it was trained on video games), and it clearly does not understand properties like physics.

That said, there are still reasons for concern. The Alan Turing Institute report does, after all, conclude that beliefs can be reinforced by a realistic deepfake containing misinformation even when the audience knows the media is not real; that confusion over whether a piece of media is authentic damages trust in online sources; and that AI imagery has already been used to target female politicians with pornographic deepfakes, which can harm them psychologically and professionally while reinforcing sexist beliefs.

The technology will definitely continue to improve, so it’s something to keep an eye on.


