
Human misuse will make artificial intelligence more dangerous

OpenAI CEO Sam Altman expects AGI, or artificial general intelligence – AI that surpasses humans at most tasks – around 2027 or 2028. Elon Musk's prediction is 2025 or 2026, and he has claimed he was "losing sleep over the threat of AI danger". Such predictions are wrong. As the limitations of current AI become increasingly clear, most AI researchers have come to the view that simply building bigger and more powerful chatbots will not lead to AGI.

However, in 2025, AI will still pose a massive risk: not from artificial superintelligence, but from human misuse.

These could be unintentional misuses, such as lawyers relying excessively on AI. After the release of ChatGPT, for example, a number of lawyers have been sanctioned for using AI to generate erroneous court briefings, apparently unaware of chatbots' tendency to make things up. In British Columbia, lawyer Chong Ke was ordered to pay opposing counsel's costs after she included fictitious AI-generated cases in a legal filing. In New York, Steven Schwartz and Peter LoDuca were fined $5,000 for providing false citations. In Colorado, Zachariah Crabill was suspended for a year for using fictitious court cases generated with ChatGPT and blaming a "legal intern" for the mistakes. The list is growing fast.

Other misuses are intentional. In January 2024, sexually explicit Taylor Swift deepfakes flooded social media platforms. The images were created using Microsoft's AI "Designer" tool. While the company had guardrails to prevent generating images of real people, a misspelling of Swift's name was enough to get around them. Microsoft has since fixed this flaw. But Taylor Swift is only the tip of the iceberg: non-consensual deepfakes are proliferating widely, in part because open-source tools for creating them are publicly available. Legislation currently being pursued around the world seeks to combat deepfakes in the hope of curbing the damage. Whether it will be effective remains to be seen.

In 2025, it will become even harder to distinguish what is real from what is made up. The fidelity of AI-generated audio, text and images is already remarkable, and video will follow. This could lead to the "liar's dividend": those in positions of power repudiating evidence of their own misbehavior by claiming it is fake. In 2023, Tesla argued that a 2016 video of Elon Musk could be a deepfake, in response to allegations that the CEO had exaggerated the safety of Tesla's Autopilot, leading to a crash. An Indian politician claimed that audio clips of him acknowledging corruption in his political party were faked (the audio in at least one of the clips was verified as real by a press outlet). And two defendants in the January 6 riots claimed that the videos in which they appeared were deepfakes. Both were found guilty.

Meanwhile, companies are exploiting public confusion to sell fundamentally dubious products by labeling them "AI." This can go badly wrong when such tools are used to classify people and make consequential decisions about them. The hiring company Retorio, for example, claims that its AI predicts candidates' job suitability from video interviews, but a study found that the system can be fooled simply by the presence of glasses or by replacing a plain background with a bookshelf, showing that it relies on superficial correlations.

There are also dozens of applications in healthcare, education, finance, criminal justice and insurance where AI is currently being used to deny people important life opportunities. In the Netherlands, the Dutch tax authority used an AI algorithm to identify child care benefits fraud. It wrongly accused thousands of parents, often demanding they pay back tens of thousands of euros. In the fallout, the Prime Minister and his entire cabinet resigned.

In 2025, we expect AI risks to arise not from AI acting on its own, but from what people do with it. That includes cases where it seems to work well and is relied on too heavily (lawyers using ChatGPT); where it works well and is misused (non-consensual deepfakes and the liar's dividend); and where it is simply not fit for purpose (denying people their rights). Mitigating these risks is a mammoth task for businesses, governments and society. It will be hard enough without being distracted by sci-fi concerns.


