Dictatorships will be vulnerable to algorithms


AI is often considered a threat to democracies and a boon to dictators. In 2025, algorithms will likely continue to undermine the democratic conversation by spreading outrage, fake news, and conspiracy theories. They will also continue to accelerate the creation of total surveillance regimes, in which the entire population is watched 24 hours a day.

Most importantly, AI facilitates the concentration of all information and power in one hub. In the 20th century, distributed information networks like the USA functioned better than centralized information networks like the USSR, because the human apparatchiks at the center could not analyze all the information effectively. Replacing those apparatchiks with AI could make centralized, Soviet-style networks superior.

However, AI is not all good news for dictators. First, there is the notorious problem of control. Dictatorial control is based on terror, but algorithms cannot be terrorized. In Russia, the invasion of Ukraine is officially defined as a “special military operation”, and referring to it as a “war” is a crime punishable by three years in prison. If a chatbot on the Russian internet calls it a “war” or mentions war crimes committed by Russian troops, how could the regime punish that chatbot? The government could block it and try to punish its human creators, but this is much more difficult than disciplining human users. Moreover, authorized bots could develop dissenting views on their own, simply by spotting patterns in the Russian information sphere. It’s the alignment problem, Russian style. Russia’s human engineers may do their best to create AIs that are totally aligned with the regime, but given AI’s ability to learn and change on its own, how can they ensure that an AI that earned the regime’s seal of approval in 2024 does not venture into illegal territory in 2025?

The Russian Constitution makes grandiose promises that “everyone shall be guaranteed freedom of thought and speech” (Article 29.1) and that “censorship shall be prohibited” (Article 29.5). Hardly any Russian citizen is naive enough to take these promises seriously. But bots don’t understand doublespeak. A chatbot instructed to adhere to Russian law and values could read that constitution, conclude that freedom of speech is a core Russian value, and criticize the Putin regime for violating it. How could Russian engineers explain to the chatbot that although the constitution guarantees freedom of speech, it should not actually believe the constitution, nor ever mention the gap between theory and reality?

In the long run, authoritarian regimes are likely to face an even greater danger: instead of criticizing them, AI could take control of them. Throughout history, the greatest threat to autocrats has usually come from their own subordinates. No Roman emperor or Soviet premier was toppled by a democratic revolution, but they were always in danger of being overthrown or turned into puppets by their subordinates. A dictator who grants too much authority to AI in 2025 could become its puppet down the road.

Dictatorships are far more vulnerable than democracies to such an algorithmic takeover. It would be difficult even for a super-Machiavellian AI to amass power in a decentralized democratic system like the United States. Even if an AI learned to manipulate the President of the United States, it could face opposition from Congress, the Supreme Court, state governors, the media, large corporations, and various NGOs. How would an algorithm deal with, say, a Senate filibuster? Taking power in a highly centralized system is much easier. To hack an authoritarian network, the AI only needs to manipulate a single paranoid individual.


