The biggest misconception in the debate about artificial intelligence (AI) is the assumption that it is all about innovation, productivity and competitiveness. To be clear from the outset: that is an important part of the picture, and AI offers huge opportunities. But the debate is also, and primarily, about power: Who will control the central resources of modern societies in the future? This is one of the greatest dangers for our liberal democracy.
It is not AI itself that threatens democracy, but the political economy of its development – and the speed at which it penetrates warfare, administration, the world of work, financial markets and communication. If AI resources are concentrated among a few private actors, then power shifts from democratically controlled spaces into opaque spheres. Whoever controls AI controls markets, information flows and the state’s ability to act.
Some tech billionaires are openly questioning democracy and influencing politics. Peter Thiel, whose ideology I analyzed in a previous column, put it bluntly: "I no longer believe that freedom and democracy are compatible."
State institutions are losing control
Regulating AI alone will no longer be enough in the future, because private companies are often technologically far ahead of state authorities. That has been true of almost every new technology, but the speed and dominance of AI create an unprecedented imbalance between the state and private AI owners. This is exactly where the democratic risk lies: If parliaments, courts and supervisory authorities can no longer understand, examine or effectively limit the systems they are supposed to control, the balance of power between the state and the economy will tip.
Several areas show how real this danger already is today. In the military sector, reports in 2024 described how Israel used AI to identify bomb targets in Gaza. More recently there have been reports of AI-accelerated war planning in the Iran war; at the same time, after the attack on a school in Minab, an investigation is under way into how target verification and data checking failed there.
The danger is also visible in the civilian state. The Dutch SyRI system for detecting social fraud was stopped by a court because it violated fundamental rights. And in Great Britain, an official algorithm wrongly flagged hundreds of thousands of benefit recipients as suspicious. If citizens do not know how such systems work and how they can defend themselves, constitutional administration becomes a black box.
Growing inequality and concentration of power
The second danger is a drastic increase in inequality. Technological progress does not automatically create prosperity for everyone. It can increase productivity while redistributing income, wealth and decision-making power upwards. Nobel Prize winners Daron Acemoglu and Simon Johnson show in their book "Power and Progress" that new technologies do not by themselves distribute wealth more fairly; without rules, they often first strengthen those who are already powerful.
Here too, the development is already concrete. A study by the National Bureau of Economic Research shows that companies that are more exposed to AI are more likely to reduce their positions in non-AI-related areas. This does not mean that mass unemployment can already be observed across the board today; but it does mean that the displacement begins asymmetrically. At the same time, a study by the European Parliament estimates that by 2030 more than half of employees in Europe could be subject to algorithmic management.
Added to this is the concentration at the top. The OECD warns that data, computing power and key positions create significant barriers to entry in AI value creation. Whoever controls the chips, clouds and foundation models also controls which companies can compete. Studies on AI-assisted recruitment likewise show that such systems can reproduce or re-encode existing discrimination.