Will AI Threaten or Empower Democracy?
Sebastian Wijas
Democracy, although rooted in the philosophical and political tradition of European civilisation, remains the subject of ongoing debate about how it should function in new social contexts. At its core lie the ideas of the common good, debate and active citizen participation in public affairs. Meanwhile, the contemporary technological revolution – especially the dynamic development of artificial intelligence – challenges these foundations. Artificial intelligence not only changes the ways in which information is managed and decisions are made, but also transforms the very public space in which democratic life takes place. The aim of this paper is to analyse whether artificial intelligence will strengthen or weaken democracy. The hypothesis is that it contains elements that do both, but that at the centre of the discussion stands the emerging threat of the depoliticisation of the public sphere – a process in which technological influence on the public and private spheres replaces the confrontation of different visions of the common good. This leads to a weakening of the role of the citizen as a political subject and a blurring of the responsibility of the authorities, who hide behind the guise of the ‘objective’ determinations of technical systems.
Democracy is today the most widespread form of government, a system based on the belief that the people, as a collective of free citizens, should decide the political direction of their community. In antiquity, Plato, reflecting on the nature of good government, proposed the concept of rule by philosopher kings – those who, thanks to their knowledge of goodness and justice, are best qualified to govern. Although his vision was anti-democratic from the perspective of the modern understanding of this form of government, over time it evolved towards the idea of a state based on the rule of elites – educated, guided by reason and responsible for the political community as a whole. Contemporary thinkers, however, made a fundamental change: democracy began to be understood not as the rule of the best, but as an expression of the will of the people, in which participation, public debate and the ability to collectively resolve conflicts of values play the major role. And yet, despite the difference in assumptions, Plato's idea of the rule of the wise can be interpreted as a harbinger of modern technocracy – the belief that complex social problems are best solved by expert knowledge, data management and rational, often top-down decisions.[1],[2]
In this light, a tension arises: democracy, although based on debate and participation, increasingly competes with a model of governance in which key decisions are made in closed circles of specialists or, more recently, by technical systems such as artificial intelligence. Meanwhile, the essence of democracy is not limited to the act of voting or participating in debate – it requires the active involvement of citizens, the ability to reflect on the common good and an awareness of one's own agency. It is precisely this communal dimension that is increasingly threatened in a world where technical and algorithmic logic is beginning to supplant political logic. In Reflective Democracy, Goodin argued for a form of democratic deliberation in which people imagine themselves in the position of others[3]. In this way, they exercise a political imagination grounded both in their own individual reflection and in reflection on the perspectives of others in the community.
Artificial intelligence lacks imagination, because imagination is a trait of conscious beings. Yet this technology is no longer merely an analytical tool for studying social reality; it has become a constructor of social imagination. In The Question Concerning Technology, Heidegger analysed the essence of modern technology and argued (to simplify) that the greatest threat is not technology itself or its essence, but our ignorance of the fact that its essence – the way it reveals the world as a resource – draws us into a certain way of thinking and acting. When technical disclosure becomes dominant, supplanting other forms such as art and philosophical reflection, people themselves become part of the resource, losing their subjectivity[4]. In this context, government becomes essentially a large administrative machine for managing resources, with efficiency as its goal. This is one of the greatest challenges associated with artificial intelligence, reaching beyond the challenges of legal regulation.
However, artificial intelligence is a giant leap towards more efficient and faster analysis of huge data sets, which can provide real support to humans in many areas of life – from medicine and education to public administration and crisis management. The analytical knowledge gained in this way can, among other things, contribute to more accurate political judgements, enabling rapid diagnosis of social problems or prediction of the consequences of specific decisions. Yet, as Gorwa, Binns and Katzenbach point out, knowledge alone is not enough to create policy. They draw attention to the serious risk of depoliticisation, i.e. the removal from public debate of disputes over values, interests or different visions of the common good[5]. As decisions are increasingly made on the basis of algorithmic recommendations, the locus of decision-making shifts away from political debate until, at the limit, there is no debate at all. Conflicts become unnecessary because they are ‘ineffective’, and values are ‘set aside’ in the private sphere. One of the main critics of this approach is Evgeny Morozov, who in his analyses of social technologies describes this phenomenon as solutionism – the assumption that all social problems can be solved with technology, without the need to consider their political, cultural or moral context[6]. Within this logic, artificial intelligence becomes more than just a tool – it takes the form of a modern oracle whose predictions gain the status of ‘objective facts’. If political decisions are made before they even become the subject of public debate – because the data is ‘objective’ – then the role of citizens as co-creators of policy is marginalised. The authorities avoid responsibility for choosing priorities or having to openly take sides. In this way, technology not only influences politics but actually obscures it, creating the impression that values have already been ‘decided’ in the data.
This phenomenon poses a serious cognitive threat to democracy – it undermines the foundations of deliberation, pluralism and communal reflection on the public good. In such a system, the space for individual reflection and recognition of other people's perspectives is disrupted, and civic relations are reduced to consenting to solutions ‘proposed’ by algorithms. Similar consequences are brought about by the functioning of social media platforms, which use artificial intelligence-based algorithms to fragment the public sphere. Such an environment fosters polarisation, radicalisation and epistemic stagnation – the space for a shared political imagination capable of transcending one's own information bubbles disappears. At the same time, it is worth remembering that a new generation is growing up whose political socialisation took place – and continues to take place – mainly through digital platforms. Their understanding of democracy, civic participation and public debate is shaped not in schools or public squares, but on TikTok, Instagram and X (formerly Twitter), where the pace, style and manner of content presentation are subordinated to the logic of the platforms. All this shows that artificial intelligence – although it does not create politics itself – is radically transforming the conditions in which it operates.[7]
However, artificial intelligence also presents an opportunity to widen access to education and knowledge, which is an important step in social and civic development. The question arises, however, whether education based solely on technology, with limited participation of people acting as guides, teachers and role models, will not erode what is essential in upbringing: the interpersonal aspect of human formation – and this does not apply only to younger generations. We are already seeing a growing trend towards replacing human contact with artificial intelligence, such as chatbots that handle customer problems as a cheaper alternative. Companies are transferring more and more processes to machines, and access to real people is becoming difficult and often discouraging.[8] This raises concerns about a new social divide: the privileged have access to human support, while the rest must settle for impersonal, formulaic service from AI systems. In this model, human interaction becomes a form of ‘premium access’ – a luxury resource that deepens not only economic inequalities but also cognitive and emotional ones, limiting the ability to participate in public life.
In the past, the clergy protected the sacred and the aristocracy inherited privileges – today, ‘algorithm engineers’ shape the perceptions of billions of people. However, while former rulers bore personal responsibility (before God or the people), decision-making in technocracy is spread across many positions, making it difficult to identify specific culprits. This dispersed structure of power does not mean an absence of responsibility, but rather its blurring and its displacement beyond the traditional framework in which guilt or decision-makers can be clearly identified. As John Paul II notes, ‘structures of sin’ are systems that function in such a way that individual responsibility becomes blurred and evil or injustice becomes the result of the operation of entire mechanisms.[9] In the context of artificial intelligence, this dynamic becomes even more complex. Algorithms, although designed by humans, operate on the basis of huge data sets and autonomous processes, making it difficult to understand why a particular decision was made and who is responsible for it.[10]
This situation poses a serious threat to democracy, as the inability to assign responsibility weakens control and accountability. If we do not know who is responsible for decisions affecting our rights, freedoms or daily lives, it is difficult to effectively defend ourselves against abuses or mistakes. As a result, a new form of power – technopower – is prompting a new approach to accountability and transparency. Mechanisms are needed to track the decision-making processes of algorithms and assign responsibility for their effects, as well as to protect citizens' rights from the actions of technological systems. This is not only a technical challenge, but above all a political and ethical one. Without such an approach, democracy may be supplanted by systems that, although effective, operate beyond the reach of democratic control and reflection. In light of the above, it is necessary to understand that the concentration of knowledge and power in the hands of ‘algorithm engineers’ is not merely a technological problem, but a fundamental threat to the democratic community itself.[11]
The process of regulating technocracy is ongoing, and law remains the purest expression of the democratic will. Innovation in itself is not a bad thing – it is a manifestation of human creativity. The primacy of democracy over technology is crucial, but the problem arises when powerful players break the law, manipulate public debate and avoid responsibility. Examples include the leadership of the platform X, which blackmailed political leaders with the threat of withdrawal from the European market during regulatory negotiations, or the leadership of Facebook, which paid a fine for concealing information during the acquisition of WhatsApp – revealing a deeper problem: the lack of a legal obligation to tell the truth to democratic institutions.[12],[13]
Nevertheless, the regulatory process has the potential to strengthen democracy – not only through control over technology or business models, but above all through control over the logic of innovation implementation and its democratic oversight. However, technology, especially artificial intelligence, is changing the way we perceive the world as a resource to be managed. In the spirit of Heidegger's warning about technology, if democracy is to survive, it must defend the vision of man as a free being – a subject capable of reflection and open to different ways of thinking in order to debate and achieve the common good.
Bibliography:
- Adam, M., Wessel, M., & Benlian, A. (2021). AI-based chatbots in customer service and their effects on user compliance. Electronic Markets, 31(2), 427-445.
- Coeckelbergh, M. (2022). The political philosophy of AI: An introduction. Polity Press.
- Goodin, R. E. (2003). Reflective democracy. Oxford University Press.
- Gorwa, R., Binns, R., & Katzenbach, C. (2020). Algorithmic content moderation: Technical and political challenges in the automation of platform governance. Big Data & Society, 7(1).
- Heidegger, M. (1977). The question concerning technology and other essays. Garland Publishing.
- John Paul II. (1987). Sollicitudo rei socialis. Vatican.
- Kulesza, R. (2015). Demokracje antyczne i współczesne. In: P. Fiktus, M. Marszał, H. Malewski, & J. Koredczuk (Eds.), Rodzinna Europa - europejska myśl polityczno-prawna u progu XXI wieku. E-Wydawnictwo.
- Morozov, E. (2013). To save everything, click here: The folly of technological solutionism. Public Affairs.
- Nemitz, P. (2023, December 16). Interview: Paul Nemitz: 'Democracy must prevail over technology and business models for the common good'. Voxeurop.eu: https://voxeurop.eu/en/paul-nemitz-democracy-prevail-over-technology-business-models-common-good/
- Newsroom Brussels Times. (2023). Elon Musk is considering X's withdrawal from the EU. Brussels Times: https://www.brusselstimes.com/754072/elon-musk-is-considering-xs-withdrawal-from-the-eu
- Stawrowski, Z. (2008). Niemoralna demokracja. Ośrodek Myśli Politycznej.
- Trębecki, J. (2023). Pułapki cywilizacji informacyjnej, czyli czego nie dadzą nam social media? In: Skutki i zagrożenia cywilizacji informacyjnej. Komitet Prognoz PAN.
- Wawrzyniak, B., & Iwanowski, D. (2021). Cyfrowy monopol - Nadużycia, których dopuszczają się największe korporacje technologiczne. Instrat Policy Paper, 02/2021. Fundacja Instrat.
[1] Stawrowski, Z. (2008). Niemoralna demokracja (pp. 24-32). Cracow: Ośrodek Myśli Politycznej.
[2] Kulesza, R. (2015). Demokracje antyczne i współczesne (pp. 26-27). In: P. Fiktus, M. Marszał, H. Malewski, & J. Koredczuk (Eds.), Rodzinna Europa - europejska myśl polityczno-prawna u progu XXI wieku. Wroclaw: E-Wydawnictwo.
[3] Goodin, R. E. (2003). Reflective Democracy (pp. 75-90). Oxford: Oxford University Press.
[4] Heidegger, M. (1977). The Question Concerning Technology and Other Essays (pp. 14-35). New York: Garland Publishing.
[5] Gorwa, R., Binns, R., & Katzenbach, C. (2020). Algorithmic content moderation: Technical and political challenges in the automation of platform governance. Big Data & Society, 7(1), pp. 10-12.
[6] Morozov, E. (2013). To Save Everything, Click Here: The Folly of Technological Solutionism (pp. 1-16). New York, NY: Public Affairs.
[7] Trębecki, J. (2023). Pułapki cywilizacji informacyjnej, czyli czego nie dadzą nam social media? (pp. 201-203). In: Skutki i zagrożenia cywilizacji informacyjnej. Warsaw: Komitet Prognoz PAN.
[8] Adam, M., Wessel, M., & Benlian, A. (2021). AI-based chatbots in customer service and their effects on user compliance (pp. 427-445). In: Electronic Markets, 31(2).
[9] John Paul II. (1987). Sollicitudo rei socialis (point 36). Vatican.
[10] Coeckelbergh, M. (2022). The Political Philosophy of AI: An Introduction (pp. 44-46). Cambridge: Polity Press.
[11] Wawrzyniak, B., & Iwanowski, D. (2021). Cyfrowy monopol - Nadużycia, których dopuszczają się największe korporacje technologiczne (pp. 22-40). In: Instrat Policy Paper nr 02/2021. Warsaw: Fundacja Instrat.
[12] Newsroom Brussels Times. (2023). Elon Musk is considering X's withdrawal from the EU. The Brussels Times. https://www.brusselstimes.com/754072/elon-musk-is-considering-xs-withdrawal-from-the-eu
[13] Nemitz, P. (2023, December 16). Interview: Paul Nemitz: ‘Democracy must prevail over technology and business models for the common good’. In: Voxeurop.eu. https://voxeurop.eu/en/paul-nemitz-democracy-prevail-over-technology-business-models-common-good/