This collection is now closed to submissions.
As Artificial Intelligence (AI) is gradually applied to all facets of societal governance, there is an increasing need to explore the ways in which AI impacts democracy. The AI and Democracy collection brings together high-quality, open-access publications and original research on the ways public services and democracy are conceived and impacted by AI.
AI and machine learning affect democracy both directly and indirectly. They are applied in a multitude of public-facing services of traditional institutions, including predictive policing, banking, and risk assessment in the justice system. Public administration, too, is undergoing a revolution, in which AI increasingly informs national and municipal government services and optimizes the allocation of public resources. Elsewhere, many government processes are becoming ‘AI first, human second’. Likewise, our information landscape is driven by AI algorithms and newsfeeds that affect participation in democracy.
The fields of e-democracy and e-government have traditionally consolidated research on technology in public administration, but examinations of AI in democracy are now also arising in other subfields of political science and in disciplines including media studies, sociology, economics, and the natural sciences. On the one hand, this reflects “the failure of e-democracy” to seriously examine the potentials of technology (Bastick, 2017). On the other, it reflects cross-disciplinary fears, including that we are becoming a “black box society” (Pasquale, 2015), that algorithms produce feedback loops that reinforce social disadvantages (O’Neil, 2016), and that technology differentially impacts vulnerable groups (Bastick & Mallet-Garcia, 2022). In politics, there are questions about how AI might fight government corruption, and about the implications of its deployment by citizens or by governments (Köbis, Starke, & Rahwan, 2022). Likewise, there are investigations of the effects of voter microtargeting (Matthes, Hirsch, Stubenvoll, et al., 2022); of the effects of these technologies on populism (Bergmann, 2018) and on political communication (Lei & Vargo, 2020); and concerns that AI-generated disinformation may affect unconscious behaviours and, ultimately, decide elections (Bastick, 2021).
This digitization of democratic processes calls into question the potential and legitimacy of AI for government (de Fine Licht & de Fine Licht, 2020; Starke & Lünich, 2020). It demands continual examination of algorithmic fairness and transparency, and of how we should integrate ethical principles into AI (Kieslich, Keller, & Starke, 2022; König, Wurster, & Siewert, 2022).
This collection aims to curate research on AI’s impact on democracy. We invite contributions on current and alternative applications of AI and their impacts on society, including regulation. Topics may approach this theme from a range of perspectives, including but not limited to:
- AI in e-democracy or e-government
- Technology in the evolution of political power
- Digital automation and inequality
- Possibilities and challenges for AI and democracy
- AI and misinformation
- Technological determinism
- Narratives of techno-optimism and techno-skepticism
Submissions using qualitative and quantitative methods are welcomed, as are reflections on methodology related to studying AI and democracy.
Keywords: AI, democracy, artificial intelligence, machine learning, AI governance, regulation, big data, e-democracy, e-government, equality, inclusion, AI accountability, AI fairness, technological determinism, microtargeting, voting behavior, misinformation, techno-scepticism, techno-optimism
Any questions about this Collection? Please email research@f1000.com.
This collection is associated with the Political Communications and Artificial Intelligence and Machine Learning Gateways.
References
Bastick, Zach. "Digital limits of government: The failure of e-democracy." In Beyond Bureaucracy, pp. 3-14. Springer, 2017.
Bastick, Zach, and Marie Mallet-Garcia. "Double lockdown: The effects of digital exclusion on undocumented immigrants during the COVID-19 pandemic." New Media & Society 24, no. 2 (2022): 365-383.
Bastick, Zach. "Would you notice if fake news changed your behavior? An experiment on the unconscious effects of disinformation."
Computers in Human Behavior 116 (2021): 106633.
Bergmann, Eirikur. Conspiracy & populism: The politics of misinformation. Springer International Publishing, 2018.
de Fine Licht, Karl, and Jenny de Fine Licht. "Artificial intelligence, transparency, and public decision-making." AI and Society 35, no. 4 (2020): 917-926.
Kieslich, Kimon, Birte Keller, and Christopher Starke. "Artificial intelligence ethics by design. Evaluating public perception on the importance of ethical design principles of artificial intelligence." Big Data & Society 9, no. 1 (2022).
Köbis, Nils, Christopher Starke, and Iyad Rahwan. "The promise and perils of using artificial intelligence to fight corruption." Nature Machine Intelligence 4, no. 5 (2022).
König, Pascal, Stefan Wurster, and Markus Siewert. "Consumers are willing to pay a price for explainable, but not for green AI. Evidence from a choice-based conjoint analysis." Big Data & Society 9, no. 1 (2022).
Matthes, Jörg, Melanie Hirsch, Marlis Stubenvoll, Alice Binder, Sanne Kruikemeier, Sophie Lecheler, and Lukas Otto. "Understanding the democratic role of perceived online political micro-targeting: Longitudinal effects on trust in democracy and political interest." Journal of Information Technology & Politics (2022).
O’Neil, Cathy. Weapons of Math Destruction: How big data increases inequality and threatens democracy. Broadway Books, 2016.
Pasquale, Frank. The Black Box Society: The secret algorithms that control money and information. Harvard University Press, 2015.
Starke, Christopher, and Marco Lünich. "Artificial intelligence for political decision-making in the European Union: Effects on citizens’ perceptions of input, throughput, and output legitimacy." Data & Policy 2 (2020).