Review

Motivation: Modern democratic institutions are detached from those they wish to serve [1]. In small societies, democracy can easily be direct, with all the members of a community gathering to address important issues. As civilisations get larger, mass participation and deliberation become irreconcilable, not least because a parliament can't handle a million-strong crowd. As such, managing large societies demands a concentrated effort from a select group. This relieves ordinary citizens of the burdens and complexities of governance, enabling them to lead their daily lives unencumbered. Yet this decline in public engagement invites concerns about the legitimacy of those in power.

Lately, this sense of institutional distrust has been exposed and inflamed by AI algorithms optimised solely to capture and maintain our focus. Such algorithms often learn to exploit the most reactive aspects of our psyche, including moral outrage and identity threat [2]. In this sense, AI has fuelled political polarisation and the retreat of democratic norms, prompting Harari to assert that "Technology Favors Tyranny" [3]. However, AI may yet play a crucial role in mending and extending democratic society [4]. The very algorithms that fracture and misinform the public can be re-incentivised to guide and engage the electorate in digital citizens' assemblies. To see this, we must first consider how a citizens' assembly traditionally works.

What Is a Citizens' Assembly: A citizens' assembly consists of a small, randomly selected group that engages in deliberation to offer expert-advised recommendations on specific issues. Following group discussion, the recommendations are condensed into an issue paper, which is presented to parliament. Parliamentary representatives consider the issue paper and leverage their expertise to ultimately decide the outcome. Giving people the chance to experiment with policy in a structured environment aids their understanding of the laws that govern them, improves government transparency, and promotes feelings of democratic self-efficacy [5]. Moreover, the compromise required for a randomly selected group of individuals to reach a consensus provides an intuitive antidote to political polarisation and calcification.

How AI Can Augment Assembly: Having explored the conventional workings of a citizens' assembly, we will now examine how AI can guide public inquiry, enabling individuals to make meaningful contributions to their political landscape. At first glance, forming an assembly of the whole electorate might seem impossible, as citizens would be overwhelmed by the sheer amount of content produced in such a large-scale discussion. However, artificial intelligence, being highly scalable, is well-suited to filtering the vast content generated by large assemblies, much as it already does for social networks, and presenting it to each user as a digestible feed of information. Such algorithms could generate a feed of petitions, chatrooms, and issue papers optimised to promote criteria centred on democratic norms. These criteria could include, but are not limited to, tolerance, social connectedness, engagement, respect, factual accuracy, and reflectiveness. By maximising these liberal objectives, AI could bolster the distribution of ideas across the voting population via a feed of democratic opportunities tailored to each individual user.
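
To make this concrete, the sketch below shows one way such a re-incentivised ranking algorithm might work, scoring candidate feed items by a weighted combination of democratic criteria rather than by raw engagement alone. The criterion weights, the FeedItem structure, and the assumption that per-criterion scores come from upstream classifiers are all illustrative, not a concrete specification.

```python
from dataclasses import dataclass, field

# Hypothetical criteria and weights reflecting the democratic norms named
# above; the specific names and values here are illustrative assumptions.
CRITERIA_WEIGHTS = {
    "tolerance": 0.20,
    "social_connectedness": 0.15,
    "engagement": 0.15,
    "respect": 0.15,
    "factual_accuracy": 0.20,
    "reflectiveness": 0.15,
}

@dataclass
class FeedItem:
    """A petition, chatroom, or issue paper that could appear in a user's feed."""
    title: str
    scores: dict = field(default_factory=dict)  # per-criterion scores in [0, 1]

def democratic_score(item: FeedItem) -> float:
    """Weighted sum of democratic-norm scores for a single item."""
    return sum(w * item.scores.get(name, 0.0) for name, w in CRITERIA_WEIGHTS.items())

def rank_feed(candidates: list[FeedItem], top_k: int = 10) -> list[FeedItem]:
    """Select the top items by democratic score instead of raw engagement."""
    return sorted(candidates, key=democratic_score, reverse=True)[:top_k]
```

One appeal of this framing is that a transparent, weighted score makes the feed's objectives auditable, in contrast to engagement-only optimisation.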

Beyond a well-filtered feed, informative deliberation demands nuanced interaction between individuals. Wikipedia is often cited as a prime example of collective deliberation [6], with users collaborating to curate an online encyclopaedia. Drawing inspiration from this model, a political counterpart known as WikiDemocracy aims to facilitate cooperative editing of issue papers, akin to the process of editing Wikipedia articles. WikiDemocracy has been described [7] as a system that puts "drafting and initiating legislation in the hands of citizens instead of representatives or legislative bodies". However, writing legislation is more complex than creating an encyclopaedia, and some scholars are concerned that this complexity could prevent WikiDemocracy from achieving the same success as Wikipedia.

Fortunately, AI is also well-suited to helping in this context. Along with optimising the dissemination of ideas into a feed, AI could guide users to interact productively with that feed. More specifically, by examining historical legislation and user feedback, AI could assist citizens in generating alterations to pre-existing content. For example, a citizen could prompt a GPT with an informal statement on how an issue paper could be improved. The GPT would then generate detailed legislative changes for the citizen to review and refine. Once the citizen is happy with the proposed changes, they can be incorporated into the public sphere. This use of generative AI could narrow the expertise gap that many fear will hold back WikiDemocracy.
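
The sketch below illustrates this human-in-the-loop workflow. The llm_complete function is a placeholder for whatever text-generation API is used, not a real library call, and the prompt wording is purely illustrative.

```python
def llm_complete(prompt: str) -> str:
    """Placeholder for a call to a language model; wire to a real provider."""
    raise NotImplementedError

def draft_amendment(issue_paper: str, informal_request: str) -> str:
    """Turn a citizen's informal suggestion into concrete draft changes."""
    prompt = (
        "You are assisting a citizens' assembly.\n\n"
        f"Current issue paper:\n{issue_paper}\n\n"
        f"A citizen suggests: {informal_request}\n\n"
        "Propose specific, clearly worded amendments to the issue paper."
    )
    return llm_complete(prompt)

def submit_if_approved(draft: str, citizen_approves: bool) -> str | None:
    """Nothing enters the public sphere until the citizen signs off."""
    return draft if citizen_approves else None
```

The key design choice is that submit_if_approved gates every AI-generated draft behind explicit citizen approval, so the model only ever proposes changes and never publishes them directly.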

Conclusion: We have seen how AI could reduce two of the major hurdles to producing a nationwide citizens' assembly: information overload and the expertise gap. Despite the advantages, it is vital to recognise that this 'Augmented Assembly' carries the risk of being overly paternalistic, hijacked, or misused [8]. Democratic infrastructure cannot exist in isolation; it requires close oversight and regulation to faithfully uphold democratic norms. Therefore, while this article illustrates how AI might amplify a more sophisticated public voice, an algorithm on autopilot is no voice of morality [9]. As a powerful tool, AI can equally be used to silence, amplify, or distort the public voice [10]. Whether or not our greatest democratic hopes of a well-assembled electorate are realised ultimately rests in the hands of those who use, create, and oversee this technology.

References:

[1] Open Democracy: Reinventing Popular Rule for the Twenty-First Century (Chapter 2). Hélène Landemore. Princeton University Press (2020)

[2] The MAD model of moral contagion: The role of motivation, attention, and design in the spread of moralized content online. William J. Brady et al. Perspectives on Psychological Science (2020)

[3] Why Technology Favors Tyranny. Yuval Noah Harari. The Atlantic (2018)

[4] Augmented Democracy. César Hidalgo. https://www.peopledemocracy.com/ (2019)

[5] Jury service and electoral participation: A test of the participation hypothesis. John Gastil et al. The Journal of Politics (2008)

[6] To Thrive, Our Democracy Needs Digital Public Infrastructure. Eli Pariser and Danielle Allen. Politico (2021)

[7] Should We Automate Democracy? Johannes Himmelreich. Oxford Handbooks Online (2021)

[8] The Threat of Algocracy: Reality, resistance and accommodation. John Danaher. Philosophy & Technology (2016)

[9] Will AI Make Democracy Obsolete? Theodore Lechterman. Public Ethics (2021)

[10] Political Theory of the Digital Age (Chapter 3). Mathias Risse. Cambridge University Press (2023)

Comments (5):

LessWrong is a forum where AI x-risk is one of the main topics.

Proposing to let AI be in control of governance sounds very risky, and your post basically ignores all the problems.

[anonymous]:

In preparing for the consequences of AI, it is arguably just as important to flesh out the world that we want as the world that we fear. I acknowledge that as "a powerful tool, AI can equally be used to silence, amplify, or distort the public voice"; however, my aim in this piece was to focus on the positive as opposed to the negative case - more on the negative case can be found in the references. It is almost inevitable that AI will shape public discourse (it already does) - my article aims to discuss the ways that this could be done more fruitfully. It is also notable that everything proposed is either human-in-the-loop or human-on-the-loop, and does not put AI directly in control of government, unlike some existing popular proposals [4].

See e.g. Opportunities and Risks of LLMs for Scalable Deliberation with Polis, a recent collaboration between Anthropic and the Computational Democracy Project:

Polis is a platform that leverages machine intelligence to scale up deliberative processes. In this paper, we explore the opportunities and risks associated with applying Large Language Models (LLMs) towards challenges with facilitating, moderating and summarizing the results of Polis engagements. In particular, we demonstrate with pilot experiments using Anthropic's Claude that LLMs can indeed augment human intelligence to help more efficiently run Polis conversations. In particular, we find that summarization capabilities enable categorically new methods with immense promise to empower the public in collective meaning-making exercises. And notably, LLM context limitations have a significant impact on insight and quality of these results.

However, these opportunities come with risks. We discuss some of these risks, as well as principles and techniques for characterizing and mitigating them, and the implications for other deliberative or political systems that may employ LLMs. Finally, we conclude with several open future research directions for augmenting tools like Polis with LLMs.

I'm personally really excited by the potential for collective decision-making (and legitimizing, etc.) processes which are much richer than voting on candidates or proposals, but still scale up to very large groups of people. Starting as a non-binding advisory / elicitation process could facilitate adoption, too!

That said, it's very early days for such ideas - and there are enormous gaps between the first signs of life and a system to which citizens can reasonably entrust nations. Cybersecurity alone is an enormous challenge for any computerized form of democracy, and LLMs add further risks with failure modes nobody really understands yet...

(opinions my own, etc)

[anonymous]:

Thank you very much for sending this paper through. It provides a very detailed exploration of ideas that are closely related to my article. I completely agree with you that moving beyond one-dimensional voter feedback (signing petitions/voting) for some components of political life may be truly transformative. However, I also agree that these systems are not presently feasible or trustworthy at large scale, as they lack both performance and surrounding infrastructure. Personally, I see the performance issue as less consequential. Although hallucinations and overlooked conditions may remain a persistent problem for LLMs, their performance on reasoning tasks is rapidly improving with advancements like chain-of-thought (CoT) prompting and its extensions. As such, we should at the very least start preparing for the next generation of highly competent language models. In my opinion, the trustworthiness and security of digital public infrastructure is likely to remain a thornier problem. However, verifying humans online (as opposed to bots) and monitoring algorithms for adversarial attacks are problems of broad societal concern - and as such will hopefully receive increasing effort and attention in the coming years.

I believe that Betteridge's Law of Headlines applies here.