Introduction.
When at their best, democracies are able to transform diverse beliefs into effective real-world policies. This ideal is achievable when citizens are well-informed, engaged, and open [1]. However, these favourable democratic conditions are increasingly undermined by the rise of misinformation [2] and polarisation [3], fuelled by the influence of AI. In this article, we explore how AI may yet solve the very problems it’s created and, in doing so, modernise our democracies to meet the demands of the 21st century. Before we explore AI’s potential contributions, let’s first outline and critique an existing democratic structure that will be relevant to our proposal.
 

Citizens' Assemblies: A Democratic Success Story.
A citizens' assembly involves a small, randomly selected group of citizens tasked with formulating policy recommendations on specific issues. Following expert briefings and active group debate, these recommendations are distilled into policy proposals, which are submitted to parliament. Members of parliament then use their expertise to scrutinise these proposals before potentially enacting them as policy.

Citizens' assemblies engage everyday citizens in the political process beyond the infrequent, and potentially disengaged, act of voting. This fosters a deeper comprehension of legislation, bolsters government transparency, and cultivates a sense of democratic self-efficacy [4]. Moreover, the compromise required for a randomly selected group of individuals to reach a consensus provides an intuitive antidote to political polarisation. However, these advantages are inherently limited to the small group of citizens who take part in the assembly. Can we not extend a citizens' assembly to encompass the whole electorate instead of just a small subset?
 

The Challenge of Scale.
There are two critical factors that impede our ability to extend citizens' assemblies:

  1. Information Overload: Humans are wired for small-group discussions; therefore, we’re unable to digest the contributions of the millions of interlocutors that inhabit our large-scale societies.
  2. Insufficient Expertise: Experts lack the bandwidth to directly guide each member of the electorate to write informed and effective policies.

In the following sections, we will explore how AI can address these challenges and augment the assembly via a digital platform open to the entire electorate. But what would such a platform look like?
 

Addressing Information Overload with AI.
Imagine a platform akin to a political version of Wikipedia [5], where individuals have the opportunity not only to read but also to support, debate, and edit policy proposals. Within this vast digital landscape, citizens would likely encounter a barrage of content, making it difficult to navigate and participate effectively. Here, AI emerges as a natural solution for filtering this wealth of information.

To better understand the role of AI in this digital ecosystem, we can draw a parallel with how users navigate social media platforms like Twitter. Instead of manually sifting through an endless stream of tweets, Twitter users are presented with an AI-curated feed that highlights the most relevant and engaging content. Similarly, AI could be employed to distil the digital citizens' assembly into a digestible feed of chat rooms, petitions, and policy proposals. Notably, in contrast to Twitter, the underlying AI algorithms need not maximise engagement for profit; they would ideally be tuned to foster democratic norms.
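
To make this concrete, below is a minimal, hypothetical sketch of what such a curation objective might look like. The proposal fields, weights, and scoring function are illustrative assumptions rather than a specification; a real system would need to learn and justify these choices under public oversight.

```python
from dataclasses import dataclass

# Illustrative sketch: rank proposals for a citizen's feed by relevance and
# deliberative quality rather than by raw engagement alone.
@dataclass
class Proposal:
    title: str
    topics: set               # policy areas the proposal touches on
    endorsements: int         # citizens who currently support it
    viewpoint_spread: float   # 0-1: how evenly support spans the political spectrum

def feed_score(p: Proposal, user_topics: set) -> float:
    relevance = len(p.topics & user_topics) / max(len(user_topics), 1)
    salience = min(p.endorsements / 1000, 1.0)   # cap so popularity alone can't dominate
    return 0.5 * relevance + 0.2 * salience + 0.3 * p.viewpoint_spread

proposals = [
    Proposal("Expand rural bus routes", {"transport", "rural"}, 850, 0.7),
    Proposal("Ban single-use plastics", {"environment"}, 4200, 0.3),
]
user_topics = {"transport", "housing"}
for p in sorted(proposals, key=lambda p: feed_score(p, user_topics), reverse=True):
    print(f"{feed_score(p, user_topics):.2f}  {p.title}")
```

The detail that matters is not the particular weights but that the ranking objective becomes an explicit, auditable policy choice rather than an engagement-maximising black box.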
 

Addressing Insufficient Expertise with AI.
The concept of such a platform appears promising, especially when framed as a direct political analogue to Wikipedia. However, crafting legislation is a significantly more complex process than constructing an encyclopaedia [6]. This complexity poses a challenge, given that most users lack the expert guidance required to craft effective policies. Fortunately, AI is once again well suited to help, assisting users in engaging productively with their feed.

Large Language Models* (LLMs), drawing upon an extensive knowledge base of historical legislation, can guide citizens in writing policy proposals. Users can initiate this process by providing an informal statement on how an existing issue paper could be improved. In response, the LLM would get to work, generating a series of detailed legislative changes (along with annotations) for the user to review. Once the user is satisfied with the proposed changes, they can choose to publish these informed recommendations to the wider platform.
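
As a rough illustration, the sketch below shows how such a drafting loop might be wired up. The `call_llm` stub, prompt wording, and function names are assumptions made for the example, not a reference implementation.

```python
def call_llm(prompt: str) -> str:
    """Stub standing in for whichever LLM provider the platform would use."""
    return "[model-generated amendments with annotations would appear here]"

def draft_amendments(issue_paper: str, user_suggestion: str) -> str:
    """Turn a citizen's informal suggestion into annotated legislative changes
    that the citizen reviews before choosing to publish them."""
    prompt = (
        "You are assisting a citizen in a digital citizens' assembly.\n\n"
        f"Current issue paper:\n{issue_paper}\n\n"
        f"The citizen's informal suggestion:\n{user_suggestion}\n\n"
        "Propose specific legislative changes, each annotated with its intent "
        "and, where possible, relevant precedent."
    )
    return call_llm(prompt)

draft = draft_amendments(
    issue_paper="Local authorities may licence short-term lets.",
    user_suggestion="Cap the number of short-term lets in housing-stressed areas.",
)
print(draft)  # the citizen reviews this before publishing it to the platform
```

Crucially, the model only drafts; the citizen remains the author who reviews, edits, and decides whether to publish.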

 

Conclusion.
Currently, AI, in conjunction with our laissez-faire approach to digital infrastructure, is fuelling a retreat from democratic norms. However, this article has outlined an alternative path for AI's role in our democracy, emphasising its potential to both mend and extend the sclerotic status quo. Specifically, we've proposed ways in which AI can overcome two of the major hurdles in extending the scope of citizens' assemblies.

These changes would hopefully foster a dynamic relationship between citizens and policy-makers, instilling a stronger sense of ownership in the democratic process. However, as a powerful tool, AI can equally be used to silence, amplify, or distort the public voice [7,8]. Whether or not our greatest democratic hopes of a well-assembled electorate are realised ultimately rests in the hands of those who use, create, and oversee this technology.

 

Ethical Concerns Regarding the Implementation.
Despite the potential benefits of our proposal, it is vital to recognise the risks of this ‘Augmented Assembly.’ One such challenge is the need to safeguard against foreign interference without compromising individuals’ privacy. Moreover, the political bias of LLMs could potentially influence individual users’ policy suggestions. Although our intention in this article wasn’t to provide a detailed plan for implementing our proposal, it may be valuable to briefly highlight example measures to tackle each of these challenges.

Upon joining the platform, a rigorous citizen-verification process, coupled with the random allocation of anonymous profiles, offers one potential balance between privacy and security. Additionally, an impartial oversight body with the capacity to probe the objectivity of LLMs, in a manner analogous to ECOA [9] reviews, could help to address political bias. These ethical safeguards are far from comprehensive, but they hopefully demonstrate that while these challenges are formidable, they are not insurmountable.
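
As a loose illustration of the first measure, the sketch below verifies a citizen once and hands back a random pseudonym that is never stored alongside their identity. The electoral-roll check and identifier names are assumptions made for the example.

```python
import secrets
from typing import Optional

verified: set = set()   # hashed national IDs that have already registered

def register(national_id_hash: str, on_electoral_roll: bool) -> Optional[str]:
    """Verify a citizen once, then issue an unlinkable anonymous profile ID."""
    if not on_electoral_roll or national_id_hash in verified:
        return None                   # reject non-citizens and duplicate sign-ups
    verified.add(national_id_hash)
    return secrets.token_hex(16)      # random pseudonym, never linked to the ID hash
```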

*Footnote: For more information on the consequences of LLMs in scalable deliberation, please see “Opportunities and Risks of LLMs for Scalable Deliberation with Polis.” [10]

 

References.
[1] Open Democracy: Reinventing Popular Rule for the Twenty-First Century (Chapter 2). Helene Landemore. Princeton University Press (2020)

[2] Social Media and Bullshit. Rasmus Kleis Nielsen. Social Media + Society (2015)

[3] The MAD model of moral contagion: The role of motivation, attention, and design in the spread of moralised content online. William J. Brady et al. Perspectives on Psychological Science (2020)

[4] Jury service and electoral participation: A test of the participation hypothesis. John Gastil et al. The Journal of Politics (2008)

[5] To Thrive, Our Democracy Needs Digital Public Infrastructure. Eli Pariser and Danielle Allen. Politico (2021)

[6] Should we automate democracy? Johannes Himmelreich. Oxford Handbooks Online (2021)

[7] Will AI Make Democracy Obsolete? Theodore Lechterman. Public Ethics (2021)

[8] Political Theory of the Digital Age (Chapter 3). Mathias Risse. Cambridge University Press (2023)

[9] Equal Credit Opportunity Act (ECOA) baseline review procedures. Consumer Financial Protection Bureau (2019)

[10] Opportunities and Risks of LLMs for Scalable Deliberation with Polis. Christopher T. Small et al. arXiv:2306.11932 (2023)
