Voting Assistants

by Wiona, 12 min read, 6th May 2021, 5 comments


World Optimization


This post discusses voting assistants - computer programs that help voters make choices - and gives some reasons why they are a likely development in the future of democracy. It describes current voting advice applications and concludes with notes on how to positively shape their future development.


Our future as a human society is influenced overwhelmingly by the way we collectively make decisions. Both decisions that directly impact our quality of life and those that will increase the chances of survival of future generations are shaped by our political landscapes, voting rules, and ultimately every voter's level of information and interest in the process. It is not enough to make scientific progress to increase our chances of survival. This progress must also be translated into actions, laws, and institutions. 

The design of the voting process is critical for decision-making in democratic countries. Given the same starting conditions, vastly different results can be obtained if votes are counted differently or voters are asked different questions. This means we can strongly influence outcomes by shaping processes. This influence could be used to counteract factors that make democracy particularly inefficient, such as voters being deliberately misinformed or not having the capability to understand all the issues they vote on. Voting assistants could help with this and also increase voter turnout.

These programs, which support voters in making well-informed and thoroughly considered decisions, are already used by many voters, mostly in European democracies, and will become more powerful with more data and better recommendation algorithms. A vision for how they could affect the future of our political systems is known under the term Augmented Democracy [4].

While there are many concerns to be aware of, these assistants have the advantage of protecting human autonomy to a certain degree, as they still allow humans to cast binding votes themselves, as opposed to uncontrolled "welfare optimization" by AI rulers. At the same time, they let us profit from the superior reasoning capabilities of computers and could be made more transparent than private AI advisors used by individual politicians and parties.

We will discuss autonomy, transparency, and fairness issues and give some ideas on how we could positively shape the development of voting assistants. There are many open questions, and this post can be seen as a starting point to think more about the interaction of technological developments and democratic processes.

Voting Assistants

Definition: In this post, a voting assistant is any computer-based system that helps humans choose how to vote.

The voting process passes information on people’s preferences from individuals to the political system. There is a strong communication problem though: voters do not know what the best choice is given their preferences. Even if they make the best choice, this choice only communicates crude information on what they wanted to express (e.g. it might be unclear which part of a party program convinced them). This imperfect communication leads to suboptimal decisions.

In direct democratic systems, the transmission from known preferences to elicited preferences tends to be more accurate than in indirect democracies. On the other hand, the known preferences are further from the real preferences: citizens have to vote on many topics and therefore have less time to consider each one. This leads to an overall low accuracy of expression. Voting assistants could help solve this issue.

A far-future story

Imagine election day is coming up. But there are no posters in the city or debates on television. Instead, you open your digital voting assistant, which has been with you for the past few years. It has data on your interests, desires, socio-economic status, perhaps even the change in your dopamine levels in response to a set of stimuli. It knows you well (which is normal, as people nowadays have AI assistants for almost every part of their lives). It awaits you with a new set of questions. These are not complicated questions about whether your country should be part of a union of countries or who should become president for the next 4 years. Instead, they are questions you can relate to: How satisfied are you with childcare in your neighborhood? Would you rather help local homeless people with impact x or donate to foreign aid with impact y? How strongly do you feel affected by the air quality in your city? There are some familiar questions to realign the assistant's moral compass, built by asking you moral dilemma questions. Since being informed about the exact impact of your decisions in previous elections has motivated you to participate, you have chosen the detailed version. After you have filled in your answers, the assistant gives you an overview of its predictions for suitable decisions. It could, for example, conclude that, given your income, you could afford to pay x more for the same childcare than others in the neighborhood and would find this fair based on your replies to the moral questions. The decisions seem plausible. You confirm them. They are sent out to the central aggregation unit as a detailed anonymized profile of preferences. This unit takes the preferences on various decisions that people submit through their assistants and combines them into a final set of decisions by simple voting rules similar to the ones currently used in elections. The averages of transmitted preferences and the overall decisions are published.
If discrepancies were found, protocols are in place to trigger prompt pen-and-paper voting on the issue.

This is only one (quite extreme) scenario for how voting with assistants might work. The basis for this story is the current state of voting advice assistants. These will be described in the next section. There is a wide spectrum of how much decision-making authority could be transferred from humans to machines, and different flavors are discussed later.

Current Voting Assistants

Currently, the main type of voting assistant in use is the Voting Advice Application (VAA). These applications support voters, mainly in representative democracies, in deciding which candidates or parties to vote for. To do this, they present political statements such as "The EU should set itself higher goals for the reduction of CO2 emissions" (example taken from the application Wahl-O-Mat for the European election). Voters state how much they agree with each statement and can afterward weight which of the statements they find most important.

The voter's replies are then compared to party positions. These are collected either by directly asking parties, which then, for example, conduct internal voting processes to select their answers, or by asking experts to judge party positions based on their programs, actions, and statements. The comparison is done via distance measurements, often Euclidean distances in spaces where each statement is represented by one dimension.
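As a minimal sketch of this matching step, the following Python snippet ranks parties by (optionally importance-weighted) Euclidean distance between a voter's answers and party positions. The party names, answer scale (-1 = disagree, 0 = neutral, 1 = agree), and positions are purely hypothetical, and real VAAs differ in many details (question sets, weighting schemes, normalization).

```python
import math

def match_parties(voter_answers, party_positions, weights=None):
    """Rank parties by weighted Euclidean distance to a voter's answers.

    voter_answers: list of floats, one per statement.
    party_positions: dict mapping party name -> list of floats of the same length.
    weights: optional per-statement importance weights (defaults to 1.0 each).
    Returns party names sorted from closest match to farthest.
    """
    if weights is None:
        weights = [1.0] * len(voter_answers)
    distances = {}
    for party, positions in party_positions.items():
        distances[party] = math.sqrt(sum(
            w * (v - p) ** 2
            for w, v, p in zip(weights, voter_answers, positions)
        ))
    return sorted(distances, key=distances.get)

# Hypothetical positions on three statements:
parties = {
    "Party A": [1, 1, -1],
    "Party B": [-1, 0, 1],
    "Party C": [0, -1, -1],
}
print(match_parties([1, 0, -1], parties))  # Party A is the closest match
```

Letting voters weight statements simply stretches the corresponding dimensions, so strongly weighted disagreements push a party further away.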

VAAs are very popular. In the 2019 European election in Germany, the Wahl-O-Mat was used 9.8 million times, which would correspond to 20% of the voters if every use corresponded to a separate voter. By a similar calculation, over 40% of all voters in the 2012 Dutch election used a VAA. Moreover, some form of VAA exists in almost all EU27 countries and some beyond Europe [2].

Development trends

We discuss trends in voting assistant development, as they can inform our discussion on how they might evolve in the future.

Although distance measurements between parties and user preferences are the most commonly used technique at the moment, other strategies are also used to match voters with parties.

  • Machine learning approaches have been developed to reduce the assumptions made about the space in which distances are measured [6]. With learning algorithms, prediction accuracy can also be improved with fewer questions.
  • Social system approaches: here, voters are matched with the party that other voters with the most similar views would vote for. Often a restriction is applied so that only voters who state that they are very informed about and interested in politics are taken into account [7].
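The social-system approach can be sketched as a nearest-neighbour recommendation: find the k self-declared informed voters whose answers are most similar, and suggest the party most of them intend to vote for. The data below (answer vectors, party names, informed flags) is invented for illustration; production systems described in [7] are considerably more sophisticated.

```python
from collections import Counter

def social_match(voter_answers, other_voters, k=3):
    """Recommend the party that the k most similar 'informed' voters
    intend to vote for (majority among nearest neighbours).

    other_voters: list of (answers, intended_party, is_informed) tuples.
    Only voters who declare themselves politically informed are used.
    """
    informed = [(a, p) for a, p, inf in other_voters if inf]
    # Sort by squared Euclidean distance to the new voter's answers.
    informed.sort(key=lambda ap: sum((x - y) ** 2
                                     for x, y in zip(voter_answers, ap[0])))
    nearest = [party for _, party in informed[:k]]
    return Counter(nearest).most_common(1)[0][0]

others = [
    ([1, 1, -1], "Party A", True),
    ([1, 0, -1], "Party A", True),
    ([-1, -1, 1], "Party B", True),
    ([1, 0, -1], "Party B", False),  # ignored: not self-declared informed
]
print(social_match([1, 0, -1], others, k=2))  # -> "Party A"
```

Note that this matches voters to voters rather than voters to party programs, so it inherits whatever biases exist in the pool of informed users.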

There is some discussion of the future of VAAs in the academic literature. Garcia et al. wonder "whether democracy would still be so 'unthinkable' without political parties" and mention a decline in party identification as a possible reason for a shift towards more direct democracy [8].

A vision that has received particular public recognition is "Augmented Democracy". It was introduced in a TED talk by César Hidalgo, who describes voting assistants as "digital twins" that learn about users' preferences and then take part in a parliament that makes political decisions [4]. His conception is very similar to the far-future story above.

How might these trends continue in the longer-term future?

We suggest looking at developments over a long time scale as moving along a spectrum of directness and of transfer of decision-making from humans to machines. At the moment, our political systems are mostly representative rather than direct. Voting assistants have little decision-making power (they have no direct power but rather influence humans). They still require humans to form opinions on political statements that can be rather complex. Therefore, the quality of voting decisions made through these assistants depends on how well-informed, rational, and invested voters are.

The far-future story presents a vision of a very direct system with human decision-making guided by machines. If human autonomy is lost beyond this, we would arrive at AI welfare aggregation. Between today and this future extreme, there are many other states we could pass through or choose to remain in, for example, if we wish to preserve more human autonomy.

Increased directness is currently not strongly supported because of concerns regarding voter rationality and information. To avoid voters being overwhelmed by the hard decisions they would have to make in more direct systems, voting assistants would probably need to be programmed to ask new types of questions, breaking larger decisions down into small parts. They could, for example, learn about a voter's moral positions through moral dilemmas similar to the Moral Machine experiment [9]. With simple questions, they could find out what a voter considers a fair division of goods, then predict how different welfare state models affect the division of goods in society and suggest voting accordingly. Initially, human-made analysis could be part of these systems; later, computers could generate all parts of the system. This, of course, requires better world models than AI systems currently have, on which the assistants can base the breakdowns. Voting assistants might also base their recommendations on educational aspirations, income, freedoms one wishes to have, stress levels, emotional response to nature in the neighborhood, the personal priority of health, concern for future generations, and many more factors.

This means that more directness and more power allocation to machines would probably go hand in hand. The diagram below illustrates the space of developments.

Figure: Dimensions of the design space of technology-aided voting systems. Current voting advice assistants are used in representative democracies (low directness). Increasing directness without transferring more decision power to machines leads into the area above the diagonal, in which humans have to answer many detailed questions. Combinations of directness and machine power on the diagonal seem most desirable.


While increased use of voting assistants seems to be a natural continuation of current developments, a shift towards direct democracy cannot yet be seen. Over time, however, we can expect more popular support for direct democracy if the major drawback of voters not being well-informed about decisions is addressed by voting assistants.

How likely is such a development?

Arguments for why such a development seems likely have been mentioned in the text above. We briefly summarize them:

  1. It would be in line with technological developments that transfer decision-making authority from humans to computer programs in other areas of life.
  2. Current VAAs are a step in a similar direction. They could be developed further to become more impactful voting assistants and some trends in this direction can already be observed.

Arguments that such a development is not likely:

  1. Other scenarios might be more stable: problems faced by democracy in the future, such as super-persuaders, are not eliminated by voting assistants. This could create pressure to take humans completely out of the loop. Autocracies might also profit more from technological progress than democracies, making stable autocracy scenarios more likely and an important global role for voting assistants less likely.
  2. Insurmountable problems with fairness and AI control: if we cannot solve current problems surrounding algorithmic fairness and AI control, letting AI influence our voting systems might not be wise. This could cause humans to take measures against voting assistant use, which would make them less likely.

Apart from voting assistants seeming likely, we can also be interested in them because of their desirable properties. Given that strong forces are pulling us towards technological democracy trajectories, voting assistants could be a realization of these trajectories we should actively pursue. This cannot be said without reservations as many issues remain to be resolved, three of which we will now elaborate on.

Autonomy, Transparency, Fairness

Many of the concerns we currently have about other AI systems also apply to voting assistants. 


Autonomy

A transfer of decision-making from humans to machines can result in a decrease in human autonomy. This decrease can be justified by machines making better decisions in the interest of humans. Moreover, offloading less important decisions to machines would free up human capacities for more important issues. Autonomy can also be preserved by mechanisms such as checking every decision or giving humans veto rights.

One aspect of autonomy reduction is particularly impactful in the case of voting assistants: if voting systems change in response to voting assistants being used, it might at some point become infeasible to participate in the voting process without using an assistant. 


Transparency

Transparency is vital to democratic processes. Current machine learning techniques are known for often being opaque, but human decision makers' reasoning is not always transparent either. With advances in AI transparency, we might hope to understand algorithmic decision-making processes better than those taking place inside human minds.


Fairness

We differentiate between three notions of fairness:

I. Algorithmic Fairness
Algorithmic fairness means that the algorithm underlying the voting assistant does not discriminate against political parties or policies based on factors that should not be considered relevant for the decision. For example, policies that negatively affect areas in which people tend to provide less data to voting assistants should not be favored. This problem is connected to provider fairness in recommender systems, for example in online shops or on media platforms. Algorithmic fairness is already very relevant in today's voting assistants, where the structure of questionnaires and of the evaluation can affect a party's chances.

Voting assistants would also not be fair if they were able to learn the preferences of some voters better than others (for example, due to underrepresentation in training data). This makes people's preferences count differently, as some of the suggested votes reflect the true preferences more precisely than the votes suggested to the disadvantaged users.

Since perfect fairness can likely not be achieved, the most important fairness criteria for voting assistants will have to be agreed upon by the population of the countries using the assistants with the guidance of experts. This necessitates a discussion on which factors should be allowed to influence decisions, a question currently hidden by the opaqueness of the human decision-making process.

II. Accessibility Fairness
In the extreme case of performance unfairness, the voting assistant is not accessible to some voters at all because of usage costs, including access to electronic devices and an internet connection. This could be prevented to some degree by providing public voting assistant access points. However, such points cannot support the regular gathering of data as well as personal devices can, again resulting in lower performance.

III. System Fairness
System fairness relates to how fair the democratic system truly is in practice. This also involves how people are questioned and whether that introduces biases or can be used to de-bias the voting process. Humans have many unconscious biases that also affect voting decisions (for example, candidates' appearance influences the results [11]). De-biasing would mean analyzing the collected data for decisions that are not fully accounted for by aspects the user has chosen to incorporate but instead have an unconscious background. Here it becomes especially evident how unclear the definition of fairness is. For example: would de-biasing also include affirmative action? And if yes, how much? Should de-biasing always take place, or does protecting human autonomy mean allowing people to keep their biases?

Whether and how these issues are resolved will be shaped by the environment in which they are developed. We conclude by discussing what this environment should look like in the next section.

How can we make sure voting assistant development is done well?

Set high standards: Voting assistants could potentially be great for our democracies, but only if they fulfill certain conditions. The European Consortium for Political Research writes in the Lausanne Declaration that VAAs should be "open, transparent, impartial and methodologically sound" [10]. They call for the funding of developers to be made transparent and for algorithms to be documented. To hold developers, which currently include entrepreneurs, universities, NGOs, and government-affiliated organizations [8], accountable, much more detailed criteria would have to be agreed on and publicly enforced. The German Wahl-O-Mat has already been held legally accountable for not guaranteeing equal opportunity for all parties, and it should be made clear how this would also be possible for other VAA providers not associated with the government.

Hidalgo suggests a marketplace model for developers [4]. Marketplaces would have to be built in such a way that access fairness is fulfilled and developers are incentivized to build transparent models. Approaches to avoid a race to the bottom dynamic which are being discussed in the context of AI safety could be applied here.

Some other properties we might want advanced technological voting systems to fulfill:

  • Awareness:  Whenever a voter makes decisions or takes actions that influence the voting outcome, they should be aware of that.
  • Rationality: A voter should have a reasonable ability to foresee what her decision leads to (e.g. saying “I love cats” should not favor cat-banning laws). Even if a voting system does not rule out strategic voting (as shown in some impossibility theorems), it could fulfill rationality: no strategic voting would mean that there is no better way to promote one’s view than expressing it truthfully, while rationality means that expressing it truthfully is an effective way to promote one’s view.
  • Universal access: Every voter gets the same chance to express their views. The voices of those who provide more data or are more eager to express their views are not amplified without good reason.

Find common ground: When it comes to issues such as de-biasing, it is important that we collectively agree on what purposes we want our voting systems to fulfill. This could happen through public discussion that can inform constitutional specialists. We will want to build a process that is stable enough for governance but can nevertheless be adapted in the future.


Voting assistants are only a small part of a possible future democracy trajectory shaped by technology but their further development and widespread adoption would be a natural continuation of tendencies we observe at the moment. We should therefore keep them in mind when researching humanity’s longer-term future. We now have the opportunity to create the right starting positions for them to be implemented in safe and fair ways by doing research, especially on the structure of human preferences and collective decision-making.


This blog post was written during a summer research fellowship at FHI. I thank everyone involved for this opportunity and especially the organizers Rose Hadshar and Eliana Lorch. I also thank my mentor Ondrej Bajgar for all his input on the topic and post, as well as his outstanding support during the fellowship.


1 Bundeszentrale für politische Bildung, Wahl-O-Mat European election 2019, last accessed: 24.09.2020

2 Garzia, D. and Marschall, S. (2012) ‘Voting Advice Applications under review: the state of research’, Int. J. Electronic Governance, Vol. 5, Nos. 3/4, pp.203–222.

3 Nick Bostrom, Allan Dafoe, and Carrick Flynn. Public Policy and Superintelligent AI: A Vector Field Approach (2018), version 4.3

4 Hidalgo, C., last accessed: 24.09.2020

5, last accessed: 24.09.2020

6 Guillermo Romero Moreno, Javier Padilla & Enrique Chueca (2020) Learning VAA: A new method for matching users to parties in voting advice applications, Journal of Elections, Public Opinion and Parties, DOI: 10.1080/17457289.2020.1760282

7 I. Katakis, N. Tsapatsoulis, F. Mendez, V. Triga, and C. Djouvas, "Social Voting Advice Applications—Definitions, Challenges, Datasets and Evaluation," in IEEE Transactions on Cybernetics, vol. 44, no. 7, pp. 1039-1052, July 2014, doi: 10.1109/TCYB.2013.2279019.

8 D. Garcia et al., "Indirect Campaigning: Past, Present and Future of Voting Advice Applications", in The Internet and Democracy in Global Perspective: Voters, Candidates, Parties, and Social Movements, edited by Bernard Grofman et al., Springer International Publishing AG, 2014, pp. 25-38

9 Awad, E., Dsouza, S., Kim, R. et al. The Moral Machine experiment. Nature 563, 59–64 (2018).

10 Garzia, Diego, and Stefan Marschall (2014). ‘The Lausanne Declaration on Voting Advice Applications’, in: Diego Garzia and Stefan Marschall (eds.), Matching voters with parties and candidates. Voting advice applications in comparative perspective. Colchester: ECPR Press, S. 227–228. 

11 Ahler, D.J., Citrin, J., Dougal, M.C. et al. Face Value? Experimental Evidence that Candidate Appearance Influences Electoral Choice. Polit Behav 39, 77–102 (2017).

12 Ladner, Fivaz, More than toys? A first assessment of voting advice applications in Switzerland. (2010)

14 Robin Burke. Multisided fairness for recommendation, 2017

15 last accessed: 22.09.20


Comments

Basically: We want voting to be driven by something other than holding politicians accountable for their actions, and we choose ways to evaluate voting that result in it being harder to hold politicians accountable.

I think a well-designed system would still hold politicians accountable since it would consider how likely a candidate is to keep their promises or to change positions when deciding whether a vote for them would be effective at realizing your preferences.

But the post advocates systems that are impartial, not systems that are well-designed. To have a well-designed system you actually have to take stances. You have to treat parties that actually govern differently than ones that don't have any power to fulfill past promises.

I would rather that my AI voting assistant learns my preferences and calculates the most effective way for me to vote (strategically), but then merely explains its reasoning and leaves the final decision up to me.