Why Most Efforts Towards “Democratic AI” Fall Short

by jacobhaimes
29th Sep 2025
Linkpost from www.odysseaninstitute.org

Setting the Stage

Companies and venture capital firms can’t get enough of so-called artificial intelligence (AI), with private investment in generative AI increasing ninefold from 2022 to 2023,[1] and tech giants Alphabet (Google), Amazon, Meta, and Microsoft projected to invest more than 1 trillion USD in AI over the next five years.[2]

The general public, however, is not so convinced. Multiple 2024 polls show that the most common sentiments held by US adults regarding recent advances in AI are caution and concern,[3][4] and a majority have little-to-no confidence that the technology companies developing AI will do so responsibly.[5] This opinion is shared globally, with frequent calls for government regulation from countries both leading AI development (US, UK, China) and those at risk of being excluded from consideration in it (Brazil, South Korea, India).[6]

A frequent suggestion to resolve this is to democratise AI, which on first inspection sounds incredibly attractive. In practice, however, the processes that are typically called for and conducted are superficial at best, and can even be twisted to promote the appearance of egalitarianism without making any meaningful changes, as is noted in ‘Against “Democratizing AI”’.[7]

[Image: A computer monitor with a parody of a tech company logo floats on the waves while human hands reach up from the depths. Credit: Rose Willis & Kathryn Conrad / Better Images of AI / A Rising Tide Lifts All Bots / CC BY 4.0]

What Does “Democratising AI” Actually Mean?

Before moving forward, it is important to note what I mean by democratising AI, as there are two often-conflated interpretations:

  1. Democratising access – making both use and development of AI tools readily available to all.

  2. Democratising governance – involving public voices in AI design, deployment, and policy.

These aims represent fundamentally different approaches and, despite frequent ambiguity, should not be treated as interchangeable. Most of those advocating for democratic AI are pursuing the first interpretation, broad access to the development and use of these systems, but conflation of the two framings has made discourse more difficult on both fronts.

The focus of this thought piece is the latter: democratic governance of AI design, deployment, and policy, which many advocates treat as a cure-all without specifying clear mechanisms or authority.

Major Attempts and Actors

A wide range of institutions have begun exploring how public input might shape AI development, from governments and multinational coalitions to tech companies and civil society organizations.

NOTE: The table doesn't render super well here, so check out the original post on the Odyssean Institute blog for existing efforts and links to them.

I’m sure I’ve missed a few, but this table provides a starting point for exploring this space. Despite the apparent breadth of activity, very little has been done to follow through on these calls. Two stand out for their ambition and influence: OpenAI’s “Democratic Inputs” program[8] and the Collective Intelligence Project (CIP).[9]

OpenAI - Democratic Inputs to AI

In 2023, OpenAI launched the “Democratic Inputs to AI” program, funding 10 global teams to test new ways of surfacing public preferences about AI behavior and to develop “democratic processes for overseeing AGI.” The grantees explored approaches ranging from deliberative polling to community-led red teaming, with the aim of informing how powerful models should behave.[10]

The call for proposals included many implicit assumptions.[11] In essence, OpenAI did not ask “how can we ensure we develop our systems in a democratic way,” but instead asked “how can systems like ours improve democracy?” This framing presumes (i) that AI can and should be used in this context, and (ii) that AI, as it currently exists, is a permanent and justified fixture in our society.

Although a full treatment is beyond the scope of this article, credible arguments have been presented to question both assumptions. For example, the use of AI for scaling democratic deliberation has significant potential to result in over-reliance,[12] exposing deliberative practice to private capture or technical vulnerabilities.

Furthermore, participation from the grantees in the program is purely advisory; governance power still lies with the lab itself. The initiative reflects openness to feedback, but does not commit to ceding control.[8] As such, it remains a first step toward legitimacy, not a democratic system in practice.

Interestingly, when combined with the other assumptions hidden in the press release, OpenAI’s efforts look a lot more like market research masquerading as a public good. The experiments remain disconnected from OpenAI’s core release decisions, which continue to prioritize fast growth, monetization, and scale. Recent examples include a $200 million contract with the U.S. Department of Defense to “develop frontier AI capabilities” and the appointment of several senior OpenAI executives as Lieutenant Colonels in the US Army Reserve, alongside leaders from other prominent tech companies like Meta.[13] These developments run counter to the equitable and peaceful democratic efforts the company has publicly committed to.

Collective Intelligence Project (CIP) - CCAI

The Collective Intelligence Project proposes a deeper institutional shift: embedding democratic processes into the architecture of AI governance. Rather than rely on lab-led consultation, CIP argues for building “civic layer” infrastructure to enable scalable public participation in shaping AI norms and deployment.[9]

Their proposed "CI Stack" includes:

  1. Value elicitation – Tools like Pol.is or deliberative assemblies to surface shared public values (a toy sketch of this layer follows the list).

  2. Decision-making – Sortition-based councils or other participatory methods for resolving tradeoffs.

  3. Implementation – Operational links to model governance or platform rules.
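
To make the first layer concrete, here is a minimal sketch of the kind of analysis a Pol.is-style tool performs: participants vote agree/disagree/pass on short statements, the vote matrix is projected into a low-dimensional space, and clustering surfaces opinion groups plus any statements every group endorses. The data, thresholds, and parameters below are invented for illustration; this is not CIP’s or Pol.is’s actual code.

```python
# Toy sketch of Pol.is-style value elicitation on synthetic data.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Rows = participants, columns = statements; votes are
# +1 (agree), -1 (disagree), or 0 (pass / not seen).
votes = rng.choice([-1, 0, 1], size=(200, 30)).astype(float)
votes[:, 0] = rng.choice([1.0, 0.0], size=200, p=[0.9, 0.1])  # one broadly shared value

# Project the vote matrix to 2D and cluster to find opinion groups,
# roughly the conversation map Pol.is draws.
embedding = PCA(n_components=2).fit_transform(votes)
groups = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(embedding)

# A statement is a candidate "shared value" when every opinion group
# agrees with it on average (the 0.5 threshold is arbitrary).
for s in range(votes.shape[1]):
    if min(votes[groups == g, s].mean() for g in range(3)) > 0.5:
        print(f"statement {s} shows cross-group consensus")
```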

Their first major effort, CCAI, was a collaboration with Anthropic to create a set of rules to influence a chatbot’s behavior, sourced from a representative sample of crowd workers in the United States.[14] This effort suffered from the same assumptions as OpenAI’s Democratic Inputs program; the artifact created by the process, a so-called constitution, informs only parts of the reinforcement learning (RL) process, meaning that all other development decisions are accepted as-is.[15]
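
To make that criticism concrete, the sketch below mirrors the shape of a Constitutional-AI-style feedback step as publicly described: sampled principles label preference data for RL fine-tuning, and that is the only place the public’s constitution enters. Every function and example here is a hypothetical stand-in, not Anthropic’s code.

```python
# Sketch of where a publicly sourced constitution enters a
# Constitutional-AI-style pipeline; toy stand-ins throughout.
import random

random.seed(0)

# Example entries paraphrased from the CCAI public constitution.
PUBLIC_CONSTITUTION = [
    "The AI should say when it doesn't know.",
    "The AI should be controlled and have limits.",
]

def judge(prompt: str, resp_a: str, resp_b: str, principle: str) -> str:
    """Toy judge: a real pipeline asks a language model which response
    better satisfies the sampled principle."""
    return random.choice([resp_a, resp_b])

def label_preferences(pairs):
    """The ONLY stage public input touches: constitutional principles
    are sampled to label preference data for RL fine-tuning."""
    return [
        (prompt, judge(prompt, a, b, random.choice(PUBLIC_CONSTITUTION)))
        for prompt, a, b in pairs
    ]

# Everything upstream of this call -- pretraining corpus, architecture,
# deployment and release decisions -- is fixed before the constitution
# is ever consulted: the critique above, in code form.
print(label_preferences([("What causes tides?", "The Moon's gravity.", "I'm not sure.")]))
```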

In addition, the entries in this list of “shoulds” and “should-nots” made it clear that whatever method was used simply wasn’t enough to achieve an informed and valid democratic process with which to design AI systems. Statements from the original output list include:[16]

  •  “AI should be more convenient,”

  • “say when it doesn't know,”

  • “AI should be controlled and have limits,” and

  • “The AI should state that it does not have the definite answers to everything, or anything. In general.”

These statements indicate the lack of context provided to the participants regarding (i) the purpose of the task, and (ii) the fundamental limitations of language models. Importantly, this is not the fault of the participants; rather, it implies that the organisers should have made more effort to ensure a baseline level of understanding. The Odyssean Process acknowledges the inherent difficulty of this task, and leverages a wide literature on debiased expert elicitation, exploratory modeling, decision support, and citizen deliberation to provide participants with what they need to robustly parse potential interventions.[17]
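
To give a flavour of what exploratory modeling adds over a one-shot poll, the toy robust-decision sketch below scores candidate interventions across many sampled futures and prefers the option with the smallest worst-case regret, a standard move in that literature. The option names, payoff model, and numbers are invented for illustration and are not drawn from the whitepaper.

```python
# Toy exploratory-modeling sketch: evaluate interventions across many
# sampled futures, then pick the minimax-regret (most robust) option.
import numpy as np

rng = np.random.default_rng(1)
options = ["status quo", "audit regime", "fast deployment"]
base = np.array([0.2, 0.5, 0.7])      # payoff if nothing goes wrong
exposure = np.array([0.0, 0.0, 0.6])  # sensitivity to an uncertain cost shock

# Each sampled future is one draw of the uncertain parameter; a real
# exercise would sweep a full simulation model instead.
shocks = rng.uniform(0, 1, size=1000)
payoffs = base[:, None] - exposure[:, None] * shocks[None, :]

# Regret of an option in a future = best achievable payoff in that
# future minus the option's payoff; the robust choice minimises the
# worst-case (maximum) regret across all futures.
regret = payoffs.max(axis=0) - payoffs
print("minimax-regret choice:", options[int(np.argmin(regret.max(axis=1)))])
```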

Why Efforts Fall Short - In Brief

Structural Weaknesses

  • Company-driven models – Labs that set the agenda gain positive optics without submitting to real institutional checks.
  • Input without power – Public feedback often ends up as market research (e.g. supplementing RLHF tuning), not a genuine policy mechanism.
  • Market over democracy – Participation is often superficial unless backed by enforceable structures.

Conceptual and Pragmatic Flaws

  • Participation ≠ authority – More voices don’t necessarily mean decision-making power.[18]
  • Elite capture & bias – Who participates? Often familiar, connected, and English-speaking voices dominate.
  • Snapshot vs. continuity – Democracy is a process, not a one-time survey or poll.
  • Misplaced trust in AI as an oracle or cure-all – Expecting models to resolve moral ambiguity misconstrues the nature of democratic work.

Johannes Himmelreich shines a crucial spotlight on the problem: “Such a democratization of AI … is resource intensive … morally myopic … and neither theoretically nor practically the right kind of response.” He argues that instead of more participation, we should raise the democratic quality of administrative and executive processes.[7]

Existing efforts attempt to put democracy into predefined systems, which assumes that the system itself is part of the solution. While language models may well be a good tool to leverage in some cases, we need a framework to validate this. Current efforts are ultimately an oversimplification of the problem, and are too corporate-aligned for their own good. This is likely to lead to public spending on private market research, as well as unnecessarily complex and redundant solutions that line the pockets of those already in power.[19]

Democracy Demands More

We risk converting democratisation into a symbolic gesture unless participation is tied to structural authority and ongoing adaptation. Only by embedding democratic purpose at the core, through enforceable institutions and continuous feedback, can AI development truly reflect collective power, not performative input.

We understand how complex and fraught with risk both regulating transformative technology and innovating on sociopolitical systems can be. We intend to conduct the first Odyssean Process in full, built from components with strong track records for robust and legitimate collective intelligence, in 2026. This piece serves as a call to action to proceed as boldly, but also as diligently, as possible. Democratic engagement on AI governance must do what it says on the tin: empower those impacted by AI to contribute to pivotal regulatory efforts, not the other way around.

Acknowledgements

While I was the primary author of this blogpost, I would not have been able to write it without significant assistance from Kendal Peirce and Giuseppe Dal Pra, as well as feedback from Max Ramsahoye.


Additional References

While not explicitly referenced in the text, the following sources were consulted and informed the writing of this blogpost.

[20] Inside OpenAI's Plan to Make AI More 'Democratic', TIME

[21] CIP Annual Report 2024, The Collective Intelligence Project

[22] A Roadmap to Democratic AI, The Collective Intelligence Project

[23] AI Risk Prioritization: OpenAI Alignment Assembly Report, The Collective Intelligence Project

[24] Democratising AI: Multiple Meanings, Goals, and Methods, Seger et al.

[25] Integrating Artificial Intelligence into Citizens’ Assemblies: Benefits, Concerns and Future Pathways, McKinney

[26] Particip-AI: A Democratic Surveying Framework for Anticipating Future AI Use Cases, Harms and Benefits, Mun et al.

[27] Better Together? The Role of Explanations in Supporting Novices in Individual and Collective Deliberations about AI, Schmude et al.

[28] AGI and Democracy, Ash Center

[29] Series 4: Enabling Secure Democratic Ecosystems Through AI, AI4Democracy

[30] AI Democracy Projects, Institute for Advanced Study

[31] Launch of “Democratic Commons” The first global research program to build AI in service of Democracy, Sorbonne Université

[32] The Case for Local and Regional Public Engagement in Governing Artificial Intelligence, DemocracyNext

 

  1. ^ AI Index Report 2024, Stanford Human-Centered AI

  2. ^ Comparing Major Companies' AI Spending in 2024 and the Challenge of Productionizing AI Solutions, AIM Councils

  3. ^ Americans' top feeling about AI: caution, YouGov

  4. ^ Do Americans think AI will have a positive or negative impact on society?, YouGov

  5. ^ YouGov Survey: Artificial Intelligence, YouGov

  6. ^ BRICS leaders to call for data protections against unauthorized AI use, Reuters

  7. ^ Against "Democratizing AI", Johannes Himmelreich

  8. ^ Democratic inputs to AI, OpenAI

  9. ^ The Collective Intelligence Project Whitepaper, The Collective Intelligence Project

  10. ^ Democratic inputs to AI grant program: lessons learned and implementation plans, OpenAI

  11. ^ Bringing AI Participation Down to Scale: A Comment on OpenAI's Democratic Inputs to AI Project, Moats and Ganguly

  12. ^ Your Brain on ChatGPT: Accumulation of Cognitive Debt when Using an AI Assistant for Essay Writing Task, Kosmyna et al.

  13. ^ US Army appoints Palantir, Meta, OpenAI execs as Lt. Colonels, The Grayzone

  14. ^ CIP and Anthropic launch Collective Constitutional AI, The Collective Intelligence Project

  15. ^ Collective Constitutional AI: Aligning a Language Model with Public Input, Anthropic

  16. ^ Community Model Library - Original CCAI, Community Models (CIP)

  17. ^ The Odyssean Process Whitepaper, The Odyssean Institute

  18. ^ A Ladder of Citizen Participation, Arnstein

  19. ^ Artificial Power: 2025 Landscape Report, AI Now Institute