Sounds interesting.
I don't have time to look at it. I'm responding anyway since you aren't getting other responses.
I wonder if you're not getting other responses because others haven't spent the time to look at it, either. Which raises the big challenge for many projects: getting people interested in using them.
What does this platform offer to users? How, and to whom, will you pitch it?
My only substantive contribution is that I don't think the word "debate" captures what you're going for. By current widespread usage, debate means people arguing in bad faith and trying to score gotchas. Trying to make valid arguments and reach correct conclusions is a different activity.
I've discovered LessWrong is one of the few corners of the internet that takes argument quality seriously as a terminal value. Here, discourse is "endothermic," as Sabien put it.
In most social media comment sections, we share opinions we've never had to defend. I view this as an architectural problem of the Internet. To address it, I'm building a debate platform called Agora, to test whether we can build a corner of the internet where opinions must be defended by default and good reasoning is rewarded.
I don't think our current political predicament is an accident. Neoliberalism has hollowed out our civic institutions while online platforms optimize for engagement over understanding. After reading Wendy Brown's "Undoing the Demos," I've been thinking about how to build grassroots community through civic action and rebuild a broken democracy. I've been particularly influenced by the works of Chantal Mouffe (Agonistic Pluralism), and Toulmin and Walton's argumentation models.
My testable claim is that Agora's structural requirements can do for a general population what LessWrong's culture and Sabien's Basics of Rationalist Discourse do here, without assuming LessWrong's epistemic starting point.
I don't know if this is true. Agora could easily become a worse LessWrong for a politically minded niche, or it could quietly fail to solve the problems it set out to address.
The site is live at debateagora.org. The design decisions below are where I'd value pushback.
Agora is a structured public debate platform where United States residents read and write arguments on legislative proposals and other topics that deserve thoughtful discourse. Citizens engage with real legislation and contested ideas that shape their communities through a meritocratic system in which prestige is earned by logical rigor rather than popularity. The platform is designed to produce the opposite of a social media comment section: slower, deeper, more accountable discourse.
You make a claim, back it with evidence, and explain why the evidence supports your conclusion. An AI coach named Vicara gives you private feedback on your argument, showing you the best version of your argument and the best version of your opponent's. It asks whether your warrant actually follows from your evidence, whether the inferential step is coherent, and whether you engage with the strongest version of the opposing position. I want the site to be a place where anyone can take a question that matters to them, form a structured argument, and enter into dialogue with fellow residents who disagree. All the while, Vicara nudges reasoning from the background, as a substrate rather than a critical-thinking wrapper bolted on top.
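The claim/evidence/warrant structure above follows Toulmin's model, and the first check Vicara-style feedback can make is purely structural: is each component present at all? A minimal sketch (field names are hypothetical illustrations, not Agora's actual schema):

```python
from dataclasses import dataclass

@dataclass
class Argument:
    claim: str     # the position being argued
    evidence: str  # supporting data or citation
    warrant: str   # why the evidence supports the claim

def missing_parts(arg: Argument) -> list[str]:
    """Return the names of any empty Toulmin components."""
    parts = [("claim", arg.claim), ("evidence", arg.evidence),
             ("warrant", arg.warrant)]
    return [name for name, value in parts if not value.strip()]

arg = Argument(claim="The bill reduces emissions",
               evidence="A projection of a 12% reduction",
               warrant="")
print(missing_parts(arg))  # -> ['warrant']
```

The harder questions (does the warrant flow from the evidence? is the inference coherent?) are of course semantic, which is where the LLM comes in.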
A few more design decisions that may interest the community:
Vicara is built on Claude Sonnet 4.6, which has its own biases present in its training. I have no way of verifying the political priors of a closed-source LLM.
Chantal Mouffe's agonistic theory holds that genuine political conflict is irreducible: consensus isn't available and shouldn't be the goal. Agora's design assumes that better discourse can fundamentally shift understanding. I think that's the right bet. I hold it with real uncertainty.
Steel-manning is a scored metric. If the author substantively engages with the opposing argument before publishing, their argument gets a score and visibility boost. I know the model can be tricked by careful rhetoric, but I need users to test the limits of semantic detection of "substantive engagement."
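As a rough illustration of the incentive structure (the threshold, multiplier, and function names here are made up for the sketch, not the platform's real rubric), the visibility boost might look like:

```python
def visibility_weight(base_score: float, steelman_score: float,
                      threshold: float = 0.7, boost: float = 1.25) -> float:
    """Boost an argument's feed weight when its model-judged
    steel-man engagement score (0..1) clears a threshold."""
    if steelman_score >= threshold:
        return base_score * boost
    return base_score

print(visibility_weight(10.0, 0.8))  # -> 12.5 (boosted)
print(visibility_weight(10.0, 0.4))  # -> 10.0 (no boost)
```

The obvious attack surface is the `steelman_score` input itself: if rhetoric can inflate it, the boost rewards exactly the wrong behavior, which is why I want adversarial testing.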
A Blind Read Mode strips tribal signals from the feed. Stance colors, political tradition labels, and author names are all hidden, leaving only the claim + argument text. Does removing these cues truly lead to "neutral" thinking, or is even chasing "neutral" thinking an illusory goal? Does clicking the toggle substantively change how you perceive an argument?
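Mechanically, Blind Read Mode is just field removal before rendering; a minimal sketch (key names are hypothetical, not Agora's actual data model):

```python
# Identity and tribal cues to hide in Blind Read Mode (illustrative names).
TRIBAL_FIELDS = {"author", "stance_color", "tradition_label"}

def blind_read(post: dict) -> dict:
    """Return a copy of the post with tribal cues stripped,
    leaving only the claim and argument text."""
    return {k: v for k, v in post.items() if k not in TRIBAL_FIELDS}

post = {"claim": "X", "argument": "...", "author": "alice",
        "stance_color": "blue", "tradition_label": "progressive"}
print(blind_read(post))  # -> {'claim': 'X', 'argument': '...'}
```

The implementation is trivial; the open question is the empirical one above: whether hiding the cues actually changes how readers weigh the argument.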
Vicara evaluates one argument in isolation. It can't tell whether a cited source is real, credible, or substantively empty. Vicara currently makes no claims about "correctness" or truth, since there is no agreed-upon ground truth to calibrate against. Overall, I believe source reliability is the province of human judgment (though Ad Fontes and Media Bias/Fact Check attempt it with their instruments). I think imposing a rigid epistemic hierarchy (with peer-reviewed work ranked above personal testimony) would be detrimental to an open Agora. If human users engage properly with the cited sources and reply explaining why the author ought to adjust their confidence in a source, the issue is largely resolved. But this is a big "if."
A recent post by habryka on moderation comes to mind here.
I want to keep the walled garden of Agora secure and institute proper moderation methods, which I am fleshing out now. I've instituted a "Contest" button for questioning argument validity and a "Flag" button for blatant misconduct. I would love some guiding insight.
At the moment, I fear Vicara's scoring rubric may be gameable, but I don't have a way to test this until more people use the site in earnest. At scale, Vicara's vulnerabilities become more apparent and the gaming problem gets harder. Can you get a high Vicara score with a weak argument? Can the prestige metrics of Epistemic Weight and Civic Seals be gamed through coordination?
I'm not as well-versed in the Sequences as I'm sure many of you are. But I know the LessWrong community is the strongest one I can lean on to stress-test this site: probe its failure modes, add your own arguments, and question its design decisions. I'm happy to share the GitHub repo with interested readers as well. If there is a missing aspect of rationality that could be implemented, I would love to engage with your critiques. If Vicara can be gamed, I'd love to see how, so I can prevent it.