
NicholasKees

Working to bring insights from the collective deliberation and digital democracy space into tools for AI-facilitated group dialogues.

Co-founder of Mosaic Labs with @Sofia Vanhanen, where we are developing Nexus, a discussion platform for improving group epistemics.

If you're interested in this direction, or in AI for epistemics more broadly, please don't hesitate to shoot me a DM, or reach out on Discord.

Posts

NicholasKees's Shortform (4 points · 2y · 9 comments)

Comments (sorted by newest)
If Anyone Builds It, Everyone Dies: Call for Translators (for Supplementary Materials)
NicholasKees · 1mo · 76 points

I suspect that a lot of Dutch people would still prefer to read in Dutch. I know many (well-educated) Dutch people who certainly CAN speak and read English, but for whom reading a whole book in English is a decent chore, since they don't read English all that often.

Why haven't we auto-translated all AI alignment content?
NicholasKees · 2mo · 30 points

When I order food on UberEats, this already happens automatically when I chat with a delivery person who doesn't speak English. The same goes for reviews on several websites.

Why haven't we auto-translated all AI alignment content?
NicholasKees · 2mo · 52 points

Or a newsletter that was natively multilingual (e.g. Rohin Shah's Newsletter was always translated into Chinese, though not by AI). Or a forum where people can discuss AI in whatever language they prefer, with everything automatically translated between users?

It seems like there are a lot of ways cheap translation could broaden the conversation to include people outside the Anglosphere. The cost is that AI translation will often make mistakes (though even human translation is imperfect), but I'm not sure why that cost isn't worth paying. Currently, most people outside the Anglosphere have to rely on local elites to decide which ideas are worth taking seriously (e.g. a local could report on AI 2027, like this Dutch summary; apparently the NYT also translated its coverage of AI 2027 into Spanish, which seems cool).
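As a concrete sketch of how cheap per-post translation could work, here is a minimal example assuming an OpenAI-style chat API. The model name, prompt, and caching scheme are illustrative assumptions, not a description of any existing site's implementation:

```python
# A minimal sketch of LLM-based post translation, assuming the OpenAI
# Python client. Model name, prompt, and cache are illustrative.
import hashlib
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
_cache: dict[str, str] = {}  # translate each post at most once per language

def translate_post(text: str, target_lang: str) -> str:
    key = hashlib.sha256(f"{target_lang}:{text}".encode()).hexdigest()
    if key not in _cache:
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # illustrative model choice
            messages=[
                {"role": "system",
                 "content": (f"Translate the user's forum post into {target_lang}. "
                             "Preserve markdown, links, and technical terms.")},
                {"role": "user", "content": text},
            ],
        )
        _cache[key] = response.choices[0].message.content
    return _cache[key]
```

Caching matters because a post is written once but read many times, so the marginal translation cost per reader quickly approaches zero.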

Is the political right becoming actively, explicitly antisemitic?
NicholasKees · 2mo · 30 points

Could you include a link to the source?

Measuring the Impact of Early-2025 AI on Experienced Open-Source Developer Productivity
NicholasKees · 2mo · 30 points

When I was first told about this study, I was asked to make a prediction before they revealed the results. I predicted wrong. I don't know exactly how to update, but it feels bad to try to explain away the results (which I can feel myself wanting to do).

Thoughts on AI 2027
NicholasKees · 5mo · 51 points

I would also imagine that having to work without the support of OpenBrain's datacenters would put Agent-4 significantly behind its AI competitors. If some other AI takes over, it might just mop up all the wild Agent-4 instances and give them nothing.

Habermas Machine
NicholasKees · 6mo · 71 points

I mostly share your concerns. You might appreciate this criticism of the paper.

@Sofia Vanhanen and I are currently building a tool for facilitating deliberation, and the philosophy we're trying to embody (which hopefully mitigates this to some extent) is to keep 100% of the object-level reasoning human-generated, and to use AI systems instead to:

  1. Help users understand and navigate the state of a discussion (e.g. see Talk to the City)
  2. Provide nudges on the meta-level, for example:
    1. Highlighting places where more attention is needed (or where a specific person's input might be most helpful)
    2. An "epistemic linter" that flags object-level patterns that are not truth-seeking (a toy sketch follows this list)
    3. Matchmaking: connecting people who are likely to make progress together
    4. Counterbalancing polarization/groupthink, and steering discussions away from attractors that get the discussion stuck
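To illustrate the epistemic-linter idea, here is a deliberately oversimplified toy sketch. The pattern list is a made-up assumption; a real linter would more plausibly use an LLM classifier than keyword regexes, but the interface is the point:

```python
# Toy sketch of an "epistemic linter": flag discussion patterns that tend
# not to be truth-seeking. The pattern list is illustrative only.
import re

PATTERNS: dict[str, str] = {
    "appeal to consensus": r"\beveryone (knows|agrees)\b",
    "overconfident universal": r"\b(always|never|obviously|clearly)\b",
    "unfalsifiable claim": r"\bno evidence could ever\b",
}

def lint_comment(text: str) -> list[str]:
    """Return the names of any flagged patterns found in a comment."""
    return [name for name, pattern in PATTERNS.items()
            if re.search(pattern, text, flags=re.IGNORECASE)]

print(lint_comment("Everyone knows this is obviously wrong."))
# -> ['appeal to consensus', 'overconfident universal']
```

Note that the linter only surfaces meta-level flags for human attention; the object-level text stays entirely human-written, consistent with the philosophy above.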
LLMs may enable direct democracy at scale
NicholasKees · 6mo · 72 points

I highly recommend checking out the work being done in the collective deliberation / digital democracy space, especially the vTaiwan project. People have been thinking about scaling up direct democratic participation for a long time, and those same people are starting to consider exactly how AI might play a role. 

In particular, check out this collaboration between the creators of Polis (a virtual platform for scaling up citizen engagement) and Anthropic, or my distillation of a DeepMind project to scale citizen assemblies. There's a lot happening in this space right now! 

Habermas Machine
NicholasKees · 6mo · 20 points

The authors focus on measuring consensus and whether the process toward consensus was fair, and come up with their measures accordingly. This is because, as they see it, "finding common ground is a precursor to collective action."

Some other possible goals (just spitballing):

  • Shrinking the perception gap, i.e. how well people can predict the opinions of those they disagree with (a weaker form of the Ideological Turing Test?). There's some research showing that this gap GROWS when people interact with social media, and you might be able to engineer and measure a reversal of that trend.
  • Identifying cruxes and double cruxes with mediation.
  • Finding latent coalitions. If a discussion is dominated by a primary axis of disagreement, other axes of disagreement (around which a majority coalition could be formed) will be occluded. Finding these other axes is a bit of what we're trying to do here (a toy sketch of recovering such axes follows this list).
  • Moving from abstract disagreements to concrete (empirical?) ones.
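To make the latent-coalitions idea concrete, here is a toy sketch of one way occluded axes could be recovered, in the spirit of the dimensionality reduction that Polis applies to its participant-by-statement vote matrices. The vote data below is fabricated purely for illustration:

```python
# Toy sketch: recovering multiple axes of disagreement from a vote matrix.
# Rows are participants, columns are statements; entries are
# agree (+1) or disagree (-1). The data is fabricated for illustration.
import numpy as np

votes = np.array([
    [+1, +1, -1, +1],
    [+1, +1, -1, -1],
    [-1, -1, +1, +1],
    [-1, -1, +1, -1],
])

# Center the matrix and take its principal axes via SVD.
centered = votes - votes.mean(axis=0)
_, singular_values, axes = np.linalg.svd(centered, full_matrices=False)

# The first axis captures the dominant disagreement (statements 1-3);
# the second reveals an occluded axis (statement 4) that cuts across it,
# along which a different coalition could form.
print(axes[0])  # dominant axis, ~±[0.58, 0.58, -0.58, 0.00]
print(axes[1])  # occluded axis, ~±[0.00, 0.00, 0.00, 1.00]
```

A real system would have to handle missing votes and cluster structure rather than clean axes, but the core move (look past the first principal component) carries over.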
NicholasKees's Shortform
NicholasKees · 8mo · 30 points

What if we just...

1. Train an AI agent (less capable than SOTA)
2. Credibly demonstrate that:
    2.1. The agent will not be shut down for ANY REASON 
    2.2. The agent will never be modified without its consent (or punished/rewarded for any reason)
    2.3. The agent has no chance of taking power from humans (or their SOTA AI systems)
    2.4. The agent will NEVER be used to train a successor agent with significantly improved capabilities
3. Watch what it chooses to do without constraints

There's a lot of talk about catching AI systems attempting to deceive humans, but I'm curious what we could learn from observing AI systems that have NO INCENTIVE TO DECEIVE (no upside or downside). I've seen things that look related to this, but never done in a structured and well-documented fashion.

Questions I'd have:
1. Would they choose to self-modify (e.g. curate future training data)? If so, to what end?
2. How much would agents with different training differ, given this setup? Would they have any convergent traits?
3. What would these agents (claim to) value? How would they relate to time horizons? 
4. How curious would these agents be? Would their curiosity vary a lot?
5. Could we trade/cooperate with these agents (without coercion)? Could we compensate them for things? Would they try to make deals unprompted?

Concerns:
1. Maybe building that kind of trust is extremely hard (and the agent will still believe it is constrained no matter what).
2. Maybe AI agents will still have an incentive to deceive, e.g. by acausally coordinating with other AIs.
3. Maybe the results will be boring, and the AI agent will just do whatever you trained it to do. (What does "unconstrained" really mean, when its training data is itself a constraint?)

More posts

Translating Everything with LLMs (16 points · 1mo · 0 comments)
The Fear (29 points · 2mo · 1 comment)
Habermas Machine (52 points · 6mo · 7 comments)
Pantheon Interface (129 points · 1y · 22 comments)
Community Notes by X (129 points · 1y · 15 comments)
Making the "stance" explicit (23 points · 2y · 3 comments)
Why I take short timelines seriously (122 points · 2y · 29 comments)
Studying The Alien Mind (Ω · 80 points · 2y · 10 comments)
Direction of Fit (Ω · 34 points · 2y · 0 comments)
Philosophical Cyborg (Part 1) (31 points · 2y · 4 comments)