LessWrong team member / moderator. I've been a LessWrong organizer since 2011, with roughly equal focus on the cultural, practical and intellectual aspects of the community. My first project was creating the Secular Solstice and helping groups across the world run their own version of it. More recently I've been interested in improving my own epistemic standards and helping others to do so as well.





Yeah. This prompts me to make a brief version of a post I'd had on my TODO list for a while:

"In the 21st century, being quick and competent at 'orienting' is one of the most important skills." 

(in the OODA Loop sense, i.e. observe -> orient -> decide -> act)

We don't know exactly what's coming with AI or other technologies. We can make plans informed by our best guesses, but we should be on the lookout for things that should prompt some kind of strategic orientation. @jacobjacob has helped prioritize noticing things like "LLMs are pretty soon going to affect the strategic landscape; we should be ready to take advantage of the technology and/or respond to a world where other people are doing that."

I like Robert's comment here because it feels skillful at noticing a subtle thing that is happening, and promoting it to strategic attention. The object-level observation seems important and I hope people in the AI landscape get good at this sort of noticing.

It also feels kinda related to the original context of OODA-looping, which was about fighter pilots dogfighting. One of the skills was "get inside of the enemy's OODA loop and disrupt their ability to orient." If this were intentional on OpenAI's part (or part of subconscious strategy), it'd be a kinda clever attempt to disrupt our observation step.


I agree with this overall point, although I think "trade secrets" in the domain of AI can be relevant for people having surprising timelines views that they can't talk about.


Ah yeah, that actually seems like maybe a good format, given that the event in question I'm preparing for is a blogging festival. The trouble is that one of my goals is "make something that makes for an interesting in-person event" (we sorta made our jobs hard by framing an in-person event around blogging). Although something like "get two attendees to run this sort of debate framework beforehand, then have an interviewer/facilitator run a 'takeaways' discussion panel" might be good.

Copying the text here for convenience:

Here's a debate protocol that I'd like to try. Both participants independently write statements of up to 10K words and send them to each other at the same time. (This can be done through an intermediary, to make sure both statements are sent before either is received.) Then they take a day to revise their statements, fixing the uncovered weak points and preemptively attacking the other's weak points, and send them to each other again. This continues for multiple rounds, until both participants feel they have expressed their position well and don't need to revise more, reaching a kind of Nash equilibrium. Then the final revisions of both statements are released to the public, side by side.

Note that in this kind of debate the participants don't try to change each other's mind. They just try to write something that will eventually sway the public. But they know that if they write wrong stuff that the other side can easily disprove, they won't sway the public. So only the best arguments remain, within the size limit.
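The core mechanical piece of the protocol is the intermediary that guarantees simultaneous exchange, plus the revise-until-fixed-point loop. Here's a minimal sketch of that loop in Python; the names (`Intermediary`, `run_debate`, the debater interface) are all my own hypothetical illustrations, not anything from the original proposal:

```python
# Sketch of the simultaneous-exchange debate protocol described above.
# All class/function names here are hypothetical, for illustration only.

class Intermediary:
    """Holds both statements and releases neither until both have arrived."""

    def __init__(self):
        self.inbox = {}

    def submit(self, author, statement):
        self.inbox[author] = statement

    def exchange(self):
        # Only release once both sides have committed their statements,
        # so neither debater can see the other's text before writing.
        assert len(self.inbox) == 2, "waiting for both statements"
        (a, sa), (b, sb) = self.inbox.items()
        self.inbox = {}
        return {a: sb, b: sa}  # each debater receives the other's text


def run_debate(debaters, max_rounds=10):
    """Iterate rounds of simultaneous revision until neither side revises."""
    inter = Intermediary()
    statements = {d.name: d.initial_statement() for d in debaters}
    for _ in range(max_rounds):
        for d in debaters:
            inter.submit(d.name, statements[d.name])
        received = inter.exchange()
        revised = {d.name: d.revise(statements[d.name], received[d.name])
                   for d in debaters}
        if all(revised[d.name] == statements[d.name] for d in debaters):
            break  # fixed point reached: no one wants further revisions
        statements = revised
    return statements  # final statements, published side by side
```

The termination condition is the "Nash equilibrium" from the proposal: the loop stops when neither participant changes their statement given the other's latest version (or after a round cap, since convergence isn't guaranteed).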


New concept for my "qualia-first calibration" app idea that I just crystallized. The following are all the same "type":

1. "this feels 10% likely"

2. "this feels 90% likely"

3. "this feels exciting!"

4. "this feels confusing :("

5. "this is coding related"

6. "this is gaming related"

All of them are a thing you can track: "when I observe this, my predictions turn out to come true N% of the time".

Numerical probabilities are merely a special case (tho they still get additional tooling, since it's easier to graph them and calculate Brier scores).

And then a major goal of the app is to come up with good UI to help you visualize and compare results for the "non-numeric-qualia".

Depending on circumstances, "this feels confusing" might matter way more to your predictions than "this feels 90% likely". (I'm guessing there is some actual conceptual/mathy work that would need doing to build the mature version of this.)


"Can we build a better Public Doublecrux?"

Something I'd like to try at LessOnline is to somehow iterate on the "Public Doublecrux" format.

Public Doublecrux is a more truthseeking-oriented version of Public Debate. (The goal of a debate is to change your opponent's mind or the public's mind. The goal of a doublecrux is more like "work with your partner to figure out if you should change your mind, and vice versa.")

Reasons to want to do _public_ doublecrux include:

  • it helps showcase subtle mental moves that are hard to write down explicitly (i.e. tacit knowledge transfer)
  • there's still something good and exciting about seeing high profile smart people talk about ideas. Having some variant of that format seems good for LessOnline. And having at least 1-2 "doublecruxes" rather than "debates" or "panels" or "interviews" seems good for culture setting.

Historically I think public doublecruxes have had some problems:

  • two people actually trying to change *their own* minds tend to get into idiosyncratic frames that are hard for observers to understand. You're chasing *your* cruxes rather than presenting "generally compelling arguments." This tends to get into the weeds and go down rabbit holes
  • having the audience there makes it a bit more awkward and performative.



With that in mind, here are some ideas:

  • Maybe have the double cruxers in a private room, with videocameras. The talk is broadcast live to other conference-goers, but the actual chat is in a nice cozy room.
  • Have _two_ (or three?) dedicated facilitators. One is in the room with the doublecruxers, focused on helping them steer towards useful questions. (This has been tried before and seems to go well if the facilitator prepares.) The SECOND (and maybe third) facilitator hangs out with the audience outside, focused on tracking "what is the audience confused about?". The audience participates in a live google doc where they're organizing the conversational threads and asking questions.

    (the first facilitator is periodically surreptitiously checking the google doc or chat and sometimes asking the Doublecruxers questions about it)
  • it's possibly worth investing in developing a doublecrux process that's explicitly optimized for public consumption. This might be as simple as having the facilitator periodically ask participants to recap the open threads, what the goal of the current rabbit hole is, etc. But, like, brainstorming and doing "user tests" of it might be worthwhile.


Anyway those are some thoughts for now. Curious if anyone's got takes.


So there's "being honest" and "trying to convince people of things you think are true", and I think those are at least somewhat different projects. I feel like the first is more obviously good than the second.

I would first ask "what's my goal?" (and doublecheck why it's your goal, and whether you're being honest with yourself). Like, "I want to be able to say my true thoughts out loud and have an honest, open relationship with my relatives" is different from "I don't want my relatives to believe false things" (the win condition for the former is about you; for the latter, it's about them). The latter is subtly different from "I want to have presented my best case to them, in a way they'll actually listen to, but then let them make up their own mind."

I'd also note there are additional soft skills you can gain like:

  • feeling safe/nonjudgmental to talk to
  • making it feel safe for people to give up ideology (via living-through-example as someone who is happy without being religious)
  • helping people grieve/orient

Young people (metaphorically or literally) are welcome! 


Are the disagree reacts saying "small icons are good for this reason (enough to override other concerns)" or "I didn't update previously"?
