Hello! I work at Lightcone and like LessWrong :-)


I think a link up top to the questions would help a lot.

this narrative

Is this the narrative of "this person is using this as an excuse not to try more" or "this is a reason not to try more"?

That's plausible. The counter-hope for the markets is that they are less "yay"/"boo" because the review is (hopefully) less "yay"/"boo".

Also, it will be less active in "Recent Discussion" soon; currently there's a bit of a backlog of eligible posts that it's getting triggered for.

I'll expand a little on how I use the concept.

People around me often say things like, "We have a new toy! The norm is to put away all the pieces when you're done playing with it" (or, more subtly, "I propose a norm of putting away all the pieces"), or "the Schelling time is tomorrow at noon", or "now it's common knowledge that I went to the shops [after saying 'I went to the shops']". I often feel a little upset at this use of language. And then it helps me to think of the speech act as narrative syncing. My hackles are still a little raised (at least by the 'norm' one), but I understand more that the narrative syncing is doing something valuable.

I felt a couple of times that this is trying to separate a combined thing into two concepts. But possibly it is in fact two things that feed into each other in a loop. For example, in your Avatar example, I hear Katara as pretty clearly making an epistemic/prediction claim. But she is also inviting me in to believing in and helping Aang. Which makes sense: the fact that (she thinks) Aang is capable of saving the world is pretty relevant to whether I should believe in him.

And similarly, a lot of your post touches on how believing in things makes them more belief-worthy (i.e. probable).

Don't forget Edward Blyth for something in the vicinity of groundwater or on the scent.


Curated. It's nice to see a return to the problems of yore, and I think this is a nice incremental proposal. Bringing in causal counterfactuals seems like a neat trick (with lots of problems, as discussed in the post and the comments), and so does bringing in some bargaining theory.


I have lots of confusions and questions, like

so one general strategy the proposal fits into is “experiment with simpler utility functions (or other goal structures) to figure things out, and rely on corrigibility to make sure that we don’t die in the process of experimenting”

doesn't make sense to me yet, as it seems easy for the utility functions / belief states to all prefer killing humans quickly, even if the humans don't affect the shutdown button exactly. Or the aside on bargaining with non-causally-counterfacting agents. But they're confusions and questions that afford some mulling, which is pretty cool!

Definitely seems tricky with real-money prediction markets, rather than the play money I assumed in the second sentence.

Shouldn't you just not let the players bet in their own markets? And even if they can, assuming the currency in the markets is only useful in the game (even if only for betting), I think it's not that helpful to bet your own market down (as it only gets resolved once you're out of the game).

No firm decisions made yet, but I think it's fairly likely that we'll do an AI and non-AI separation. If, for example, 40 of the top posts were AI, I think we would very likely do a top-25 non-AI posts or something like that.
