Running Lightcone Infrastructure, which runs LessWrong. You can reach me at habryka@lesswrong.com
Dialogues are inherently LessWrong Docs because of the simultaneous-editing features, which we found pretty important for making things work.
(I cleaned up the formatting of this post a bit. There were some blockquotes that weren't proper blockquotes, and some lists that weren't proper bullet lists. Feel free to revert.)
Yeah, I think we store theme settings in a cookie. You might just want to manually permit the cookie in the Brave settings (we intentionally don't do it server-side because many users want to have different settings for different devices, so doing it at the cookie level seems like the right call).
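To illustrate the per-device point: a cookie set client-side lives only in that one browser, so each device keeps its own value. Here's a minimal sketch of the idea, assuming a browser context; the cookie name "theme" and its attributes are illustrative placeholders, not our actual implementation.

```typescript
// Sketch of per-device theme storage via a client-side cookie.
// Illustrative only: the cookie name "theme" and attributes are assumptions.
function setThemeCookie(theme: "light" | "dark"): void {
  const oneYear = 60 * 60 * 24 * 365; // max-age is in seconds
  document.cookie = `theme=${theme}; max-age=${oneYear}; path=/; samesite=lax`;
}

// Reads the theme back out of document.cookie, if the browser kept it.
// A browser that blocks the cookie (e.g. strict Brave settings) returns undefined,
// and the site falls back to the default theme.
function getThemeCookie(): string | undefined {
  return document.cookie
    .split("; ")
    .find((entry) => entry.startsWith("theme="))
    ?.split("=")[1];
}
```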
Hmm, my sense is Eliezer very rarely comments, and the people who do comment a lot don't have a ton of hero worship going on (with maybe Wentworth as an exception?). So I don't super believe that hiding usernames would do much about this.
I feel pretty frustrated at how rarely people actually bet or make quantitative predictions about existential risk from AI. E.g., my recent attempt to operationalize a bet with Nate went nowhere. Paul trying to get Eliezer to bet during the MIRI dialogues also went nowhere, or barely anywhere; I think they ended up making some random bet about how long it would take AI to solve an IMO challenge. (Feels pretty weak and unrelated to me. Lame. But huge props to Paul for being so ready to bet; that made me take him a lot more seriously.)
For what it's worth, I would be up for a dialogue or some other context where I can make concrete predictions. I do think it's genuinely hard, since there is a lot of masking of problems going on, and optimization pressure that makes problems harder to spot (both internally in AI systems and institutionally), so asking me to make predictions feels a bit like asking me to predict FTX before it collapsed.
Like, yeah, I expect it to look great until it explodes. Similarly, I expect AI to look pretty great until it explodes. That seems like a core part of the argument for difficulty, to me.
I would nevertheless be happy to try to operationalize some bets. I still expect we'd find lots of domains where we disagree, and I'd be happy to bet on those.
Not crazy. I also think doing things that are a bit more social, where you have ways to recommend (or disrecommend) a post with less anonymity attached, allowing us to propagate that information further, is not crazy, though I am worried about that incentivizing more groupthink and weird social dynamics.
(Note, I don't think it's because it was posted at the wrong time of day. I think it's because the opening doesn't make a clear case for why people should read it.
In my experience posts like this still get a decent amount of attention if they are good, but it takes a lot longer, since they spread more by word of mouth. The initial attention burst on LW is pretty heavily determined by how much the title and opening paragraphs draw people in. I feel kind of sad about that, but also don't have a great alternative to the current HN-style algorithm that still does the other things we need the karma/frontpage-sorting algorithm to do.)
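(For the curious: here's a minimal sketch of the general HN-style idea, where a post's score is its karma divided by a power of its age, so high karma fights time decay. The gravity exponent and age offset below are illustrative assumptions from the classic Hacker News formula, not LessWrong's actual values.)

```typescript
// Sketch of a Hacker News-style time-decay ranking, for illustration only.
// The constants (gravity exponent, age offset) are assumed, not LW's real values.
interface Post {
  title: string;
  karma: number;
  postedAt: Date;
}

// Score decays as the post ages; higher karma counteracts the decay.
function frontpageScore(post: Post, now: Date = new Date()): number {
  const ageHours = (now.getTime() - post.postedAt.getTime()) / 36e5;
  const gravity = 1.8; // typical HN-style exponent (assumed)
  return post.karma / Math.pow(ageHours + 2, gravity);
}

// Frontpage ordering: highest score first. A strong post that starts slow
// keeps losing rank to newer posts, which is why the initial burst matters.
function sortFrontpage(posts: Post[]): Post[] {
  return [...posts].sort((a, b) => frontpageScore(b) - frontpageScore(a));
}
```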
Yep, cross-posted works are eligible!
If something was created in 2022 but not cross-posted, I am also happy to backdate it so it can be included in the review.
Huh, I am surprised by this. I agree this is a thing in lots of the internet, but do you have any examples? I feel like we really still have a culture of pretty extreme openness and taking random ideas seriously (enough that sometimes wild-sounding bad ideas get upvoted too much, because people like being contrarian a bit too much).