Ben Pace

I'm an admin of this site; I work full-time on trying to help people on LessWrong refine the art of human rationality.

Longer bio:


AI Alignment Writing Day 2019
Transcript of Eric Weinstein / Peter Thiel Conversation
AI Alignment Writing Day 2018
Share Models, Not Beliefs



When I un-check "Show low karma" it goes down to 3233.

I didn't have "Show Events" checked.


(You can check by seeing the numbers next to the load more button on the all-posts page for 2021.)

Woop! Pretty good results. A few of my +9s aren't in the top 50, but most of them are. And well done to Elephant Seal 2, ranking higher than Elephant Seal 1 did.

I generally like surveys! Here's a silly little survey I did that 100 people filled out, and here's a survey John Wentworth ran on people's technical backgrounds, filled out by 250 people, which I know informs his writing. I think small surveys that directly answer key questions are very cheap and worthwhile.

It's important to do a good job on a survey that you're trying to make the Schelling annual survey for the ~10k people on the site. One user made a mess of it in 2017 and the survey died (link, same link with different comments), and another user also didn't succeed in reviving it in 2020 (link).

I think it'd be a nice-to-have to get an annual survey going, especially if it were run by someone trying to test particular hypotheses. For instance, if it were me, I'd include a bunch of questions on how users use the site, which would help the LW team inform new feature development.

So I think it's fine for you to do a basic demographics-and-beliefs survey, though I think it's a bit much to demand/expect everyone to take it. Calling it a "General Census" is a demand that people actually fill it out, and that's something you have to develop buy-in for. Maybe you get lucky and everyone actually fills it out, but if that doesn't happen, those who did will be unhappy with you for making them spend effort on a stag hunt where they didn't get the stag, and people will also trust you less-than-baseline for such stag hunts in the future.

I don't want to block people from trying things (which is why I didn't try to in our brief PMs, and shared the questions I'd gathered with you), but nor am I freely endorsing any user who wants to run a survey that takes up hundreds of hours of LW users' time.

I'm not certain, but I'm fairly confident I follow the structure of the argument and how it fits into the conversation. 

I don't mean to imply I achieved mastery myself from reading the passage; I'm saying that the writer seems to me (from this and other instances) to have a powerful understanding of the domain.

Fair enough. Nonetheless, I have had this experience many times with Eliezer, including when he's dialoguing with people with much more domain experience than Scott.


Can you expand on sexual recombinant hill-climbing search vs. gradient descent relative to a loss function, keeping in mind that I'm very weak on my understanding of these kinds of algorithms and you might have to explain exactly why they're different in this way?


It's about the size of the information bottleneck. [followed by a 6 paragraph explanation]

It's sections like this that show me how many levels above me Eliezer is. When I read Scott's question I thought "I can see that these two algorithms are quite different, but I don't have a good answer for how they're different", and then Eliezer not only had an answer, but a fully fleshed-out mechanistic model of the crucial differences between the two, which he could immediately explain clearly, succinctly, and persuasively, in 6 paragraphs. And he only spent 4 minutes writing it.
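For readers who, like me, could see the two algorithms are different but couldn't articulate how: here's a minimal toy sketch of the contrast (my own illustration, not Eliezer's explanation — the loss function and all parameters are made up for the example). The evolutionary search only passes information between generations through which genomes survive, recombine, and mutate, while gradient descent gets the full gradient of the loss at every step.

```python
import random

def loss(x, target=(3.0, -1.0)):
    """Squared-error loss on a 2-D parameter vector (a toy stand-in)."""
    return sum((xi - ti) ** 2 for xi, ti in zip(x, target))

def evolve(pop_size=40, generations=200, seed=0):
    """Sexual-recombination hill climbing: selection + crossover + mutation.
    The loss is only used to rank genomes; no gradient information flows."""
    rng = random.Random(seed)
    pop = [[rng.uniform(-10, 10) for _ in range(2)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=loss)
        survivors = pop[: pop_size // 2]          # selection
        children = []
        while len(children) < pop_size - len(survivors):
            a, b = rng.sample(survivors, 2)
            child = [rng.choice(pair) for pair in zip(a, b)]  # recombination
            child = [g + rng.gauss(0, 0.1) for g in child]    # mutation
            children.append(child)
        pop = survivors + children
    return min(pop, key=loss)

def gradient_descent(steps=200, lr=0.1, target=(3.0, -1.0)):
    """Gradient descent: every step uses the exact gradient of the loss,
    a far richer per-step signal than survive/don't-survive."""
    x = [0.0, 0.0]
    for _ in range(steps):
        grad = [2 * (xi - ti) for xi, ti in zip(x, target)]
        x = [xi - lr * gi for xi, gi in zip(x, grad)]
    return x

best_evolved = evolve()
best_gd = gradient_descent()
```

Both reach the optimum on this trivial loss, but notice the channel widths: evolution learns one coarse bit per genome per generation (kept or culled), while each gradient step transmits a full real-valued direction of improvement.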
