Ben Pace

I'm an admin of this site; I work full-time on trying to help people on LessWrong refine the art of human rationality.

If you'd like to talk with me about your experience of the site, and let me ask you questions about it, book a conversation with me here: I'm currently available Thursday mornings, US West Coast Time (Berkeley, California).

Ben Pace's Comments

2018 Review: Voting Results!

The whole point is that it does give you more points, I think.

2018 Review: Voting Results!

I can imagine, similar to how we have a button for 're-order the posts', we could have a button for 'normalise my votes'.
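As a hypothetical sketch of what a 'normalise my votes' button might do (this is not the actual LessWrong implementation; the budget value and function name are made up for illustration), it could rescale a voter's allocations so their total spent points hit a fixed budget:

```python
def normalize_votes(votes, budget=500):
    """Rescale a voter's point allocations so the total absolute
    points spent equal a fixed budget.

    Hypothetical sketch only; `budget=500` is an assumed value.
    `votes` maps post ids to signed point allocations.
    """
    total = sum(abs(v) for v in votes.values())
    if total == 0:
        return dict(votes)  # nothing to rescale
    scale = budget / total
    return {post: v * scale for post, v in votes.items()}
```

For example, a voter who spent 4 points on one post and -1 on another would, under a budget of 10, have those scaled to 8 and -2, preserving the relative weights.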

2018 Review: Voting Results!

We built strong/weak upvotes on 1st June 2018; before then, every vote was cast at your max strength. I could imagine that being responsible for the apparent info-cascades on very popular posts.

2018 Review: Voting Results!

Yeah! I also noticed this when looking over the results; there was a paragraph on it in the OP that I cut.

2018 Review: Voting Results!

You're quite right, fixed :)

Reason and Intuition in science
Ben Pace · 5d · Moderator Comment

In this post the author gives someone's real name and claims that they're the author of the quoted paragraph. We got an intercom message from a user claiming to be that person, asking us to remove the post, given that (a) the post provides no evidence of the association, (b) they say the association is harmful to them, and (c) it now shows up as the fifth result on Google when searching for their name.

Doxxing attempts, whether true or false, are pretty bad, and I do think that LW's SEO is giving this claim more Google prominence even though the post provides no evidence for it. In this case I will edit any mentions of the person's name here to be the rot-13'd version of the name. You can still recover the name by entering the rot-13'd text into a decoder, but it will not be highly searchable on Google.
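For reference, rot-13 is a simple letter substitution that shifts each letter 13 places, which makes it its own inverse; in Python it is available as a text transform in the standard `codecs` module (a minimal sketch, not part of how the moderation edit was actually done):

```python
import codecs


def rot13(text: str) -> str:
    """Apply the rot-13 substitution; applying it twice
    recovers the original text, since 13 + 13 = 26."""
    return codecs.encode(text, "rot_13")
```

Because the cipher is self-inverse, the same function both obscures and recovers a name, e.g. `rot13("Hello")` gives `"Uryyb"` and `rot13("Uryyb")` gives `"Hello"` back.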

Coherent behaviour in the real world is an incoherent concept

Just a note that in the link that Wei Dai provides for "Relevant powerful agents will be highly optimized", Eliezer explicitly assigns '75%' to 'The probability that an agent that is cognitively powerful enough to be relevant to existential outcomes, will have been subject to strong, general optimization pressures.'

Even if he doesn't, it seems like a common implicit belief in the rationalist AI safety crowd and should be debunked anyway.


Go F*** Someone

Not offering a general opinion here right now, but I want to briefly respond to the particular phrasing of:

"Given that there is a wide variety of readers, are we sufficiently sure that this will not needlessly offend or upset some of them?"

As stated, this is far too costly a standard. This is the internet, where an enormous number of people can see your content, all with idiosyncratic feelings and life stories, and the amount of work required to ensure that zero readers will feel offended or upset is overwhelming and silencing.

Reality-Revealing and Reality-Masking Puzzles

I think that losing your faith in civilizational adequacy does feel more like a deconversion experience. All your safety nets are falling, and I cannot promise you that we'll replace them all. The power that 'made things okay' is gone from the world.
