Ben Pace

I'm an admin of this site; I work full-time on trying to help people on LessWrong refine the art of human rationality.

If you'd like to talk with me about your experience of the site, and let me ask you questions about it, book a conversation with me here: https://calendly.com/benapace. I'm currently available Thursday mornings, US West Coast Time (Berkeley, California).

Ben Pace's Comments

On R0

I appreciate the note about MealSquares :)

April Fools: Announcing LessWrong 3.0 – Now in VR!

I have to mention that Mozilla Hubs room names get autogenerated when you create them. You can change them, but the system picks the initial name. And the name of the room we built, a name we did not pick, was automatically selected to be "Expert Truthful Congregation". The kabbles are strong with this one, as Ray says.

April Fools: Announcing LessWrong 3.0 – Now in VR!

(I added your linked image to your comment.)

Hanson vs Mowshowitz LiveStream Debate: "Should we expose the youth to coronavirus?" (Mar 29th)

Feedback form if you'd like to fill it out: https://docs.google.com/forms/d/e/1FAIpQLSd5bgmdN3pGFiGZWCmwqzN6QA3jjVDELJ4x6KhpKZbQDHAH-A/viewform

[Update: New URL] Today's Online Meetup: We're Using Mozilla Hubs

Well, that sure was something.

Zvi and Robin did a great job hashing out the details of the policy proposal, and I appreciate them doing this so quickly (I contacted them on Tuesday). My thanks to the 5 or so people who joined the call to ask questions, and also to the 100 people who watched for the full 2 hours. (I was only expecting 40-80 people to even show up, so I am a bit surprised!)

The Mozilla Hubs event was very much an experiment. The first 20 minutes were hectic, with people asking all the usual questions you ask at parties, like "CAN ANYONE HEAR ME!", "WHERE AM I?" and "Why is there a panda?", but after that it calmed down.

It was kinda awkward; there was no body language and there were no visual cues for when you should speak in a group conversation, so there was a lot of silence. Eventually there were two rooms of 15-20 people each in a big circle conversation, it started getting pretty chill, and I had a good time for like an hour before leaving to cook pasta (thank you to the guy who shared an improved pasta recipe with us all, it made my lunch better). That said, we'll pick a different platform as the main one in future.

So yeah. I'm gonna reach out to people to do more debates, ping me if you have an idea for a conversation you want to have. Thanks all for coming :)

P.S. Feedback form if you'd like to fill it out: https://docs.google.com/forms/d/e/1FAIpQLSd5bgmdN3pGFiGZWCmwqzN6QA3jjVDELJ4x6KhpKZbQDHAH-A/viewform

Reminder: Blog Post Day II today!

Okay, I took a post out of my drafts and it's ready to post, and I commit to posting it. I've pinged a person for permission to quote them, and when they get back to me I'll hit publish.

Blog Post Day II

I have had a helluva day preparing for the debate+meetup tomorrow. I'll try to get something out before I go to bed; it might be short, and it might be about covid, sorry about that.

Benito's Shortform Feed

Thinking more, I think there are good arguments for taking actions that induce anthropic uncertainty as a by-product; this is the standard Hansonian situation where you build lots of ems of yourself to do bits of work and then turn them off.

But I still don't agree with the people in the situation you describe, because they're optimising over their own epistemic state; I think it's morally wrong of them to do that. I'm totally fine with a law requiring future governments to rebuild you / an em of you and give you a nice life (perhaps as a trade for working harder today to ensure that the future world exists), but that's conceptually analogous to extending your life, and it doesn't require causing you to believe false things. You know you'll be turned off and that a copy of you will later be turned on; there's no anthropic uncertainty, you're just going to get lots of valuable stuff.

Benito's Shortform Feed

I just don't think it's a good decision to make, regardless of the math. If I'm nearing the end of the universe, I'd prefer to spend all my compute on maximising fun / searching for a way out instead. Trying to run simulations so that I no longer know whether I'm about to die seems like a dumb use of compute. I can bear the thought of dying, dude; there are better uses of that compute. You're not saving yourself, you're just intentionally making yourself confused because you're uncomfortable with the thought of death.

Benito's Shortform Feed

Now that's fun. I need to figure out some more stuff about measure; I don't quite get why some universes should be weighted more than others. But I think that sort of argument is probably a mistake: even if the lawful universes get more weighting for some reason, then unless you also have reason to think that they don't run simulations, there are still loads of simulations within each lawful universe, setting the balance in favour of simulation again.
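
To put rough numbers on that counting argument (the symbols here are just illustrative): say lawful universes get some extra weight $w \ge 1$, and each lawful universe contains one basement copy of you plus $N$ simulated copies. Then the weighted odds of being simulated come out to roughly

$$\frac{wN}{wN + w} = \frac{N}{N+1},$$

so the extra weighting $w$ cancels out, and for large $N$ the balance still favours simulation.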
