Ben Pace

I'm an admin of this site; I work full-time on trying to help people on LessWrong refine the art of human rationality.

Longer bio:


AI Alignment Writing Day 2019
Transcript of Eric Weinstein / Peter Thiel Conversation
AI Alignment Writing Day 2018
Share Models, Not Beliefs


The Book of HPMOR Fanfics

(You've no idea how hard it is to not scroll over things like that.)

Embedded Interactive Predictions on LessWrong

(That being said, I think this integration is awesome and kudos to everyone. Just keeping my priors sensible :) 

I do not endorse this as a way to end parentheticals! Grrr!

DanielFilan's Shortform Feed

(Which you get using option-m on a Mac.)

AGI Predictions

“Catastrophic” is normally used in the term “global catastrophic risk” and means something like “kills 100,000s of people”, so I do think “doesn’t necessarily kill anyone but could’ve killed a couple of people” is a fairly different meaning. In retrospect I realize that I put my answer to the second question far too high — if it just means “a deceptively aligned system nearly gives a few people in hospital a fatal dosage, but it’s stopped and we don’t know why the system messed up”, then it’s quite plausible that nothing very substantial will happen as a result.

Embedded Interactive Predictions on LessWrong

woop woop making predictions in posts is the way to go

AGI Predictions

Yeah, any LWer is welcome to record their predictions :)

AGI Predictions

I also noticed the difference in Daniel’s probabilities there, and thought it was substantial. But it doesn’t seem unreasonable to me. The existing AI x-risk community has changed the global conversation on AI, and has also been responsible for a great deal of funding and direct research on many related technical problems. I could talk about the specific technical work, or the impact that the AI FOOM Debate had on Superintelligence, which in turn influenced OpenPhil, or CFAR on FLI on Musk on OpenAI. Or I could go into detail about the research being done on topics like Iterated Amplification and Agent Foundations, and the ways this seems to me to be clear progress on subproblems. I’m not sure exactly what alternatives you might have in mind.

Embedded Interactive Predictions on LessWrong

I think TurnTrout is a pretty good example of someone who did stuff when anyone could but nobody did :)

Covid 11/12: The Winds of Winter

I just saw this in Recent Discussion and want to add a detail to Christian's comment: you can set your comment policy freely on Personal Blog posts, whereas Frontpage posts are held to stricter standards.
