Ben Pace

I'm an admin of LessWrong. Here are a few things about me.

  • I generally feel more hopeful about a situation when I understand it better.
  • I have signed no contracts nor made any agreements whose existence I cannot mention.
  • I believe it is good to take responsibility for accurately and honestly informing people of what you believe in all conversations; and also good to cultivate an active recklessness toward the social consequences of doing so.

(Longer bio.)

Sequences

AI Alignment Writing Day 2019
Transcript of Eric Weinstein / Peter Thiel Conversation
AI Alignment Writing Day 2018
Share Models, Not Beliefs

Comments

It does seem worth having a term here! +4 for pointing it out and the attempt.

I've been re-reading tons of old posts for the review to remember them and see if they're worth nominating, and then writing a quick review if yes (and sometimes if no).

I've gotten my list of ~40 down to 3 long ones. If anyone wants to help out, here are some I'd appreciate someone re-reading and giving a quick review of in the next 2 days.

  1. To Predict What Happens, Ask What Happens by Zvi
  2. A case for AI alignment being difficult by Jessicata
  3. Alexander and Yudkowsky on AGI goals by Scott Alexander & Eliezer Yudkowsky

A great, short post. I think it retreads similar ground to what I aim to point at in A Sketch of Good Communication, and I think in at least one important regard it does much better. I give this +4.

I think the analogy in this post makes a great point very clearly, and improves upon the discussion of how those who control the flow of information mislead people. +4

I have various disagreements with some of the points in this post, and I don't think it adds enough new ideas to be strongly worthy of winning the annual review, but I am grateful to have read it, and for worthwhile topics it helps to retread the same ground in slightly different ways with some regularity. I will give this a +1 vote.

(As an example disagreement, there's a quote of a fictional character saying "There will be time enough for love and beauty and joy and family later. But first we must make the world safe for them." A contrary hypothesis I believe in more is that growing from children into adults involves bringing to life all parts of us that have been suffocated by Moloch, including many of these very powerful very human parts, and it is not good for these parts of us to be lost to the world until after the singularity.)

I like something about this post. It might just be the way it sets out to save conversations that are going sideways. Anyway, I'd be interested to hear from the author how much use this post ended up getting. For now, I'll give it a positive vote in the review.

I re-read about 1/3rd of this while looking through posts to nominate. I think it's an account of someone who believes in truth-seeking, engaging with the messy political reality of an environment that cared about the ideals of truth-seeking far more than most other places on earth, and finding it to either fall short of or sometimes betray those ideals. Personally I find a post like this quite helpful to read and ruminate on, to think about my ideals and how they can be played out in society.

I can't quickly tell if it is the right thing for the LW review or not, feeling a bit more like a personal diary, with a person telling their version of a complicated series of events, with all of the epistemic limitations of such an account (i.e. there will often be multiple other perspectives on what happened that I would want to hear before I am confident about what actually happened)... though with more care and aspirations to truth-seeking ideals than most people would care to put in or even know one could aspire to.

I'll think about it more later, but for now I'm giving it a positive vote to see it through the next phase of the review.

I feel more responsibility to be the person holding/tracking the earnest hypothesis in a 1-1 context, or if I am the only one speaking; in larger group contexts I tend to mostly ask "Is there a hypothesis here that isn't or likely won't be tracked unless I speak up" and then I mostly focus on adding hypotheses to track (or adding evidence that nobody else is adding).

I don't know how to quickly convey why I find this point so helpful, but I find this to be a helpful pointer to a key problem, and the post is quite short, and I hope someone else positively votes on it. +4.

I also believe that the data making EA+CEA look bad is the causal reason why it was taken down. However, I want to add some slight nuance.

I want to contrast a model whereby Angelina Li did this while explicitly trying to stop CEA from looking bad, versus a model whereby she senses that something bad might be happening and that she might be held responsible (e.g. within her organization / community), and is executing a move that she's learned is 'responsible' from the culture around her.

I think many people have learned to believe the reasoning step "If people believe bad things about my team that I think are mistaken, based on the information I've given them, then I am responsible for not misinforming people, so I should take the information away, because it is irresponsible to cause people to have false beliefs". I think many well-intentioned people will say something like this, and that this is probably for two reasons (borrowing from The Gervais Principle):

  1. This is a useful argument for powerful sociopaths to use when they are trying to suppress negative information about themselves.
  2. The clueless people below them in the hierarchy need to rationalize why they are following the orders of the sociopaths to prevent people from accessing information. The idea that they are 'acting responsibly' is much more palatable than the idea that they are trying to control people, so they willingly spread it and act in accordance with it.

A broader model I have is that there are many such inference-steps floating around the culture that well-intentioned people can accept as received wisdom, and they got there because sociopaths needed a cover for their bad behavior and the clueless people wanted reasons to feel good about their behavior; and that each of these adversarially optimized inference-steps needs to be fought and destroyed.
