Ben Pace

I'm an admin of LessWrong. Here are a few things about me.

  • I generally feel more hopeful about a situation when I understand it better.
  • I have signed no contracts nor made any agreements whose existence I cannot mention.
  • I believe it is good to take responsibility for accurately and honestly informing people of what you believe in all conversations; and also good to cultivate an active recklessness toward the social consequences of doing so.

(Longer bio.)

Sequences

AI Alignment Writing Day 2019
Transcript of Eric Weinstein / Peter Thiel Conversation
AI Alignment Writing Day 2018
Share Models, Not Beliefs

Wiki Contributions


Comments


Thanks again. 

I am currently holding a rough hypothesis of "when someone is interested in exploring psychosis and psychedelics, they become more interested in Michael Vassar's ideas", in that the former causes the latter, rather than the other way around.

Is that a claim of this post? It's a long post so I might be forgetting a place where Zvi writes that, but I think most of the relevant parts of this book review are about how MacAskill and EAs are partly responsible for empowering Sam Bankman-Fried: supporting him with great talent, trust from funders, and a positive public image.

Thanks for answering; good to hear that you don't think you've had any severe or long-lasting consequences (though it sounds like one time LSD was a contributor to your episode of bad mental health).

I guess here's another question that seems natural: it's been said that some people take LSD either on the personal advice of Michael Vassar, or otherwise as a result of reading/discussing his ideas. Is either of those true for you?

I am somewhat confused about this.

To be clear I am pro people from organizations I think are corrupt showing up to defend themselves, so I would upvote it if it had like 20 karma or less.

I would point out that the comments criticizing the organization’s behavior and character are getting similar vote levels (e.g. the top comment calls OpenAI reckless and unwise, and has 185 karma and 119 agree-votes).

Yes, there is, I’ll get the post up today.

...did you try to 'induce psychosis' in yourself by taking psychedelics? If so I would also ask about how much you took and if you had any severe or long-lasting consequences.

+9. This is an at times hilarious, at times upsetting story of how a man gained a massive amount of power and built a corrupt empire. It's a psychological study, as well as a tale of a crime, hand-in-hand with a lot of naive ideologues.

I think it is worthwhile for understanding a lot about how the world currently works, including understanding individuals with great potential for harm, the crooked cryptocurrency industry, and the sorts of nerds in the world who falsely act in the name of good.

I don't believe that all the details here are fully accurate, but I believe enough of them are for this to be a story worth reading.

(It is personally upsetting to me that the person who was ~King over me and everyone I knew professionally and personally turned out to be such a spiritually-hollow crook, and to know how close I am to being in a world where his reign continues.)

I think that someone reading this would be challenged to figure out for themselves what assumptions they think are justified in good discourse, and would fix some possible bad advice they took from reading Sabien's post. I give this a +4.

(Below is a not especially focused discussion of some points raised; perhaps after I've done more reviews I can come back and tighten this up.)

Sabien's Fifth guideline is "Aim for convergence on truth, and behave as if your interlocutors are also aiming for convergence on truth."

My guess is that the idea that motivates Sabien's Fifth Guideline is something like "Assume by-default that people are contributing to the discourse in order to share true information and strong arguments, rather than posing as doing that while sharing arguments they don't believe or false information in order to win", out of a sense that there is indeed enough basic trust to realize this as an equilibrium, and also a sense that this is one of the ~best equilibriums for public discourse to be in.

One thing this post argues is that a person's motives are of little interest when one can assess their arguments. Argument screens off authority and many other things too. So we don't need to make these assumptions about people's motives. 

There's a sense in which I buy that, and yet also a sense in which the epistemic environment I'm in matters. Consider two possibilities:

  • I'm in an environment of people aspiring to "make true and accurate contributions to the discourse" but who are making many mistakes/failing.
  • I'm in an environment of people who are primarily sharing arguments and evidence filtered to sound convincing for positions that are convenient to them, and who are pretending to be the sort of people described in the first one.

I anticipate very different kinds of discussions, traps, and epistemic defenses I'll want to have in the two environments, and I do want to treat the individuals differently.

I think there is a sense in which I can just focus on local validity and evaluating the strength of arguments, and that this is generally more resilient to whatever the particular motives are of the people in the local environment, but my guess is that I should still relate to people and their arguments differently, and invest in different explanations or different incentives or different kinds of comment thread behavior.

I also think this provides good pushback on some possible behaviors people might take away from Sabien's fifth guideline. (I don't think that this post correctly understands what Sabien is going for, but I think bringing up reasonable hypotheses and showing why they don't make sense is helpful for people's understanding of how to participate well in discourse.)

Simplifying a bit, this is another entry in the long-running discourse on how adversarially one should model individuals in public discourse, and what assumptions to make about other people's motives, and I think this provides useful arguments about that topic.

I give this a +9, one of the most useful posts of the year.

I think that a lot of these are pretty non-obvious guidelines that make sense when explained, and I continue to put effort into practicing them. Separating observations and inferences is pro-social, making falsifiable claims is pro-social, etc.

I like this document both for carefully condensing the core ideas into 10 short guidelines, and also having longer explanations for those who want to engage with them.

I like that it’s phrased as guidelines rather than rules/norms. I do break these from time to time and endorse it.

I don't agree with everything, this is not an endorsement, and I have many nuances and different phrasings and different standards, but I think this is a worthwhile document for people to study, especially those approaching this sort of discourse for the first time, and it's very well-written.

It's a fine post, but I don't love this set of recommendations and justifications, and I feel like rationalist norms & advice should be held to a high standard, so I'm not upvoting it in the review. I'll give some quick pointers to why I don't love it.

  1. Truth-Seeking: Seems too obvious to be useful advice. Also I disagree with the subpoint about never treating arguments like soldiers; I think two people inhabiting opposing debate-partner roles is sort of captured by this, and I think that is a healthy truth-seeking process.
  2. Non-Violence: All the examples of things you're not supposed to do in response to an argument are things you're not supposed to do anyway. Also it seems too much like it's implying the only response to an argument is a counter-argument. Sometimes the correct response to a bad argument is to fire someone or attempt to politically disempower them. As an example, Zvi Mowshowitz presents evidence and argument in Repeal the Jones Act of 1920 that there are a lot of terrible and disingenuous arguments being put forward by unions that are causing the total destruction of the US shipping industry. The generator of these arguments seems reliably non-truth-tracking, and I would approve of someone repealing the Jones Act without persuading such folks or spending the time to refute each and every argument.
  3. Non-Deception: I'll quote the full description here:
    1. "Never try to steer your conversation partners (or onlookers) toward having falser models. Where possible, avoid saying stuff that you expect to lower the net belief accuracy of the average reader; or failing that, at least flag that you're worried about this happening."
    2. I think that the space of models one walks through is selected for both accuracy and usefulness. Not all models are equally useful. I might steer someone from a perfectly true but vacuous model, to a less perfect but more practical model, thereby net reducing the accuracy of a person's statements and beliefs (most of the time). I prefer something more like a standard of "Intent to Inform".

Various other ones are better, some are vague, and many things are presented without justification, where I suspect I might disagree if it were offered. I think Zack M. Davis's critique of 'goodwill' is good.
