My model of Eliezer thinks relatively carefully about most of his comms, but sometimes he gets triggered and says some things in ways that seem quite abrasive (like the linked EA Forum comment). I think this is a thing that somewhat inevitably happens when you are online a lot, and end up arguing with a lot of people who themselves are acting quite unreasonably.
Like, if you look at almost anyone who posts a lot online in contexts that aren't purely technical discussion, they almost all end up frequently snapping back at people. This is true of Gwern, Zvi, Buck, and to a lesser degree even Scott Alexander if you look at a bunch of his older writing, and most recently I can see even Kelsey Piper, who has historically been extremely measured, end up snapping back at people on Twitter in ways that suggest to me a lot of underlying agitation. I also do this not-too-infrequently.
I feel pretty confused about the degree to which this is just a necessary part of having conversations on the internet, or to what degree this is a predictable way people make mistakes. I am currently tending towards the former, but it seems like a hard question I would like to think more about.
I dispute that I frequently snap at people. I just read over my last hundred or so LessWrong comments and I don't think any of them are well characterized as snapping at someone. I definitely agree that I sometimes do this, but I think it's a pretty small minority of things I post. I think Eliezer's median level of obnoxious abrasive snappiness (in LessWrong comments over the last year) is about my 98th percentile.
I think that snapping back at people is most likely caused by the belief that the person one snapped at did something clearly stupid or hasn't bothered to do a basic search of the related literature.
I'm not sure if his approach is actually productive for this, but for the longest time, the standard response to Eliezer's concerns was that they're crazy sci-fi. Now that they're not crazy sci-fi, the response is that they're obvious. Constantly reminding people that his crazy predictions were right (and everyone else was wrong in predictable ways) is a strategy to get people to actually take his future predictions seriously (even though they're obviously crazy sci-fi).
A bunch of points that are kind of the same point:
Some other factors that are relevant:
To be clear, I think he could do a better job of understanding the people he's writing with over text, and I am still confused about why he seems (to me) below average at this.
I'm not aware of a good reason to believe 1. 2 seems likely; MIRI has a number of different people working on its public communications, which I would expect to produce more conservative decisions than Eliezer alone, and which means that some of its communications will likely be authored by people less inclined to abrasiveness. (Also, I have the feeling that Eliezer's abrasive comments are often made in his personal capacity rather than qua MIRI representative, which I think makes them weaker evidence about the org.)
The world needs all types of activism: from the firebrands to the bridge-builders. I too find his tone to be abrasive at times. He can be self-aggrandizing, pompous, and downright insulting. In my experience this is not uncommon for people (most typically men) who believe they’re the smartest person in the room.
Personally, I try to live by, “first, be kind.” I’ve found the most success with leading with empathy, but it’s not effective in every circumstance. Some people you can reach better by showing them our commonalities. Some people need to be shocked into thinking about what the implications of their beliefs are. Sometimes a sit-in is effective in bringing about needed change, other times it takes a riot.
As far as I understand, Eliezer is abrasive for these reasons:
As evidenced by him claiming that an approach is "Not obviously stupid on a very quick skim" and congratulating the author on eliciting THAT positive a review. Alas, I have also seen obviously stupid alignment-related ideas make their way at least to LessWrong.
However, it would be possible if the ASIs required OOMs more resources per token than humans do. In that case, applying the ASIs would be too expensive. Alas, this is unlikely.
IMO Eliezer also believes that the entire approach is totally useless. However, a case against this idea can be found in comments mentioning Kokotajlo (e.g. mine).
I think Eliezer is just really rude and uninterested in behaving civilly, and has terrible intuitions about a wide variety of topics, especially topics related to how other people think or behave. And he substantially evaluates whether people are smart or reasonable based on how much they agree with him or respect him, and therefore writes off a lot of people and behaves contemptuously toward them. And he ends up surrounded by people who either hero worship him or understate their disagreements with him in order to get along with him—many of his co-workers would prefer he didn't act like an asshole on the internet, but they can't make that happen.
I think the core problem with Eliezer is that he spent his formative years arguing on the internet with people on listservs, most of whom were extremely unreasonable. And so he's used to the people around him being mostly idiots with incredibly stupid takes and very little value to add. So he is quite unused to changing his mind based on things other people say.
I don't think you should consider him to be rational with respect to this kind of decision. (I also don't think you should consider him to be rational when thinking about AI.)
I personally would not recommend financial support of MIRI, because I'm worried it will amplify net negative communications from him, and I'm worried that it will cause him to have more of an effect on discourse e.g. on LessWrong. I like and respect many MIRI staff, and I think they should work elsewhere and on projects other than amplifying Eliezer.
(Eliezer is pleasant and entertaining in person if you aren't talking about topics where he thinks your opinion is dumb. I've overall enjoyed interacting with him in person, and he's generally treated me kindly in person, and obviously I'm very grateful for the work he did putting the rationalist community together.)
I think you probably didn't read the moderation guidelines for this post:
Moderation Note: Please don't comment with "sides", e.g. "Eliezer is [good]/[bad]", "people who find him abrasive are [right]/[wrong]".
This comment seems to me to straightforwardly violate them. To be clear, I am not saying the things you are saying here should not be said, it just seems like the author was trying to have a pretty different conversation (and my guess is the author is right that whatever macro conversation is going on here will go better if people follow these guidelines for now).
Oh, you're right, I didn't read those. Feel free to remove the comment or whatever you think is the right move.
Makes sense. I think I'll move it out of the answers into the comments but leave it around; I might delete it if it ends up dominating the rest of the conversation.
FWIW I almost missed the moderation guidelines for this post; it's rare that people actually edit them.
Fair enough! Agree it's not super widely used, but it still seems like we should enforce the guidelines when people do use them.
it just seems like the author was trying to have a pretty different conversation
I think mostly in tone. If I imagine a somewhat less triggered intro sentence in Buck's comment, it seems to be straightforwardly motivating answers to the two questions at the end of the OP:
1. None of Eliezer's public communication is -EV for AI Safety
2. Financial support of MIRI is likely to produce more consistently +EV communication than historically seen from Eliezer individually.
ETA: I do think the OP was trying to avoid spawning demon threads, which is a good impulse to have (especially when it comes to questions like this).
Eliezer is pleasant and entertaining in person if you aren't talking about topics where he thinks your opinion is dumb.
Except that I'm not sure I understand the distinction between Eliezer believing that an opinion is dumb and the opinion actually being dumb. The examples I cite in my other comment are, in my opinion, close to the latter.
However, while Eliezer did concede in cases like this or that, I can't exclude the possibility that he has ranted about something[1] that is verified to be not actually stupid. Alas, the two most prominent and highest-stakes examples, which are the difficulty of ASI alignment[2] and the ASI takeoff speed[3], aren't yet resolved, since mankind hasn't created any ASIs or measured the takeoff speeds.
The most prominent candidate of which I am aware is his comments on his most recent post and his likely failure to understand "how you could build something not-BS" on Cotra's estimates. I explained how Kokotajlo obtained a fairly good prediction; Eliezer criticized Cotra's entire methodology instead of the parameter choice.
For reference, I have compared SOTA research to kids' psychology, and actually interesting research to aligning adults or even to solving problems like the AIs being the new proletariat.
Which Yudkowsky assumes to be very fast and the AI-2027 forecast assumes to be rather slow: the scenario had Agent-4 becoming adversarial in September 2027, then solving mechinterp and creating Agent-5 in November 2027; by June 2028, Agent-5 was expected to become wildly superintelligent.
I don't want to ruffle any feathers, but this has been bugging me for a while and has now become relevant to a decision since MIRI is fundraising and is focused on communication instead of research.
I love Eliezer's writing - the insight, the wit, the subversion. Over the years, though, I've seen many comments from him that I found off-putting. Some of them, I've since decided, are probably net positive and I just happen to be in a subgroup they don't work for (for example, I found Dying with Dignity discouraging, but saw enough comments saying it had been helpful for people that I've changed my mind and now think it was a net positive).
However, other comments are really difficult for me to rationalize. I just saw one recently on the EA Forum to the effect that EAs who shortened their timelines only after ChatGPT have the intelligence of a houseplant. I don't have any model of social dynamics by which making that statement publicly is plausibly +EV.
When I see these public dunks/brags, I experience cognitive dissonance, because my model of Eliezer is someone who is intelligent, rational, and aiming to use at least his public communications to increase the chance that AI goes well. I'm confident that he must have considered this criticism before, and I'd expect him to arrive at a rational policy after consideration. And yet, I see that when I recommend "If Anyone Builds It", people's social opinions of Eliezer affect their willingness to read/consider it.
I searched LW, and if it has been discussed before it is buried in all the other mentions of Eliezer. My questions are:
1. Does anyone know if there is some strategy here, or some model for why these abrasive statements are actually +EV for AI Safety?
2. Does MIRI in its communication strategy consider affective impact?
Phrased differently, are there good reasons to believe that:
1. None of Eliezer's public communication is -EV for AI Safety
2. Financial support of MIRI is likely to produce more consistently +EV communication than historically seen from Eliezer individually.
Note: I've intentionally not cited many examples here. I know that "abrasive" is subjective and am confident that many people don't have the same reaction. None of this is intended to put down Eliezer, for whom I have great respect.