I'm an admin of LessWrong. Here are a few things about me.
Randomly: If you ever want to talk to me for an hour about anything you like, I am happy to be paid $1k to do that.
@dirk Anti-reacts aren't for disagreement, they're for "this is an inappropriate use of the react" (e.g. if someone writes "haha" on something that wasn't meant as a joke, or someone hits "typo" on something that is actually correctly spelled).
So please don't anti-react my "Plus One" react if you strongly disagree with it. You can just react to the claim with your epistemic state (as you have done with your disagree-react).
I'm not interested in requesting that you expand on it, but thanks for the offer. (I'm not asking you not to, to be clear.)
To respond to your point: you may be aware that there's a large class of Singerian EAs who are pathologically self-guilting and prone to taking personal responsibility for the bad things in the world, and it was kind to some of them to point out what was believed to be a true argument for why that was not the case here. I don't think it is primarily explained by self-serving motivation. As evidence, you can see from the comments that Eliezer was perfectly open to evidence that he was mistaken (he encouraged Habryka to post their chat publicly, where Habryka gave counterevidence). So I think it's unfair to read poor intent into this, as opposed to genuine empathy/sympathy for people who are renowned for beating themselves up about things in the world that they are barely responsible for and have relatively little agency over.
(Small suggestion: record the call and link a transcript from a quick take or comment. I'm happy to pay for rev.com to make a transcription.)
As a relevant point, he also writes things like this, where he tries to keep EAs from unnecessarily beating themselves up. (I disagree with him on the facts, but I think it was a kind thing to do.)
FWIW I almost missed the moderation guidelines for this post; it's rare that people actually edit them.
A bunch of points that are kind of the same point:
Some other factors that are relevant:
To be clear, I think he could do a better job of understanding the people he's corresponding with over text, and I am still confused about why he seems (to me) below average at this.
Hm. I have been interpreting it as having more of a 'concerning!' element to it. More like when your arch-nemesis surprisingly moves into a house on the same street as you than when your true love does. Am I wrong?
(I’d say yes to Toby, who was a figurehead for the movement and a cofounder of GWWC, but no to Anders/Bostrom, who were far more removed politically and philosophically.)
I'm not sure I follow[1]. It's not a perfect match for the opposite ("Have fewer people take MIRI seriously") but it's roughly/functionally in the opposite direction in terms of their funding choices and influence on the discourse.
You may be responding to an earlier edit of mine; I somewhat substantially edited within ~5 mins of commenting, and then found you'd already replied.
It’s actually been this way the whole time. When I first met Eliezer 10 years ago at a decision theory workshop at Cambridge University, I asked him over lunch what his AI timelines were; he promptly blew a raspberry as his answer and then fell asleep.