I'm an admin of LessWrong. Here are a few things about me.
Randomly: if you ever want to talk to me about anything you like for an hour, I'm happy to be paid $1k to do that.
A bunch of points that are kind of the same point:
Some other factors that are relevant:
To be clear, I think he could do a better job of understanding the people he's writing with over text, and I am still confused about why he seems (to me) below average at this.
Hm. I have been interpreting it as having more of a 'concerning!' element to it. More like when your arch-nemesis surprisingly moves into a house on the same street as you than when your true love does. Am I wrong?
(I’d say yes to Toby, who was a figurehead for the movement and a cofounder of GWWC, but no to Anders/Bostrom, who were far more removed politically and philosophically.)
I'm not sure I follow[1]. It's not a perfect match for the opposite ("Have fewer people take MIRI seriously"), but it's roughly/functionally in the opposite direction in terms of their funding choices and influence on the discourse.
You may be responding to an earlier edit of mine; I edited it somewhat substantially within ~5 mins of commenting, and then found you'd already replied.
Here's a link to the wiki page for Sinclair's razor, which also matches a comment I thought about writing here but didn't.
I definitely don't think that Open Phil thought of "have more people take MIRI seriously" as a core objective.
FWIW I heard a rumor that they thought of roughly the opposite, "Have people think OpenPhil doesn't take MIRI seriously", as an objective. I heard a story that when OpenPhil staff interviewed lots of academics about doing grantmaking in the field of AI, all the academics strongly dismissed MIRI as cranks and as bad to associate with, and OpenPhil felt their credibility would be harmed by associating with MIRI.
This is consistent with (and somewhat supported by) OpenPhil's grant report on MIRI, which says they could have picked anywhere between $1.5M and $0.5M, and they picked the latter for signaling reasons.
Yeah, but the quote links to his original statement from August 29th on LessWrong.
(I quoted the slightly later version on Cold Takes because it generalizes the statement beyond its original context.)
That’s the sixth one I announced in the OP.
In late 2022, Karnofsky wrote:
I don’t think we’re at the point of having much sense of how the hopes and challenges net out; the best I can do at this point is to say: “I don’t currently have much sympathy for someone who’s highly confident that AI takeover would or would not happen (that is, for anyone who thinks the odds of AI takeover … are under 10% or over 90%).”
I think this is later than what you're asking about; I would also guess that this was Karnofsky's private belief for a while before he published it, but I'm not sure since when.
FWIW, I almost missed the moderation guidelines for this post; it's rare that people actually edit them.