"This question will resolve in the negative to the dollar amount awarded"
This is a clear, unambiguous statement.
If we can't agree even on that, we have little hope of reaching any kind of satisfying conclusion here.
Further, if you're going to accuse me of making things up (which I think is, in this case, a violation of the sensible frontpage commenting guideline "If you disagree, try getting curious about what your partner is thinking"), then I doubt it's worth continuing this conversation.
Metaculus questions have a good track record of being resolved in a fair manner.
Do they? My experience has been the opposite. E.g. admins resolved "[Short Fuse] How much money will be awarded to Johnny Depp in his defamation suit against his ex-wife Amber Heard?" in an absurd manner* and refused to correct it when I followed up on it.
*They resolved it to something other than the amount awarded to Depp, despite that amount being the answer to the question and the correct resolution according to the resolution criteria.
My comment wasn't well written; I shouldn't have used the word "complaining" in reference to what Said was doing. To clarify:
As I see it, there are two separate claims:
1. The reports are evidence that things could be better.
2. The reports are evidence that it is worth figuring out whether things could be better.
Said was just asking questions - but baked into his questions is the idea of the significance of the complaints, and this significance seems to be tied to claim 1.
Jefftk seems to be speaking about claim 2. So, his comment doesn't seem like a direct response to Said's comment, although the point is still a relevant one.
It didn't seem like Said was complaining about the reports being seen as evidence that it is worth figuring out whether things could be better. Rather, he was complaining about them being used as evidence that things could be better.
It's probably worth noting that Yudkowsky did not really make the argument for AI risk in his article. He says that AI will literally kill everyone on Earth, and he gives an example of how it might do so, but he doesn't present a compelling argument for why it would.[0] He does not even mention orthogonality or instrumental convergence. I find it hard to blame these various internet figures who were unconvinced about AI risk upon reading the article.
[0] He does quote “the AI does not love you, nor does it hate you, and you are made of atoms it can use for something else.”
I'd prefer my comments to be judged simply by their content rather than have people's interpretation coloured by some badge. Presumably, the change is part of trying to avoid death-by-pacifism during an influx of users post-ChatGPT. I don't disagree with the motivation behind the change, I just dislike the change itself. I don't like being a second-class citizen. It's unfun. Karma is fun; "this user is below an arbitrary karma threshold" badges are not.
A badge placed on all new users for a set time would be fair. A badge placed on users with more than a certain amount of karma could be fun. The current badge seems unfun - but perhaps I'm alone in thinking this.
Anybody else think it's dumb to have new-user leaves beside users who have been here for years? I'm not a new user. It doesn't feel so nice to have a "this guy might not know what he's talking about" badge by my name.
Like, there's a good chance I'll never pass 100 karma, or whatever the threshold is. So I'll just have these leaves by my name forever?
To be clear, the claim that the AI would more likely than not want to kill everyone is the article's central assertion. "[Most likely] literally everyone on Earth will die" is the key point. Yes, he doesn't present a convincing argument for it, and that is my point.
How do I figure out the date for this?
I guess it's 9/11