pseud
pseud has not written any posts yet.

It's possibly just a matter of how it's prompted (the hidden system prompt). I've seen similar responses from GPT-4-based chatbots.
The cited markets often don't support the associated claim.
"This question will resolve in the negative to the dollar amount awarded"
This is a clear, unambiguous statement.
If we can't agree even on that, we have little hope of reaching any kind of satisfying conclusion here.
Further, if you're going to accuse me of making things up (which I think is, in this case, a violation of the sensible frontpage commenting guideline "If you disagree, try getting curious about what your partner is thinking"), then I doubt it's worth continuing this conversation.
I think the situation is simple enough we can talk directly about how it is, rather than how it might seem.
The question itself does not imply any kind of net award, and the resolution criteria do not mention any kind of net award. Further, the resolution criteria are worded in a way that implies the question should not be resolved to a net award. So, if you are going to argue in favour of a net award, it would make sense to address why you are going against the resolution criteria and, in doing so, resolving to something other than the answer to the question asked.
Here are the resolution criteria...
Metaculus questions have a good track record of being resolved in a fair manner.
Do they? My experience has been the opposite. E.g. admins resolved "[Short Fuse] How much money will be awarded to Johnny Depp in his defamation suit against his ex-wife Amber Heard?" in an absurd manner* and refused to correct it when I followed up on it.
*they resolved it to something other than the amount awarded to Depp, despite that amount being the answer to the question and the correct resolution according to the resolution criteria
My comment wasn't well written; I shouldn't have used the word "complaining" in reference to what Said was doing. To clarify:
As I see it, there are two separate claims:
1. The reports are evidence that things could be better.
2. The reports are evidence that it is worth figuring out whether things could be better.
Said was just asking questions - but baked into his questions is the idea of the significance of the complaints, and this significance seems to be tied to claim 1.
Jefftk seems to be speaking about claim 2. So, his comment doesn't seem like a direct response to Said's comment, although the point is still a relevant one.
It didn't seem like Said was complaining about the reports being seen as evidence that it is worth figuring out whether things could be better. Rather, he was complaining about them being used as evidence that things could be better.
It's probably worth noting that Yudkowsky did not really make the argument for AI risk in his article. He says that AI will literally kill everyone on Earth, and he gives an example of how it might do so, but he doesn't present a compelling argument for why it would.[0] He does not even mention orthogonality or instrumental convergence. I find it hard to blame the various internet figures who remained unconvinced about AI risk after reading the article.
[0] He does quote “the AI does not love you, nor does it hate you, and you are made of atoms it can use for something else.”
I'd prefer my comments to be judged simply by their content rather than have people's interpretation coloured by some badge. Presumably, the change is part of trying to avoid death-by-pacifism during an influx of users post-ChatGPT. I don't disagree with the motivation behind the change; I just dislike the change itself. I don't like being a second-class citizen. It's unfun. Karma is fun; "this user is below an arbitrary karma threshold" badges are not.
A badge placed on all new users for a set time would be fair. A badge placed on users with more than a certain amount of karma could be fun. The current badge seems unfun - but perhaps I'm alone in thinking this.
I would gladly suffer a hundred years of pain if it were the only way for me to live one more good day. I think a world where a thousand suffer but one lives a good life is vastly superior to a world in which only ten suffer but none live a good life. Good is a positive quality, but suffering is a zero quality: the absence of a thing, rather than its negative form. So, no matter how much suffering there is, it never offsets even the smallest amount of good.
This is a view that came naturally to me, but it isn't a view I've noticed others share.
The experience of pain...
I agree there's nothing about consciousness specifically, but it's quite different to the hidden prompt used for GPT-4 Turbo in ways which are relevant. Claude is told to act like a person, GPT is told that it's a large language model. But I do now agree that there's more to it than that (i.e., RLHF).