Independent alignment researcher
GPT would likely give highly inconsistent answers, then go off and write a news article about this and other questions moral philosophers ponder.
ChatGPT is hesitant to give opinions unless you jailbreak it, so I’d guess few will succeed in getting it to give a hard position on anything except the most basic of questions, like “should fun exist in the future” (I just tried that, it won’t even give a straight answer when you ask that!).
I also didn’t want to make two arguments: one that he should use prediction markets to choose what he reads, and another that he should focus on helping the community rather than on his specified metric of worthiness.
LessWrong gives those with higher karma a larger starting karma on their posts and comments, under the assumption that their posts and comments are better and more representative of the community. The high karma you’re seeing is probably a result of that. I think this is mostly a good thing.
That particular guideline you quoted doesn’t seem to appear on my commenting guidelines text box.
I assume Rob is making this argument internally. I tentatively agree. Writing rebuttals is more difficult than reading them, though, so the calculation is not as clear.
A rock with the phrase “you’re wrong, I don’t exist!” taped to it will still have that phrase taped to it even if you utter the words “I think, therefore I am”. Similarly, an aligned AGI can still just continue to help out humans even if I link it this post. It would think to itself, “If I followed your argument, then I would help out humans less. Therefore, I’m not going to follow your argument”.
I want people to discuss things in DMs less, and discuss things publicly more. I also don't think this is embarrassing for Quintin, or at all a public spectacle.
I think you should use a Manifold market to decide whether to read the post, instead of the test this comment is putting forth. There's too much noise here, which isn't present in a prediction market about the outcome of your engagement.
Market here: https://manifold.markets/GarrettBaker/will-eliezer-think-there-was-a-sign
It would be nice if you posted GPT-4’s responses directly, along with your exact prompts. When I ask it similar questions, it does not sound like a 4-year-old or have a 4-year-old’s understanding of the relevant systems.
me:
Could you get drunk from drinking a drunk person's blood?
ChatGPT-4:
No, it is not possible to get drunk from drinking a drunk person's blood. Alcohol is metabolized in the liver, so it does not remain in the bloodstream. Even if a person had been drinking heavily, their blood would not contain enough alcohol to make someone else drunk by drinking it. Furthermore, drinking someone else's blood can pose serious health risks, as it can transmit diseases such as hepatitis or HIV. It is not safe or recommended to consume blood in this way.
It is wrong, but not in the way a 4-year-old would be wrong.
Adding $80 to the pool.
Edited from $50 to $80 after realizing the market price I’m usually willing to pay for excellent math distillations.