I'm trying to think of ways to distinguish "AI drove them crazy" from "AI directed their pre-existing crazy towards LW".
I mean, the friend probably has your best interest in mind: "you'll be glad you jumped".
My experience is that people feel like they're saying it for your own good, but it's not like they're carefully running simulations of everything they know about you and coming to a rigorous conclusion. They're running primarily on script-following, projection, and heuristics, and even if they weren't, you have more information about yourself in that moment than they do.
Before trusting the rat data, I'd want to investigate how similar their reaction to MDMA is to that of humans. When I did the comparison with ketamine, I found there is no known anesthetic dose of ketamine in mice and rats; you always combine it with another drug. That makes me think rats and mice are not useful models of ketamine's effects in humans.
Not that I found in 2018, at least.
Sure, but "is my friend upset" is very different from "is the sum total of all the positive and negative effects of this, from first order out to infinite order, positive"
Mm, I think sometimes I'd prefer to judge on the standard of whether the outcome is good, rather than exclusively on the rules of behavior.
This made my blood run cold, despite my thinking it would be good if Said left LessWrong.
My first thought when I read "judge on the standard of whether the outcome is good" is that this lets you cherry-pick your favorite outcomes without justifying them. My second is that knowing whether something is good can be very complicated even after the fact, so predicting it ahead of time is challenging even if you are perfectly neutral.
I think it's good LessWrong('s admins) allows authors to moderate their own posts (and I've used that to ban Said from my own posts). I think it's good LessWrong mostly doesn't allow explicit insults (and wish this were applied more strongly). I think it's good LessWrong evaluates commenting patterns, not just individual comments. But "nothing that makes authors feel bad about bans" is way too far.
On the other hand, we should expect that the first people to speak out against someone will be the most easily activated (in a neurological sense): because of past trauma, or additional issues with the focal person, or having a shitty year. Speaking out is partially a function of pain level, and pain(legitimate grievance + illegitimate grievance) > pain(legitimate grievance). That doesn't mean there isn't a legitimate grievance large enough to merit concern.
This seems like an argument for deleting TurnTrout's post, but not the original comment, which was on topic.
Does it matter if it's a conscious strategy? His internal experience might be "this is dumb and annoying", but unless that feeling is uncorrelated with how a post reflects on him, the effect will still be to distort the information people present about him in his posts.
Huh, METR finds AI tools slow devs down even though the devs feel sped up.