Often, people like that will respond well to criticism about X and Y but not about Z.
One (dark-artsy) aspect to add here: the first time you ask somebody for criticism, you're managing more than your general identity; you're also managing your interaction norms with that person. You're giving them permission to criticize you (or sometimes, to think critically about you for the first time), creating common knowledge that there exists a perspective from which it's okay/expected for them to do that. This plays with the charity they normally extend to you, which might mean your words and plans will be given less attention than before, even if there's no specific criticism in their head. This is especially relevant for low-legibility/fluid hierarchies, which might collapse and impede functioning from the resulting misalignment, perhaps not unlike your own fear of being "crushed", but at the org level.
Although it's usually clear that you'd want to get the feedback rather than manage this (at least, I think so), it's worth noticing as one kind of anxiety surrounding criticism. It's separate from any narcissistic worries about status; it can be a real systemic worry even when you're acting prosocially.
Incidentally Eliezer, is this really worth your time?
This comment might have caused a tremendous loss of value if Eliezer had taken Marcello's words seriously here and stopped talking about his metaethics. As Luke points out here, despite all the ink spilled, very few seem to have gotten the point (at least, from reading him alone).
I've personally had to re-read it many times over, years apart even, and I'm still not sure I fully understand it. It's also been the most personally valuable sequence, the sole cause of significant fundamental updates. (The other sequences seemed mostly obvious --- which made them more suitable as just incredibly clear references, sometimes if only to send to others.)
I'm sad that there isn't more.
I've read/heard a lot about double crux but never had the opportunity to witness it.
EDIT: I did find one extensive example, but this would still be valuable since it was a live debate.
This one? From the CT-thesis section in A first lesson in meta-rationality.
the objection turns partly on the ambiguity of the terms “system” and “rationality.” These are necessarily vague, and I am not going to give precise definitions. However, by “system” I mean, roughly, a set of rules that can be printed in a book weighing less than ten kilograms, and which a person can consciously follow.11 If a person is an algorithm, it is probably an incomprehensibly vast one, which could not be written concisely. It is probably also an incomprehensibly weird one, which one could not consciously follow accurately. I say “probably” because we don’t know much about how minds work, so we can’t be certain.
What we can be certain of is that, because we don’t know how minds work, we can’t treat them as systems now. That is the case even if, when neuroscience progresses sufficiently, they might eventually be described that way. Even if God told us that “a human, reasoning meta-systematically, is just a system,” it would be useless in practice. Since we can’t now write out rules for meta-systematic reasoning in less than ten kilograms, we have to act, for now, as if meta-systematic reasoning is non-systematic.
Ideally, I'd make another ninja-edit that would retain the content in my post and the joke in your comment in a reflexive manner, but I am crap at strange loops.
Cold Hands Fallacy/Fake Momentum/Null-Affective Death Stall

Although Hot Hands has been the subject of enough controversy to perhaps no longer be termed a fallacy, there is a sense in which I've fooled myself before with a fake momentum. I mean when you change your strategy using a faulty bottom line: incorrectly updating on your current dynamic.
As a somewhat extreme but real example from my own life: when filling out answer sheets for multiple-choice questions (with negative marks for incorrect responses) as a kid, I'd sometimes get excited about having marked almost all of the questions near the end, and then completely, obviously, irrationally decide to mark them all. This came from some completion urge, and from the positive affect around having filled in most of them. It involved a fair bit of self-deception to carry out, since I was aware at some level that I'd left some of them unanswered because I was in fact unsure, and to mark them I had to feel sure.

Now, sure, you could make the case that there are times when you're thinking more clearly, or when you know the subject, where you can correctly infer this about yourself and then rationally ramp up your confidence (even if only slightly). But this wasn't one of those cases; it was the simple fact that I felt great about myself.

Anyway, the real point of this post is that there's a flipside (or straightforward generalization) of this: the fake inertia applies to subjects at rest as well as in motion. There's a similar tendency to not feel like doing something because you don't have that dynamic right now, hence all the clichés of the form "the first blow is half the battle". In a sense, that's all I'm communicating here, but seeing it as a simple irrational mistake (as in the example above) really helped me get over this without drama: just remind yourself of the bottom line and start moving in the correct flow, ignoring the uncalibrated halo (or lack thereof) of emotion.
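To make the bottom line concrete, here's a minimal expected-value check for the guessing decision, under hypothetical marking numbers (the comment doesn't give the actual scheme; +1/-0.5 with four options is just an illustration):

```python
# Hypothetical marking scheme (not from the comment): +1 for a correct
# answer, -0.5 for a wrong one, 4 options per question.
reward, penalty, options = 1.0, -0.5, 4

def ev(p_correct: float) -> float:
    """Expected marks from answering when your chance of being right is p_correct."""
    return p_correct * reward + (1 - p_correct) * penalty

# A blind guess has negative expected value under these numbers:
print(ev(1 / options))  # -0.125

# Break-even confidence: answer only when you're at least this sure.
break_even = -penalty / (reward - penalty)
print(break_even)  # ~0.333
```

The point of the anecdote in these terms: feeling great about having filled in most of the sheet doesn't move your actual p above the break-even line; only real evidence (elimination, knowing the subject) does.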
There's a whole section on voting in the LDT For Economists page on Arbital. Also see the one for analytic philosophers, which has a few other angles on voting.
From what I can tell from your other comments on this page, you might already have internalized all the relevant intuitions, but it might be useful anyway. Superrationality is also discussed.
Sidenote: I'm a little surprised no one else mentioned it already. Somehow Arbital posts by Eliezer aren't considered as canonical as the Sequences; maybe it's the structure (rather than just the content)?
I usually call this lampshading, and I'll link this comment to explain what I mean. Thanks!
Thank you for this comment. I went through almost exactly the same thing, and might well have shelved it at the "I am really confused by this post" stage had I not seen someone well-known in the community struggle with it and get through it.
My brain especially refused to read past the line that said "pushing it to 50% is like throwing away information": Why would throwing away information correspond to the magic number 50%?! Throwing away information brings you closer to maxent, so if true, what is it about the setup that makes 50% the unique solution, independent of the baseline and your estimate? That is, what is the question?
I think it's this: in a world where people can report the probability for a claim or the negation of it, what is the distribution of probability-reports you'd see?
By banning one side of it, as Rafael does, you push the reports toward being informative. Anyway, this kind of thinking makes it seem like a fact about this flipping trick rather than something fundamental to probability theory. I wonder if there are more such tricks, or bits of actual psychology to adjust for, that would give a different answer.
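A minimal sketch of how I'm picturing the setup (my reading, not spelled out in the post): a forecaster holds some credence p in a claim, but is free to report either the claim or its negation. If the convention is to state whichever phrasing has the higher probability, every report lands in [0.5, 1], which is why 50% is the distinguished least-informative point:

```python
import random

random.seed(0)

def report(p: float) -> float:
    """Report whichever side of the claim has probability >= 0.5."""
    return max(p, 1 - p)

# Draw underlying credences uniformly, then apply the flipping convention.
credences = [random.random() for _ in range(10_000)]
reports = [report(p) for p in credences]

print(min(reports) >= 0.5)          # True: one side of 50% is "banned"
print(sum(reports) / len(reports))  # mean near 0.75 for uniform credences
```

Under this convention, a report of exactly 50% is the one value carrying no directional information, which is one way to see why rounding an estimate to 50% reads as "throwing away information".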
While you're technically correct, I'd say it's still a little unfair (in the sense of connoting "haha, you call yourself a rationalist, how come you're failing at akrasia").
Two assumptions that can, I think you'll agree, take away from the force of "akrasia is epistemic failure":
...then on average you'd see akrasia over-represented in rationalists. Add to this the fact that akrasia itself makes it harder to manually aim your rationality skills at what you want. That can leave it stable even under very persistent efforts.