You are not too "irrational" to know your preferences.
Epistemic Status: 13 years working as a therapist for a wide variety of populations, 5 of them working with rationalist and EA clients. 7 years teaching and directing at over 20 rationality camps and workshops. This is an extremely short and colloquially written form of points that could be expanded to fill a book, and there is plenty of nuance to practically everything here, but I am extremely confident of the core points in this frame, and have used it to help many people break out of or avoid manipulative practices.

TL;DR: Your wants and preferences are not invalidated by smarter or more "rational" people's preferences. What feels good or bad to someone is not a monocausal result of how smart or stupid they are.

Alternative titles to this post are "Two people are enough to form a cult" and "Red flags if dating rationalists," but this stuff extends beyond romance and far beyond LW-Rationalism. I saw forms of it as a college student among various intellectual subcultures. I saw forms of it growing up around non-intellectuals who still ascribed clear positives and negatives to the words "smart" and "stupid." I saw forms of it as a therapist working with people from a variety of nationalities. And of course, my various roles in the rationalist and EA communities have exposed me to a number of people who have been subjected to some form of it by friends, romantic partners, or family... hell, most of the time I've heard it coming from someone's parents.

What I'm here to argue against is, put simply, the notion that what feels good or bad to someone is a monocausal result of how smart or stupid they are. There are a lot of false beliefs downstream of that notion, but the main one I'm focusing on here is the idea that your wants or preferences might be invalid because someone "more rational" than you said so.

Because while I've taught extensively about how to defend against "dark arts" emotional manipulation in a variety of flavors, I especially dislike seeing
There are a lot of things I could critique in this paper, but other people are already doing that, so I'm just going to bring up the one bit I don't see others mentioning.
Where is the calculation for potential biotech advancements as an alternative route to hitting the immortality event horizon in the next 20, 30, or 40 years?
You meticulously model eight scenarios of safety progress rates, three discount rates, multiple CRRA parameters, safety-testing POMDPs... but treat the single most reasonable alternative pathway to saving people's lives besides "build ASI as soon as possible and keep it in a box until it's safe" (?!) as a sensitivity check in Tables 10-11 rather than integrating it...
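To make concrete what "integrating it rather than a sensitivity check" could look like, here is a minimal toy sketch of treating biotech longevity as a first-class pathway alongside the ASI pathway in a discounted CRRA framework. This is not the paper's actual model: the function names, parameter values, and probabilities below are all illustrative placeholders I made up for the shape of the comparison, nothing more.

```python
import numpy as np

def crra_utility(c, gamma):
    """CRRA utility of per-period welfare c with risk-aversion gamma."""
    if np.isclose(gamma, 1.0):
        return np.log(c)
    return (c ** (1.0 - gamma) - 1.0) / (1.0 - gamma)

def pathway_value(p_success, p_catastrophe, arrival_year,
                  rho=0.02, gamma=1.5, c_baseline=2.0, c_cured=4.0, horizon=200):
    """Toy discounted expected utility of a pathway that, if it succeeds,
    lifts welfare from c_baseline to c_cured from arrival_year onward,
    but may instead end in catastrophe (zero utility thereafter)."""
    total = 0.0
    for t in range(horizon):
        disc = np.exp(-rho * t)          # continuous-time discounting at rate rho
        if t < arrival_year:
            u = crra_utility(c_baseline, gamma)
        else:
            u = (p_success * crra_utility(c_cured, gamma)
                 + (1.0 - p_success - p_catastrophe) * crra_utility(c_baseline, gamma)
                 + p_catastrophe * 0.0)  # catastrophe: no further utility
        total += disc * u
    return total

# Purely illustrative numbers -- none of these come from the paper.
asi_now = pathway_value(p_success=0.6, p_catastrophe=0.2, arrival_year=10)
biotech = pathway_value(p_success=0.4, p_catastrophe=0.0, arrival_year=35)
print(f"'Build ASI now' pathway:   {asi_now:.2f}")
print(f"Biotech-longevity pathway: {biotech:.2f}")
```

The point of the sketch is only that the biotech route can sit in the same expected-utility comparison as the ASI route, with its own arrival time and (much lower) catastrophe risk, rather than being relegated to a robustness table.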