You are probably not a good alignment researcher, and other blatant lies
When people talk about research ability, a common meme I keep hearing goes something like this:

* Someone who would become a great alignment researcher will probably not be stopped by Confusions about a Thorny Technical Problem X that's Only Obvious to Someone Who Did the Right PhD.
* Someone who would become a great alignment researcher will probably not have Extremely Hairy But Also Extremely Common Productivity Issue Y.
* Someone who would become...great...would probably not have Insecurity Z that Everyone in the Audience Secretly Has.

What is the point of telling anyone any of these? If I were being particularly uncharitable, I'd guess the most obvious explanation is that it's a kind of barely-acceptable status play, like the budget version of saying "Are you smarter than Paul Christiano? I didn't think so." Or maybe I'm feeling a bit more generous today, so I'll think that it's Wittgenstein's Ruler: a convoluted call for help, pointing out the insecurities that the speaker cannot admit to themselves. But this is LessWrong and it's not customary to be so suspicious of people's motivations, so let's assume it's just an honest and pithy way of communicating the boundaries of hard-to-articulate internal models.

First of all, what model? Most people here believe some form of biodeterminism: that we are not born tabula rasa, that our genes influence the way we are, that the conditions in our mother's womb can and do often snowball into observable differences as we grow up. But the thing is, these facts do not constitute a useful causal model of reality. IQ, aka (a proxy for) the most important psychometric construct ever discovered and most often the single biggest predictor of outcomes in a vast number of human endeavours, is not a gears-level model.

Huh? Suppose it were, and it were the sole determinant of performance in any mentally taxing field. Take two mathematicians with the exact same IQ. Can you tell me who would go on to become a better mathematician?
Set theory is the prototypical example I usually hear about. From Wikipedia: