Mass_Driver, thank you so much for taking the time to explain your concerns. I've been a lurker on LessWrong for a long time, and while I don't have any credentials in AI, I've spent enough time observing trends in the EA/Rationalist sphere to have some concerns about "intellectual inbreeding," as it were. The vague "holistic" criteria you describe for awarding scarce resources seem like a pretty typical way for a relatively homogeneous elite to gatekeep while maintaining plausible deniability.
I'm a pretty low-functioning EA (white, male, nerdy, socially awkward with normies, pleased with the idea that I can have high impact by just donating without having to do much legwork myself), so I have absolutely no clue how to fix this problem. I'm only breaking my long silence on LessWrong to make it clear to people with real power that it looks really bad to me, a person on the outskirts, when you give less attention to this sequence than to yet another effort to elicit misbehavior in some reasoning model.
EDIT: Not that I think the efforts to elicit misbehavior in a reasoning model are unimportant! Rather, I just think the OP has given some pretty compelling arguments for why we're a lot closer to saturation on such efforts than we are on AI governance work.