How do we distinguish between Inner Rings and Groups of Sound Craftsmen?
The essay's answer to this is solid, and has steered me well:
In any wholesome group of people which holds together for a good purpose, the exclusions are in a sense accidental. Three or four people who are together for the sake of some piece of work exclude others because there is work only for so many or because the others can’t in fact do it. Your little musical group limits its numbers because the rooms they meet in are only so big. But your genuine Inner Ring exists for exclusion. There’d be no fun if there were no outsiders. The invisible line would have no meaning unless most people were on the wrong side of it. Exclusion is no accident; it is the essence.
My own experience supports this being the crucial difference. I've encountered a few groups where the exclusion is the main purpose of the group, *and* the exclusion is based on reasonably good judgments of competence. These groups strike me as pathological and corrupting in the way that Lewis describes. I've also encountered many groups where exclusion is only "accidental", and also the people are very bad at judging competence. These groups certainly have their problems, but they don't have the particular issues that Lewis describes.
I've lived in both rationalist and non-rationalist group houses and observed a bunch more. In my experience, there are special upsides and downsides that come with ideological/subcultural group houses that you won't find in e.g. a house formed by a regular friend group or a bunch of people thrown together by Craigslist ads. Those features appear pretty similar whether the subculture is rationalists or animal rights activists or an artistic scene or whatever, and I've seen stories similar to the OP's from several different subcultures. I think communities like these are net positive overall and I expect I'll be living in group houses in the future, but some people absolutely do get burned and it's worth being especially careful because of how entangled the social scene is, even beyond the regular roommate issues.
What's worked for me isn't focusing on agency per se. I've had more success from focusing on my deeper desires (for which "agency" is often instrumental) and figuring out how to get them. Sometimes those plans run into psychological barriers. When that happens, I'll do whatever it takes to overcome or dissolve those barriers—rationality techniques, therapy techniques, pure willpower, esoteric philosophy, etc. After repeating this a bunch I ended up more proactive than before, because there were fewer mental barriers between me and taking the "agenty" action when it happened to be a good idea.
Like, I wasn't thinking "I should be more agenty, I'll go [organize a speaker series | raise tens of thousands of dollars for weirdo projects | change my interpersonal demeanor | solve an intellectual problem that no one I know can answer] to practice agency." Rather, I found myself in situations where things like that were good ways to get what I wanted but I was too averse to actually do it, then wrestled with my soul until I could do it anyway. (Sometimes this step takes two hours, sometimes it takes six months.) Each step unlocked more of a general willingness to do similar things, not just the narrow ability to do that one thing.
Of the people I know who seriously follow an approach like this for at least a couple years, about 50% wind up notably more effective than their peers and about 10% wind up insane.
In general I approve of the impulse to copy social technology from functional parts of society, but I really don't think contemporary academia should be copied by default. Frankly I think this site has a much healthier epistemic environment than you see in most academic communities that study similar subjects. For example, a random LW post with >75 points is *much* less likely to have an embarrassingly obvious crippling flaw in its core argument, compared to a random study in a peer-reviewed psychology journal.
Anonymous reviews in particular strike me as a terrible idea. Bureaucratic "peer review" in its current form is relatively recent for academia, and some of academia's most productive periods were eras where critiques came with names attached, e.g. the physicists of the early 20th century, or the Republic of Letters. I don't think the era of Elsevier journals with anonymous reviewers is an improvement—too much unaccountable bureaucracy, too much room for hidden politicking, not enough of the purifying fire of public argument.
If someone is worried about repercussions, which I doubt happens very often, then I think a better solution is to use a new pseudonym. (This isn't the reason I posted my critique of an FHI paper under the "David Hornbein" pseudonym rather than under my real name, but it remains a proof of possibility.)
Some of these ideas seem worth adopting on their merits, maybe with minor tweaks, but I don't think we should adopt anything *because* it's what academics do.
In the spirit of "how could this post be improved, such that it makes sense to include in a 'Best Of', or otherwise enter into Lesswrong's longterm memory", my suggestion would be "publish a summary version which is just an abridgment of the current piece's introduction plus maaaybe a few selected paragraphs from deeper in, probably no need to bother writing any new words."
I just looked at this for the review, and the part which some people report finding useful is the brief description of the concept at the very beginning. The bulk of the post is a freeform, rambling exploration of the concept and its implications which I mostly couldn't bring myself to focus on; this exploratory style seems totally appropriate for a personal blog post, but it's not the sort of thing I'd want to read if I were looking back at a curated list of the best stuff from 2019. For this reason, I wouldn't want this post included in the 2019 highlights.
As has been mentioned elsewhere, this is a crushingly well-argued piece of philosophy of language and its relation to reasoning. I will say this post strikes me as somewhat longer than it needs to be, but that's also my opinion on much of the Sequences, so it is at least traditional.
Also, this piece is historically significant because it played a big role in litigating a community social conflict (which is no less important for having been (being?) mostly below the surface), and set the stage for a lot of further discussion. I think it's very important that "write a nigh-irrefutable argument about philosophy of language, in order to strike at the heart of the substantive disagreement which provoked the social conflict" is an effective social move in this community. This is a very unusual feature for a community to have! Also it's an absolutely crucial feature for any community that aspires to the original mission of the Sequences. I don’t think it’s a coincidence that so much of this site’s best philosophy is motivated by efforts to shape social norms via correct philosophical argument. It lends a sharpness and clarity to the writing which is missing from a lot of the more abstract philosophizing.
This sort of thing is exactly what Less Wrong is supposed to produce. It's a simple, straightforward and generally correct argument, with important consequences for the world, which other people mostly aren't making. That LW can produce posts like this—especially with positive reception and useful discussion—is a vindication of this community's style of thought.
This isn't exactly a confusion about the model itself, but this seems like the right place to ask this question:
What areas of the world are people able to predict better once they've internalized the "simulacrum levels" model? Like, if I go through all the effort of learning which statements and behaviors are "level 1" or "level 3" and what principles go into those distinctions and how the levels relate to each other, then in what way will I be better able to navigate the world?
I ask because this is a very esoteric theory which I only partially understand after ~a couple hours of serious effort, and some people clearly think there's a big payoff for really internalizing it. However, so far the justification I've seen people claim for the payoff has always been in terms of subjective insight and the feeling of understanding, not in terms of improved ability to navigate social situations or predict the trajectories of groups or avoid dangerous people, or any similar feats which I might expect a person could perform if they had a true theory in this area.
In other words, what's the argument that these beliefs pay rent?
Huh, in the past I've used Calendly pretty heavily from both ends, and never experienced anything like the issues you describe.
Having written this out, I may start pinging people for confirmation after filling out their calendlys...
Probably a good idea. Still, I suspect this will only partially solve your problem, considering what seems to be the attitude of the people you're scheduling with.