This is a post from The Last Rationalist, which asks, generally, "Why do rationalists have such a hard time doing things, as a community?" Their answer is that rationality selects for a particular smart-but-lazy archetype, who values solving problems with silver bullets and abstraction, rather than hard work and perseverance. This archetype is easily distractible and does not cooperate with other instances of itself, so an entire community of people conforming to this archetype devolves into valuing abstraction and specialized jargon over solving problems.

I’m sort of confused about ‘slack’ not only getting bundled into the cluster of concepts here, but bundled so hard that the post was named after it.

I agree with the general claim about LW selection effects, but slack just seems like something that a) most people need, and b) broader societal forces are systematically destroying.

I think it's a sort of double entendre? It's also possible the author didn't actually read Zvi's post in the first place. This is implied by the following:

Slack is a nerd culture concept for people who subscribe to a particular attitude about things; it prioritizes clever laziness over straightforward exertion and optionality over firm commitment.

In the broader nerd culture, slack is a thing from the Church of the SubGenius, where it means something more like a kind of adversarial zero-sum fight over who has to do all the work. In that context, the post title makes total sense.

For an example of this, see: https://en.wikipedia.org/wiki/Chez_Geek

Huh, that might make sense. Still seems a weird thing to name the post.

(I don't think slack is at all about optionality over firm commitment – rather, you need to have slack to be able to make the commitments that you want to make.)

This archetype is easily distractible and does not cooperate with other instances of itself, so an entire community of people conforming to this archetype devolves into valuing abstraction and specialized jargon over solving problems.

Obviously there are exceptions to this, but as a first pass this seems pretty reasonable. For example, one thing I feel is going on with a lot of posts on LessWrong and posts in the rationalist diaspora is an attempt to write things the way Eliezer wrote them, specifically with a mind to creating new jargon to tag concepts.

My suspicion is that people see that Eliezer gained a lot of prestige via his writing, notice that this is one of the things he does in his writing (naming concepts with unusual names), and make the (reasonable) assumption that if they do something similar, maybe they will gain prestige from their writing targeted at other rationalists.

I don't have a lot of evidence to back this up, other than to say I've caught myself having the same temptation at times, and I've thought a bit about this common pattern I see in rationalist writing and tried to formulate a theory of why it happens that accounts not only for why we see it here but also why I don't see it as much in other writing communities.

My suspicion is that people see that Eliezer gained a lot of prestige via his writing ... and make the (reasonable) assumption that if they do something similar, maybe they will gain prestige from their writing targeted at other rationalists.

I'd like to emphasize the idea "people try to copy Eliezer", separately from the "naming new concepts" part.

It was my experience from Mensa that highly intelligent people are often too busy participating in pissing contests to actually win at life by engaging in lower-status behaviors such as cooperation or hard work. And, Gods forgive me, I believed we (the rationalist community) were better than that. But perhaps we are just doing it in a less obvious way.

Trying to "copy Eliezer" is a waste of resources. We already have Eliezer. His online articles can be read by any number of people; at least this aspect of Eliezer scales easily. So if you are tempted to copy him anyway, you should consider the hypothesis that you actually try to copy his local status. You have found a community where "being Eliezer" is high-status, and you are unconsciously pushed towards increasing your status. (The only thing you cannot copy is his position as a founder. To achieve this, you would have to rebrand the movement, and position yourself in the new center. Welcome, post-rationalists, et al.)

Instead, the right thing to do is:

  • cooperate with Eliezer, especially if your skills complement his. (The question is how good Eliezer himself is at this kind of cooperation. I am on the opposite side of the planet, so I have no idea.) Simply put: anything Eliezer needs to get done but doesn't have a comparative advantage at, if you do it for him, you free his hands and head to do the things he actually excels at. Yes, this can mean doing low-status things. Again, the question is whether you are optimizing for your status, or for something else.
  • try alternative approaches, where the rationalist community seems to have blind spots. Such as Dragon Army, which really challenged the local crab mentality. My great wish is to see other people build their own experiments on top of this one: to read Duncan's retrospective, to make their own idea of "we want to copy this, we don't want to copy that, and we want to introduce these new ideas", and then go ahead and actually do it. And post their own retrospective, etc. So that finally we may find a working model of a rationalist community that actually wins at life, as a community. (And of course, anyone who tries this has to expect strong negative reactions.)

I strongly suspect that the internet itself (the fact that rationalists often coordinate as an online community) is a negative pressure. The internet is inherently biased in favor of insight porn. Insights get "likes" and "shares"; verbal arguments receive fast rewards. Actions in the real world usually take a lot of time, and thus don't make for good online conversation. (Imagine that every few months you acquire one boring habit that makes you more productive, and as a cumulative result of ten such years you achieve your dreams. Impressive, isn't it? Now imagine a blog that every few months publishes a short article about the new boring habit. Such a blog would be a complete failure.) I would expect rationalists living close to each other, and thus mostly interacting offline, to be much more successful.

The only thing you cannot copy is his position as a founder. To achieve this, you would have to rebrand the movement, and position yourself in the new center. Welcome, post-rationalists, et al.

The term post-rationalist was popularized by the diaspora map and not by people who see themselves as post-rationalists and wanted to distinguish themselves.

To the extent that there's a new person who has a similar founder position right now, that's Scott Alexander and not anybody who self-identifies as post-rationalist.

The term post-rationalist was popularized by the diaspora map and not by people who see themselves as post-rationalists and wanted to distinguish themselves.

Here's a 2012 comment (predating the map by two years) in which someone describes himself as a post-rationalist to distinguish himself from rationalists: https://www.lesswrong.com/posts/p5jwZE6hTz92sSCcY/son-of-shit-rationalists-say#ryJabsxh7m9TPocqS

The post rats may not have popularised the term as well as Scott did, but I think that's mostly just because Scott is way more popular than them.

To the extent that there's a new person who has a similar founder position right now, that's Scott Alexander and not anybody who self-identifies as post-rationalist.

Well, the claim was about what the post rats were (consciously or not) trying to do, not about whether they were successful.

And I think Scott has rebranded the movement, in a relevant sense. There's a lot of overlap, but SSC is its own thing, with its own spinoffs. E.g. I believe most SSC readers don't identify as rationalists.

("Rebranding" might be better termed "forking".)

Will Newsome did use the term before, but I'm not aware of it being used to the extent that it's worthwhile to speak of him as someone who planned on being seen as a founder. If that was his intention, he would have written a lot more outside of IRC.

I agree with a bunch of these concerns. FWIW, it wouldn't surprise me if the current rationalist community still behaviorally undervalues "specialized jargon". (Or, rather than jargon, concept handles a la https://slatestarcodex.com/2014/03/15/can-it-be-wrong-to-crystallize-patterns/.) I don't have a strong view on whether rationalists undervalue or overvalue this kind of thing, but it seems worth commenting on since it's being discussed a lot here.

When I observe the reasons people ended up 'working smarter' or changing course in a good way, they often involve a new lens those people started applying to something. I think one of the biggest problems the rationalist community faces is a lack of dakka and a lack of lead bullets. But I guess I want to caution against treating abstraction and execution as too much of a dichotomy, such that we have to choose between "novel LW posts are useful and high-status" and "conscientiousness and follow-through are useful and high-status" and see-saw between the two.

The important thing is cutting the enemy, and I think the kinds of problems that rationalists are in an especially good position to solve require individuals to exhibit large amounts of execution and follow-through while (on a timescale of years) doing a large number of big and small course-corrections to improve their productivity or change their strategy.

It might be that we're doing too much reflection and too much coming up with lenses. It might also be that we're not doing enough grunt work and not doing enough reflection and lenscrafting. Physical tasks don't care whether we're already doing an abnormal amount of one or the other; the universe just hands us problems of a certain difficulty, and if we fall short on any of the requirements then we fail.

It might also be that this varies by individual, such that it's best to just make sure people are aware of these different concerns so they can check which holds true in their own circumstance.

I've thought a bit about this common pattern [name concepts with unusual names] I see in rationalist writing and tried to formulate a theory of why it happens that accounts not only for why we see it here but also why I don't see it as much in other writing communities.

I see the pattern a lot in "spiritual" writings. See, for example, the "Integral Spirituality" being discussed in another recent post.

I have two thoughts on this.

One is that different spiritual traditions have their own deep, complex systems of jargon that sometimes stretch back thousands of years through multiple translations, schisms, and acts of syncretism. So when you first encounter one, it can feel like it's a lot and it's new, and why can't these people just talk normally.

Of course, most LW readers live in a world full of jargon even before you add on the LW jargon, much of it from STEM disciplines. People from outside that cluster feel much the same way about STEM jargon as the average LW reader may feel about spiritual jargon. I point this out merely because I realized, when you brought up the spiritual example, that I hadn't given a full account of what's different about rationalists; maybe it's that there's a tendency to make new jargon even when a literature search would reveal that existing jargon exists.

Which is relevant to your point and to my second thought, which is that you are right: many things we might call "new age spirituality" have the exact same jargon-coining pattern in their writing as rationalist writing does, with nearly every author striving to elevate some metaphor to the level of a word so that it can become part of a wider shared approach to ontology.

This actually seems to suggest that my story is too specific and that pointing to Eliezer's tendency to do this as a cause is maybe unfair: it may be a tendency that exists within many people, and there may be something about the kind of people, or the social incentives, shared by rationalists and new age spiritualists that produces this behavior.

I point this out merely because I realized, when you brought up the spiritual example, that I hadn't given a full account of what's different about rationalists; maybe it's that there's a tendency to make new jargon even when a literature search would reveal that existing jargon exists.

I don't think this is different for STEM, or cognitive science, or self-help. Having studied both CS and math, and some physics in my off-time, I can say that everyone constantly invents new names for all the things. To give you a taste, here is the first paragraph of the Wikipedia article on Tikhonov regularization:

Tikhonov regularization, named for Andrey Tikhonov, is the most commonly used method of regularization of ill-posed problems. In statistics, the method is known as ridge regression, in machine learning it is known as weight decay, and with multiple independent discoveries, it is also variously known as the Tikhonov–Miller method, the Phillips–Twomey method, the constrained linear inversion method, and the method of linear regularization. It is related to the Levenberg–Marquardt algorithm for non-linear least-squares problems.

You will find the same pattern of lots of different names for the exact same thing in almost all statistical concepts in the Wikipedia series on statistics.
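
To make the "many names, one method" point concrete, here is a minimal sketch (using only NumPy and made-up toy data, so the variable names and numbers are illustrative rather than taken from any source) showing that the statistician's ridge regression and the numerical analyst's Tikhonov regularization are literally the same computation: least squares with an added L2 penalty.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(50, 5))                      # toy design matrix
b = A @ np.array([1.0, 0.0, -2.0, 0.5, 3.0]) + rng.normal(scale=0.1, size=50)
lam = 0.1                                         # regularization strength

# "Tikhonov regularization" / "ridge regression" / "weight decay":
#   minimize ||A x - b||^2 + lam * ||x||^2
# Closed-form solution: x = (A^T A + lam * I)^(-1) A^T b
x_ridge = np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]), A.T @ b)

# Equivalent view: ordinary least squares on an augmented system,
# which is how the "linear regularization" literature often writes it.
A_aug = np.vstack([A, np.sqrt(lam) * np.eye(A.shape[1])])
b_aug = np.concatenate([b, np.zeros(A.shape[1])])
x_aug, *_ = np.linalg.lstsq(A_aug, b_aug, rcond=None)

assert np.allclose(x_ridge, x_aug)                # same answer, different names
```

In deep learning, essentially the same L2 penalty reappears under the name "weight decay"; the method doesn't change, only the vocabulary around it.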

The color coding that was discussed there isn't anything that the integral community came up with. Wilber looked around for existing paradigms of adult development and picked the one he liked best and took their terms.

I understand what Wilber means when he says blue because I studied spiral dynamics in a context outside of Wilber's work. It's similar to when rationalists take names of biases from the psychological literature that might not be known by wider society. It's quite different from EY making up new terms.

Wilber's whole idea about being integral is to take existing concepts from other domains.

(note: I may be part of the problem - I consider myself a subscriber to and student of the rationalist philosophy, but not necessarily a member of whatever is meant by "rationalist community". I don't know if your definition includes me or not.)

This topic might benefit from some benchmarking and comparison with other "communities". Which ones seem more effective than rationalists? Which ones seem less? I've been involved with a lot of physical community groups (homeowner associations, local charities, etc.), and in almost no case would I say the community is effective on its own - some have very effective leaders who manage to get a lot of impact out of the community.

Cooperation needs trust. Many rationalists are quite open towards people who are a bit strange and who would be rejected in many social circles. I talked with multiple people who think that this creates a problem of manipulative people entering the community (especially the Bay Area community) and trying to get other people to help them for their own ends. In an environment like that, it makes sense that members of the community are less willing to share resources with other members and that there is less cooperation.

Seems to me that we have members at both extremes. Some of them drop all caution the moment someone else calls themselves a rationalist. Some of them freak out when someone suggests that rationalists should do something together, because that already feels too cultish to them.

My personal experience is mostly with the Vienna community, which may be unusual, because I haven't seen either extreme there. (Maybe I just didn't pay enough attention.) I learn about the extremes on the internet.

I wonder what the distribution would be in the Bay Area. Specifically, on one axis I would like to see people ranked from "extremely trusting" to "extremely mistrusting", and on the other axis, how deeply those people are involved with the rationalist community. That is, whether the extreme people are in the center of the community, or somewhere on the fringe.

I don't think it's well modeled as a single dimension of trust. It feels to me like there's something like shallow trust, where people are quite open to cooperating on a low level but quite unwilling to commit to bigger projects together.

I think I get what you mean.

Maybe this is somehow related to "openness to experience" (and/or autism). If you are willing to interact with weird people, you can learn many interesting things most people will never hear about. But you are also more likely to get hurt in a weird way, which is probably the reason most people stay away from weird people.

And as a consequence, you develop some defenses, such as allowing interaction only to some specific degree, and no further. Instead of filtering for safe people, you filter for safe circumstances. Which protects you, but also cuts you off from possible gains, because in reality some people are more trustworthy than others, and trustworthiness correlates negatively with some types of weirdness.

Like, instead of "I would probably be okay inviting X and Y to my home, but I have a bad feeling about inviting Z to my home", you are likely to have a rule like "meeting people in a cafeteria is okay, inviting them home is taboo". Similarly, "explaining concepts to someone is okay, investing money together is not".

So on one hand, you are willing to tell a complete stranger in a cafeteria the story of your religious deconversion and your opinion on Boltzmann brains (which would be shocking to average people); but you will probably never spend a vacation with the people who are closest to you in intellect and values (which average people do all the time).

Yes, I think that's roughly where I'm pointing.

The silver bullet is to use a lot of lead bullets. Big if true, and appealingly elegant...