This essay began as part one of a longer piece.  Part one is standalone and "timeless." Part two is focused on the local dynamics of the EA/rationality/longtermist communities and LessWrong in November of 2021.  Following wise advice from Zack_M_Davis, I've split them into two separate posts.  Nevertheless, I recommend that people intending to read both seriously consider reading them back-to-back, so that the content of this one is fresh in the mind.  It's both something of a prerequisite and also relevantly context-setting.


Concentration of force is a military concept (sometimes referred to as "mass").  It used to be concentration of forces, until innovations like machine guns and cruise missiles made gathering all of your actual personnel together into more of a liability.

The idea is simple.  Essentially, there is a difference between relevant moments and irrelevant moments.  Battles and non-battles, moments of engagement and moments between engagements.

At each relevant moment, you want to project locally superior or overwhelming force. Perhaps this means having the most soldiers/guns/tanks/planes actually present, or perhaps this just means having the right missiles pointed in the right directions.

If you are good at coordination and maneuver, you can concentrate force in most or every engagement, and consistently win even against an overall larger or more powerful opponent.  This is how guerrilla warfare works—you choose the time and place of conflict in order to ensure that you outnumber the enemy in each specific encounter, and you fade into the mists before their reinforcements arrive.

Red wins each of the depicted engagements handily.

I claim that the Grey Tribe generally, and rationalists/longtermists/EAs more specifically, and LessWrong the website and community even more specifically, are systematically and spectacularly failing at concentration of force. That none of those groups puts anything like sufficient strategic energy into ensuring that critical mass coheres at crucial moments, and that each would benefit from optimizing their ability to do so quickly, reliably, and effectively, and from thinking in terms of concentration of force as a matter of habit.

I anticipate a (reasonable) objection along the lines of "mistake theory rather than conflict theory!" or "generous tit-for-tat rather than vengeful tit-for-tat!" and I assert a further subclaim that the above advice is every bit as relevant for nonviolent and nonconfrontational frames.  Actual conflicts are a vanishingly small subset of the times when concentration of force is a relevant principle; the "force" in question could just as easily be e.g. "calm, generous, charitable, level-headed, clear-minded, skilled communicators arriving at exactly the moment when things were about to become wastefully contentious and adversarial."

(Indeed, that's a preview of the recommendation I have for LessWrong specifically.)

TAPs: A Motivating Example

There's a picture I tend to draw quite frequently when giving people crash courses in CFAR-esque rationality, and it looks like this:

The idea behind the picture is that you're trucking along, living a generally good and happy life, and then something happens, and you find yourself in the sad timeline. You ate an entire package of Oreos, despite intending to lose weight.  You got in another fight with your romantic partner, despite really not wanting to.  You road raged, you failed to finish the presentation before the deadline, you spent all evening on Reddit instead of thinking about your research, you somehow never called them back and now it's too awkward.

The point of showing people this picture is to draw their attention to two key facts: 

  1. For most goals and values, there actually exists a moment-of-departure between a path consistent with the positive outcome and a path consistent with the negative one. There is an identifiable point at which one of these outcomes becomes distinctly more likely than the other.
  2. The paths tend to get farther and farther apart over time.  It's rare that one instantaneously and unrecoverably leaps from 🙂 to 🙁; most of the time, there is a shift in trajectory and one's prognosis worsens as continued-progress-along-the-wrong-path compounds.

You could think of the distance between the dotted and solid lines as a measure of the total effort required to make it back to the better timeline.  The quicker you notice that you're on the wrong track, the shorter the distance back to the right one.  The less time you've spent accelerating in the wrong direction, the less inertia you have to overcome.

Which leads to one of the key actionable insights of trigger-action plans (known in the literature as implementation intentions): there are times where the total effort required is zero, or close enough—e.g. simply catching the moment when you would have made the unfortunate switch, and then not doing so.  In many, many cases, an epsilon of prevention is worth an omega of cure.

One of the toy examples for CFAR's TAPs class is a person struggling with their sweet tooth:

How hard is it to stop eating Oreos once the package is open on the table in front of you? 

Very.  I can do it sometimes but absolutely can't count on it as a reliable strategy.

How hard is it to only take a few Oreos out of the package in the first place?  Put them on a plate, and leave the package in the cupboard? 

Still pretty hard.

How hard is it to not go to the cupboard at all, when you're in the grips of an Oreo craving?

Still pretty hard.

What about not buying the Oreos, when you're standing in the supermarket aisle?

Easier, but still not easy; the allure of the package is strong.

What about not going down that aisle in the first place? 

Better, but still feels iffy.  Still feels like it requires effort, like I'll be fighting myself the whole time I'm in the grocery store.

What about, at the moment of grabbing a shopping cart, pausing to ask whether this is an Oreos-type shopping trip, or not? 

There we go.

For the (real) person in the example, triggering off of a stimulus outside the grocery store provided sufficient distance from the reality-distortion field of the Oreos that it was possible to make a sober yes-or-no call and stick to it, without ongoing effort or indecision.  To sort of fortify against the urge, before it had even appeared.

This is an instance of effective concentration of force, and it illustrates the key point—that it (often) doesn't take much.  It just takes a little in the right place.

Many people try to solve their TAPs-shaped problems by shotgunning effort all over the place, and most would benefit from asking themselves:

If I had only thirty total seconds per day of conscious awareness and available willpower, and otherwise would be stuck on autopilot and following my own personal path of least resistance, would I be able to solve this problem?

The answer is "yes" far more often than people (who usually haven't actually tried checking) tend to think.

There is a background assumption baked into all of this that is rarely made explicit, let alone defended explicitly, and that is that the little stuff actually matters.  Like keeping an extremely heavy rock balanced on its tip—it can be done with very little strength, as long as you keep nudging it back toward its equilibrium, never allowing it to build up momentum.  The converse of "it doesn't take much [to make things go well]" is that it doesn't take much to make them go badly, either.  There are steep slopes and feedback loops in both directions.

Amazon Rankings: A Case Study

Readers will be able to remind me whether the cartoonist in this three-quarters-remembered anecdote is Ryan North (of Dinosaur Comics) or Zach Weinersmith (of SMBC); I was unable to dig up the details.

At some point early in the past decade, though, one of these two men was attempting to publish their very first Actual Book™, and had a clever scheme to leverage their existing (relatively small) audience.  They posted a message saying approximately the following:

"Hey, everybody, I've got a book coming out soon.  It'll be available on Amazon, and if you were planning to buy it, do me a favor and buy it between the hours of [time] and [time] on [specific day]."

They went on to explain that, because of the way Amazon's algorithms were structured, if enough people purchased the book during a small enough window, they could punch their way right onto the automated bestseller list, which would then catch the attention of the broader public (both because many, many more people would see the listing, and also because that's one heck of a story).

In this day and age, such social engineering schemes come across as somewhat passé, but at the time, this was a fairly unprecedented hack, taking advantage of a relatively underexploited fulcrum.  As I recall, it actually worked—with just a few hundred or low-thousands of purchases, the book did rocket to the front page, and did quite well afterward as a result.

It wasn't that our plucky cartoonist commanded a huge army of supporters.  It was the fact that he knew what to do with the small number he had.  By effectively concentrating the available force, he achieved an outsized effect, and he did so on purpose.

(For another example of this principle in practice, look here.)

The Culture War is a Guerrilla War

It's not the case that there is always a single decisive moment.  Sometimes, the relevant quality is the ability to repeatedly concentrate force.

Many LWers will already be familiar with the concept of evaporative cooling as applied to small subcultures (the essay is short, and worth a read if you haven't encountered it before; it's 98th-percentile in my opinion).

The metaphor of evaporative cooling also works to explain shifts in the Overton window of the larger context culture.

Consider, as a case study, the practice of parents in small towns and cities leaving their young children in the car for a few minutes while they run into the grocery store.  This is a behavior my own parents engaged in, as well as the parents of approximately all of my friends and classmates circa 1990.  The base rate of disaster on this activity was very low, as far as I can tell.

However, a few of those rare disasters memorably captured broad attention, and "leaving your kids in the car" acquired a slightly disreputable tinge.

As a result, those parents who were some combination of:

  • Most anxious about their children's safety
  • Least in-need of the benefits that leaving-your-kids-in-the-car provides, and
  • Most sensitive to social disapproval

... dropped off, and stopped doing it.  Some, no doubt, stopped entirely, while others just cut back on the margin.

This meant that the population of parents who continued to leave their kids in the car had a slightly higher proportion of parents who were:

  • Slightly less attentive to their children's safety
  • Slightly more invested in their ability to run into the store without their kids, and
  • Slightly less responsive to social pressure

Which, in combination, had the effect of making the overall class of [kids left in cars] slightly more dangerous in actual fact, while also raising the heat in the discourse about the behavior (because those still actively defending it were more threatened by the prospect of its being outlawed).

Thus, after a little time, the next layer of reasonable moderates found themselves slightly less comfortable being on Team Leave Your Kids In The Car, and stopped doing it, or at least stopped defending it in public.

Fewer respectable defenders; fewer responsible practitioners.  The tinge of disrepute strengthened, with reason.  The base rate of the behavior dropped further in response. The people still engaging in it were yet more desperate, die-hard, and bull-headed, which peeled another layer of moderates away.  Another incident or two occurred, confirming the suspicion that the behavior itself was fundamentally dangerous, and that the people engaging in it were generically irresponsible.  The situation polarized.  The middle ground dissolved.  Eventually, all that remained were two tiny, armed camps made up of the small number of people still invested in shouting about it, flying the flags PROTECT CHILDREN and SAVE OUR FREEDOMS.

And for everyone who was just trying to go about their daily lives, it was no longer worth it to leave their kids in the car, even when it was eminently safe and reasonable to do so. It wasn't worth the risk, it wasn't worth the hassle, it wasn't worth the reputational damage and the dirty looks and the off chance of someone calling Child Protective Services and really ruining your day.  Running errands simply got harder for the median and modal parents, but it was cheaper to pay the cost of keeping your kids with you or hiring a babysitter than to put forth the extraordinary amount of effort it would take to reclaim that tiny patch of territory in the name of sanity and reasonableness (not least because any such campaign would first have to do a ton of work just to differentiate itself from the crazies and their counterproductive enthusiasm).

The above story is a little too pat, and abstracts away some important detail, but it does gesture adequately in the direction of a very real phenomenon. When Something Goes Wrong, what usually happens is not that our society sits down and says "ah, here's a situation where we don't actually have clear, legible, defensible norms and policy. Let's assess the tradeoffs and come to a sensible consensus."

Instead, norms evolve in response to incentives that are often locally overwhelming at every point.  When a Concerned Citizen™ spots a pair of four-year-olds in the car in the parking lot of the local grocery store and starts shouting about it and calls the police, each other individual customer has more to lose by getting involved at all than by simply turning the other way.

Meanwhile, in any random group of a thousand Americans, there will be far more who are moved by the immediate and viscerally salient stimulus of kids-who-look-like-they-might-need-protecting than those moved by the more distant and abstract harm done to norms of non-panic and non-interference. And among those who might be predisposed to object, there will be many who will flinch away in recognition that the Concerned Citizen could likely effectively paint them as Someone Who Doesn't Care About Kids (or at least an apologist for such), which is a substantially more powerful social weapon than Someone Who Doesn't Care About The Long-Term Ramifications Of Small Failures To Put Things In Perspective And The Tendency Of Those Small Failures To Compound.

As for the police officer, or any other authority figure who arrives on the scene—many of them will know in their heart of hearts that the situation presents no real threat or concern, but it's one thing to know that in your heart of hearts, and it's another thing to dismiss the concern and disperse the crowd on your own authority.  Doing so just shifts the crosshairs onto your own head—you're now the one on the hook if the Concerned Citizen is committed to making a stink, or if one of your superiors disagrees with your assessment, or if—heaven forbid—something bad actually does happen, after you stuck your neck out and gave your own personal stamp of approval—hmmm—on second thought, let's just have you talk to CPS, they're the ones who are actually qualified to handle these sorts of things

In short, the process driving the evolution of the norm is a guerrilla war.  The forces of reason and restraint and quantitative analysis are everywhere outnumbered in practice; it matters little that [a supermajority of people would agree that most kids-left-in-the-car scenarios pose essentially zero risk] if you cannot count on them to actually show up in the parking lot when you're under fire.  Team Histrionics can more effectively concentrate force, and so Team Histrionics wins far more battles than it loses.

(Other examples of this dynamic include the evaporation of casual physical affection between heterosexual male friends, the disappearance of a wide range of adult mentorship relationships with children and teenagers, and the general rise of fear-of-litigiousness and the resulting increase in self-protective bureaucratic red tape.)

Morale and Momentum

Of course, in a literal military conflict, even victories of concentrated force come with a cost.  Troops die or are injured, and are not easily replaced.  Ammunition is spent. Equipment breaks down or is destroyed.  In the original diagram at the top of this essay, the balance of power was 43(B) : 15(R).  If red were to successfully take out the six surrounded black units, at the cost of (say) four of its own, the new balance would be 37(B) : 11(R).  Iterate a few more times, and even if red continues to be half again or more as effective as black, they'll still nevertheless eventually be unable to muster decisive force in any given engagement. 

Surviving red forces maneuvering for a second round of engagements.


The resulting balance of force after the second round: 33(B) : 9(R).
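The attrition arithmetic above can be sketched as a toy loop. The starting balance and per-round losses below are the illustrative numbers from the text, not a general model:

```python
# Toy attrition loop mirroring the essay's numbers: red wins each engagement,
# but both sides take losses, and red's edge shrinks round by round.
black, red = 43, 15                # starting balance of force from the diagram
engagements = [(6, 4), (4, 2)]     # (black losses, red losses) per round, per the text

for i, (b_loss, r_loss) in enumerate(engagements, start=1):
    black -= b_loss
    red -= r_loss
    print(f"After round {i}: {black}(B) : {red}(R)")
```

Running this reproduces the 37(B) : 11(R) and 33(B) : 9(R) balances described above; extend the list of engagements and red's ability to field decisive local force erodes even while winning every round.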

In the world of social warfare, though—especially in the digital age—this doesn't really happen.

In social warfare, quantities like total available personnel or total materiel are largely replaced by quantities like morale or zealotry.

And morale and zealotry anti-attrit.  They snowball, rather than depleting.  Success breeds enthusiasm.  Each local victory of red over black energizes and inspires, leading to an increased rather than decreased willingness-to-engage in the future.  Anonymity and easy access make "showing up" extremely low-cost and often quite high-reward.

Symmetrically, defeat sets up a discouragement spiral.  For every hold-the-line true believer shouting about the potential power of the silent majority, there are three or five or ten others looking at the situation and noticing that hey, even though most people agree with this point, it nevertheless seems to be a bad idea to raise this flag?

(And that fact itself is pretty demoralizing.)

It does not take many instances of either calling for help and not really getting it, or seeing someone else do so, to set up a self-fulfilling narrative that drastically reduces the rate of people on the black team even bothering to try.

Cancel Culture, Abridged

The Cancel Culture Essay™ will be released on some other day, but it's worth noting that cancellations as a class are just straightforwardly an instance of concentration of force (and their relative effectiveness a pretty strong endorsement for the principle).  A highly motivated minority coordinates to ensure that anyone who runs afoul of the shared goal will receive a mountain of headache far in excess of anything they're accustomed to or capable of dealing with, and most people buckle under the pressure.

Actually, scratch that; most people can see what's coming and simply choose to get out of the way—like the filibuster, the threat is sufficiently credible that it usually doesn't actually have to be carried out.  

This is true for both right-wing and left-wing cancellations; agnostic to the justification or righteousness of any given cancellation on either side of the culture war, it seems difficult to claim with a straight face that most of the orgs and groups withdrawing their support from various individuals are doing so because the leaders of those orgs and groups personally care.

Some absolutely do.  That much is clear.

But the reasonable prior is that most of them simply do not want the headache.  They do not want the negative press, they do not want the protestors, they do not want to have to explain to their shareholders why they're being dragged on Twitter when they could have just nipped this in the bud, why would you choose this hill to die on, from what I heard it sounds like the guy is kind of a dirtbag anyway—

The trouble with fighting for human freedom is that one spends most of one's time defending scoundrels. For it is against scoundrels that oppressive laws are first aimed, and oppression must be stopped at the beginning if it is to be stopped at all.

Many people quite reasonably do not care to publicly spend their own resources defending scoundrels, or people who unfortunately resemble scoundrels.  Far easier to have one awkward conversation, where you say "Hey, look, I'm really sorry, I get that this sucks for you, but like—you get it, right?  It's not that I'm unsympathetic, it's just that—"

(Gestures vaguely)

There have been relatively few cancellations where, if votes could have been cast anonymously across the entire population, a supermajority or even a straight majority would have been strongly in favor of so-and-so losing whatever position or status they held.

But once it becomes clear that there is a mobilized group willing not only to punish, but also to punish non-punishers—and once it's clear that that group can in fact swiftly and effectively concentrate force—it's no surprise that most of the non-punishers go dark.  It doesn't matter that they outnumber the zealots ten to one—it's a stag hunt, and absent reliable coordination, the only reasonable choice is rabbit.
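The stag-hunt logic can be made concrete with a toy expected-value calculation. The payoff numbers here are illustrative assumptions, not from the essay: stag pays off only if the other player also shows up, while rabbit is a safe solo payoff.

```python
# Toy stag-hunt sketch.  Payoff numbers are illustrative assumptions:
# hunting stag pays off only with coordination; rabbit pays regardless.
STAG_PAYOFF = 4      # each hunter's payoff if both hunt stag
RABBIT_PAYOFF = 3    # guaranteed payoff for hunting rabbit alone

def expected_value(action, p_other_cooperates):
    """Expected payoff given the probability the other player hunts stag."""
    if action == "stag":
        return p_other_cooperates * STAG_PAYOFF  # zero if left holding the bag
    return RABBIT_PAYOFF

for p in (0.5, 0.75, 0.9):
    choice = "stag" if expected_value("stag", p) > expected_value("rabbit", p) else "rabbit"
    print(f"p(coordination) = {p}: choose {choice}")
```

With these numbers, stag only becomes worth choosing once you are more than 75% confident the others will actually show up; absent reliable coordination, rabbit dominates, which is exactly the non-punishers' position.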

Principle, In Brief

The general lesson, I hope, is clear.  To restate it:

It's often not how much force you can bring to bear, so much as whether you can apply that force effectively.  

The effectiveness of force application often depends on its concentration—on whether you can amass locally superior force at the actual decisive moment.

(Both in cases where there is a single decisive moment, as in the Amazon example, and in cases where there are many such moments, as when cultural norms are in flux.)

Attention to this principle is generally lacking, and individuals and groups seeking to be more effective would do well to take the following advice:

0. Take seriously the idea that very small shifts in momentum really can actually snowball; do not assume by default that noise will swamp small influences and do not be dismissive of small interventions just because they are small. 

1. Look for moments when small applications of force will be unusually effective; prioritize interventions according to how amenable they are to even quite small expenditures of resources.

2. Do what you can to make whatever-constitutes-force in the domain of your choosing mobile and responsive, such that it can be concentrated very quickly.  The relevant moment is not always predictable in advance, and the "side" which can more reliably cohere decisive force faster will win more battles (many of them before they even start).


For the locally relevant essay that was originally part two, click here.


Do what you can to make whatever-constitutes-force in the domain of your choosing mobile and responsive, such that it can be concentrated very quickly.  The relevant moment is not always predictable in advance...

This... rather severely understates the problem. Consider the Oreos example: not only is the relevant moment not obvious in advance, it's also not obvious in the moment itself, or even obvious in hindsight. The person has presumably walked into the supermarket dozens or hundreds of times, and not once did it ever rise to their attention that this was a "relevant moment".

Mobility and responsiveness may suffice for some kinds of warfare, but for the more general problem of responding in "relevant moments", I don't think that's usually the hard step. The hard step is figuring out where those key points are at all, whether beforehand or at the time or even in hindsight (so the insight can be used for the future).

There are enough situations where the missing strategic piece is a failure to concentrate your forces to make it a good concept to drill into people's heads. I'm a new MS student in a biomedical engineering lab, and a big boost to my learning rate was reading and visualizing/acting out protocols before my instructor showed them to me for real. Instead of spreading out my attention to trying to learn ever more things, I prioritize being rapidly successful when I'm being taught something new. Everything works much better that way.

Also, sometimes, thinking in terms of force concentration can make it easier to perceive relevant moments, because you know what you're searching for. It's related to decomposing a big problem into its constituent parts, a skill that many people need to practice before it becomes natural. There's plenty of low hanging fruit here for people who don't already think habitually in these terms. At a certain point, you've likely exhausted the gains from problems whose decomposition and "relevant moments" are obvious.

I note that this and the above point are compatible/not necessarily in disagreement.

thinking in terms of force concentration can make it easier to perceive relevant moments, because you know what you're searching for

Yeah.  One of the most revelatory questions anyone ever asked me was "so, you say you're searching for romantic partners.  Where are they?  Like, literally, right now, right this moment, where in the physical world are the kinds of people you think you want to date and marry?  Are they at the library, the park, the bar, the skating rink—what?"

Which led me to realize that I'd been putting most of my efforts in places where those people weren't.

(Bullets exert far less force than broadswords.)

This seems inaccurate.


Swords are unlikely to be swung faster than baseball bats are (slower, if anything). Even assuming equal swing speed, it would seem that bullets (depending on weapon, caliber, etc.) exert comparable or greater force than broadswords.

Thanks!  Will edit or remove.

The trouble with fighting for human freedom is that one spends most of one's time defending scoundrels. For it is against scoundrels that oppressive laws are first aimed, and oppression must be stopped at the beginning if it is to be stopped at all.

There's certainly something to that. But in the other direction, there's the Claudette Colvin vs Rosa Parks anecdote, where (as I understand it) civil rights campaigners declined to signal-boost and take a stand on a case that they thought the general public would be unsympathetic to (an unmarried pregnant teen defendant), and instead waited for a more PR-friendly test case to come along. We can't know the counterfactual, but I see that as a plausibly reasonable and successful strategic decision.

The toxoplasma of rage dynamic is to go out of your way to seek the least PR-friendly test cases, because that's optimal for in-group signaling. I view that as a failure mode to be kept in mind (while acknowledging that sometimes defending scoundrels is exactly the right thing to do).

I believe your Amazon rankings example refers to Ryan North's (and his coauthors') Machine of Death.

Thank you!  That clicks in memory.

Great article. One comment - the Oreo story is an explicitly negative example. You are choosing ‘not’ to do something. I’m not sure the effective force principle works so well for positive actions. To take one example - I am struggling to exercise at the moment. To my mind the key to starting to exercise again is establishing a routine (e.g. go for a run every morning) but this seems to be a different framework to the idea of applying maximum force at the opportune moment. How would the effective force principle work in this context?

Where is the "smallest nudge" sufficient to give you the momentum you need?

For instance, while struggling to exercise, I found that just going and physically touching my treadmill each morning was enough.  It was a small enough commitment that it didn't take much energy and didn't meet much resistance, but it caused me to consciously reconnect with my desire to exercise, each morning, such that I found myself more naturally thinking "okay, when today might be a good time to do this?"

This is interesting. Given that I do not have a treadmill, and want to go for actual walks, what would be your suggestion? Perhaps touching my shoes?

Yeah, my first suggestion is some kind of (extremely) low-effort ritual, early in the day, that causes you to notice the potential for taking a walk, without any pressure to do anything about that fact.

One trick that worked for me in such a scenario, is to make refusal to exercise costly in terms of hassle and inconvenience.

For example, when I'm done exercising, I put my dumbbells on my gaming chair. Thus, it will be impossible for me to go sit and play the next day without actually lifting them again, and if I lift them again...it's not that hard to just keep on lifting until I'm exhausted.

Trick number two was to ask my SO to remind me to work out every other day before sleep. If I refuse to do so, I would have to face the minor embarrassment of having to explain why to a person who knows all my lies and self-lies. Moreover, my SO claimed a sexual preference for a physically fit partner over a pudgy one, so I'm indirectly reminded that refusing to lift is detrimental to my sexual pleasure in the long run, while getting and keeping a six-pack has enormous and enthusiastically noticed benefits.

So in effect, the concentration of force here is a pile of small inconveniences for non-compliance, and small rewards for compliance, that themselves can be established with minimum effort.

You Won’t Believe My Morning feels to me like a solid example of an attempt at concentration of force. It was posted in March of 2020, just as the coronavirus was developing into a big thing. Urban is basically making the point that we should treat it as a wake-up call and realize that there are even bigger things at stake, existential risks for instance.

I think binding TAPs to key moments in time makes them unnecessarily fragile. I instead think of the triggers as affordances in the space of representations. E.g. 'this isn't an oreo sort of trip' feels more available as a context setting representation at another relevant boundary: the inside-outside the store distinction.

Flexibility of representation is a double edged sword. It provides more affordance points and allows more transfer learning and modularity. It also gives you more outs and the chance to cold read-confirmation bias yourself into thinking wrong things instead of using the representations to cut down on the number of counterfactuals that you need to consider (a key feature).

Curated. I like this concept. It's not the total force theoretically but how much can be brought to bear. I especially appreciate the later application to moderation.

I wonder if there is a plausible way to memetically ruggedize society against such concentrations of cultural force? I have some ideas, but none of them feel strong enough:

  • make concentration of social force taboo (Ganging-up is bad!) But I don't see how to achieve that without using CoF ourselves.
  • make "punish non-punishers" particularly taboo (How?)
  • encourage social contrarianism
  • encourage sealioning as a response to social CoF
  • preemptively ruggedize laws and contracts against future soc-CoF attacks
  • (risky) train social media algorithms to notice and flag CoF-like internet behavior

None of the above feel like they would be enough, but anything more powerful like that would introduce more problems than solutions.

I think this post describes an important idea for political situations.

While online politics is a mind-killer, it (mostly) manages to avoid "controversial" topics and stays on a meta-level. The examples show that in group decisions the main factor is not the truth of statements but the early focus of attention. This dynamic can be used for good or bad, but it feels like it really happens a lot, and accurately describes an aspect of social reality.

If I had only thirty total seconds per day of conscious awareness and available willpower...

This is extremely relatable (I have ADHD and a long history of poor sleep).

Thanks for the very interesting dynamic you’ve presented. It seems to be a subset of the coordination problems seen in iterated prisoner’s dilemma games when there are more than 2 players; I’m not sure what exact name to call it.

I imagine this is the primary logistical reason why pyramid-like hierarchies formed in the first place in human societies, to solve such coordination problems.


Link to part two:

Part two has a similar introduction:

This is an essay about the current state of the LessWrong community, and the broader EA/rationalist/longtermist communities that it overlaps and bridges, inspired mostly by the dynamics around these three posts.  The concepts and claims laid out in Concentration of Force, which was originally written as part one of this essay, are important context for the thoughts below.

Great Article! ;)