We all strive to be like the others - more exactly, like the others, but better. (Chapter 2)


Note: the quotes have been translated back from the French translation, so they probably slightly differ from the original text.

The End of Average is a 250-ish-page book published in 2016 by the academic Todd Rose.

The main points

Rose describes the rule of average as the ubiquitous practice, in major decisions that impact our lives, of measuring people against a norm based on the average. From our grades at school to our career options, all the way through medical diagnoses, important institutions compare individuals with the average, with two goals:

  • assessing normality: what differs too much from the average is considered abnormal. For example, the developmental path for learning how to walk was determined in the 1950s by observing babies and averaging the results to get the "normal development path". Any child who diverges too much from that path will be closely examined for abnormalities.
  • assessing skill: a single variable is assumed to correlate with a large number of desired dimensions, and becomes very important in people's lives. For example, a student's understanding of a lesson is reduced to a single grade. Then, at the end of the semester or year, the grades in a class are summed up into a single average that is supposed to reflect the student's worth in that class. The higher a student is graded, the more intelligent and talented they are deemed, and the better their options.

In both cases, claims Rose, averaging the data over the samples loses important information. It turns out that there are no fewer than 25 viable development paths for learning to walk, in which the usual steps are taken in reverse order, simultaneously, or even skipped. As for school grading, Rose picks examples both from research and from his personal experience to show that students who fail in the standard school system can thrive when lessons are better adapted to their personal profile.

Born with modern statistics and acclaimed as a major methodological advance, the average-ist approach had limitations that were long overlooked, claims Rose. But now, with the help of computers' processing power, a more personalized treatment of information is possible and desirable: the individualist approach.

  • the average-ist approach: summing up the data, then analysing it; commits the ergodic error (attributing to a member a property of the group - more on that later)
  • the individualist approach: analysing the data, then summing it up; avoids the ergodic error

Impressions from reading

Rose has a slight tendency toward imprecision and corner-cutting. For example, his "average" can mean anything from mean, to median, to simply norm, depending on the situation. This is a book aimed at convincing both the decision-maker and the layman of the interest of adopting the individualist approach, and it sacrifices accuracy for ease of reading and persuasion.

The central leitmotiv is that when you take a group of humans, there is no average member. One of the examples he gives is how the US Air Force, after a slew of deadly accidents in the 1940s, decided to collect the measurements of 4,063 pilots along 10 dimensions. The conclusion was that not a single pilot was average on all 10 dimensions. The cockpit, which had been designed for the average pilot, actually fit no one. Upon that realization, the Air Force ordered that cockpit gear be made adjustable, which drastically reduced accidents.

The thesis of the book has a simplicity and an intuitiveness that felt familiar, as if I was merely noticing, or piecing together, something that was at the edge of common knowledge. Of course, we are all outliers, no one is average. Then why do we, people and institutions, mostly forget this fact when we define our policies? For example, why do schools deliver the same curriculum for everyone?

While Rose mainly advocates for large changes in education and career management, there is a piece of advice that is directly actionable by anyone. Human behaviour, explains Rose, is determined by a combination of personality and situation. A person can be extroverted with friends and introverted with colleagues. Rose recommends playing to your strengths rather than struggling to correct your weaknesses. He explains how he turned around a difficult start at university by avoiding the courses where he risked meeting old schoolmates, because he knew he wouldn't study seriously in their presence, and by choosing courses that were not recommended for him because they were difficult, but whose topics particularly motivated him.


This is it for the short summary, but if you want more details, Rose's story about the rule of average starts with three guys.

Quetelet, Galton, Taylor

Adolphe Quetelet (1796-1874) is a Belgian astronomer. When the Belgian Revolution breaks out in 1830, threatening to postpone the completion of his shiny new observatory in Brussels indefinitely, he suddenly takes an interest in social sciences.

To make up for the imprecision of human measurement, astronomers of the time average the measurements of multiple observers. The average is considered better than any individual measurement. It's the right measure. Quetelet transposes that method into social science: gathering measurements about Scottish soldiers, he asserts that the resulting average is the right soldier, a Platonic ideal. That method will be used on a variety of topics like crime and marriage rates, and becomes a cornerstone of statistics. He shows that suicides, which were considered a deeply personal and irrational act, remain at a constant rate over the years. He creates the BMI!

Sir Francis Galton (1822-1911) is a fervent admirer of Quetelet and his statistical methods. He too is confident that human qualities can be summed up in a single trait.

According to him, if your intelligence was "outstanding", then in all likelihood so was your physical health, as well as your courage and your honesty.

But contrary to Quetelet, Galton sorts the population into ranks: the average are mediocre, and at the extremes are the outstanding and the idiots. Quetelet's average man and Galton's ranks will become instrumental in defining what we consider normal and what we consider successful.

These two ideas serve as guiding principles for our present education system, as well as the overwhelming majority of recruitment methods and employee evaluation systems all over the world.

Frederick Winslow Taylor (1856-1915) is, according to Rose, inspired by Quetelet's methods when he sets out to revolutionize the way factories work. Before him, workers were basically free and even encouraged to reorganize the tools and machines however they saw fit. Taylor separates decision from execution, creating the manager class to do the former. The workers become adjuncts to the machines, interchangeable elements on the production line. Taylor professes that he prefers an average worker who follows orders blindly to a skilled worker who takes initiative.

This style of organization has spread way beyond the factory: Todd Rose gives present-day examples in retail (Walmart) and customer service.

In the span of fifty years, from 1890 to 1940, almost all our institutions came to evaluate everybody by their relation to the average.

At the time, adds Rose, the young universal education system was approaching its modern form. The tenets of Taylorism were incorporated so that education would prepare the average student to fulfill his role as an average worker: students, batched by age rather than by interest or competence, followed nationally standardized lessons for nationally standardized durations (chosen to fit the average student) that ended when the school bell rang, a prefiguration of the factory bell. Students were given grades that averaged their results, and those grades were compared in order to determine who was average and who was outstanding. Teaching speed was optimized for the average learner, so the fast had to wait while the slow were left behind. Not much has changed since then.

The ergodic lure

But recently, researchers have started to question this model. Peter Molenaar, a researcher who interacted with and inspired Todd Rose, published a manifesto against the ergodic lure in 2004. [1] He explains that, according to ergodic theory, the average can be used to make useful predictions about the individual elements if and only if the system is ergodic, which means:

  • all elements are identical
  • each element remains the same over time

The ergodic lure, or ergodic error, consists in attributing to a member the properties of a non-ergodic group. If you measure a group for typing speed and typing errors, you will notice that, on average, the faster people type, the fewer errors they make. If you want to reduce mistakes, you might be tempted to incentivize people to type faster. But within each individual, the opposite correlation holds: a given person typing faster than usual makes more errors. That's the ergodic error.
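The typing example can be sketched as a toy simulation (all numbers invented for illustration): each typist's skill makes them both faster and more accurate than the others, but typing faster than their own baseline costs accuracy.

```python
# Toy data: 10 typists of increasing skill, each observed at 5 paces
# around their personal baseline ("push"). Skilled typists are faster
# AND more accurate; pushing one's own pace adds errors.
samples = []
for skill in range(1, 11):
    for push in (-2, -1, 0, 1, 2):
        speed = 40 + 5 * skill + push       # words per minute
        errors = 30 - 2 * skill + 3 * push  # errors per 1000 words
        samples.append((skill, speed, errors))

def corr(xs, ys):
    """Pearson correlation, stdlib only."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Across the whole group, faster typing goes with fewer errors...
group = corr([s for _, s, _ in samples], [e for _, _, e in samples])
# ...but within one person, faster typing goes with more errors.
one = [(s, e) for k, s, e in samples if k == 5]
individual = corr([s for s, _ in one], [e for _, e in one])

print(f"group correlation: {group:.2f}")       # negative
print(f"individual correlation: {individual:.2f}")  # positive
```

The group-level correlation is negative while every individual's is positive: applying the group statistic to a member is exactly the ergodic error.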

Todd Rose does not specify how "identical" the elements have to be or how "constant over time"; he simply takes it as obvious that cells, genomes and human traits are not ergodic, which I'm inclined to accept, but that leaves open the question of how far we can extrapolate. The Wikipedia page https://en.wikipedia.org/wiki/Ergodicity has a more rigorous introduction to the concept.

To avoid the ergodic error of the average-ist approach, Rose presents the individualist approach, and names multiple researchers who support it despite the apathy of their colleagues.

The 3 Principles of the Individualist Approach

  • principle of discontinuity:

When a variable can be decomposed into multiple dimensions, and those dimensions are loosely correlated, then the average is misleading.

This feels intuitively right. The meaning of the average of multiple dimensions is based on what is common in those dimensions. If two dimensions have so little in common that they aren't correlated, then the average necessarily leaves out a lot of their meaning.

According to Rose, the Dow Jones is a good average because, even though the stock market is an aggregate of thousands of individual stock prices, they are correlated enough.

On the other hand, human measurements, IQ results and personality test results like the MBTI are not! I was surprised by this because I thought IQ was a measure of general intelligence. But Rose is adamant that intelligence is discontinuous, and shows the profiles of two women who score identically on the WAIS test but have completely different results on the sub-tests. I'm not sure what to think about that.
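The discontinuity claim is easy to picture with invented numbers (these are not the book's actual profiles, and the sub-test names are only indicative):

```python
# Invented data, not the book's: two test profiles with the same
# overall average but very different sub-test scores.
subtests  = ["vocabulary", "arithmetic", "matrices", "digit span", "comprehension"]
profile_a = [14, 6, 13, 5, 12]   # strong verbal, weak on numbers
profile_b = [6, 14, 5, 13, 12]   # roughly the mirror image

avg_a = sum(profile_a) / len(profile_a)
avg_b = sum(profile_b) / len(profile_b)
print(avg_a, avg_b)   # 10.0 10.0: identical overall scores...

largest_gap = max(abs(a - b) for a, b in zip(profile_a, profile_b))
print(largest_gap)    # ...hiding gaps of 8 points on single sub-tests
```

If the sub-tests are only loosely correlated, the single summary score throws away exactly the per-dimension structure that distinguishes the two profiles.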

Character traits like honesty or aggressiveness also turn out to be discontinuous. According to a study cited in the book, in which kids were tested for honesty in multiple environments, a kid who cheated on tests by copying from their neighbour could refrain from cheating when asked to self-grade, and vice versa. [2]

  • principle of context:

Essentialists and situationists have fought for dominance in the social sciences for decades. The essentialist attempt to explain behaviour by a person's character fails. The situationist attempt to explain behaviour by the situation also fails. What succeeds is the interaction between an individual's character and a situation. Academic Yuichi Shoda calls that an "if... then" signature.

This reminded me of the fundamental attribution error, which says that we all lean the essentialist way when trying to explain others' behaviour, and the situationist way when explaining our own. Which makes sense, because both are simplifications (edge cases) of the "if... then" model. We tend to see plenty of other people, but each in the same scenario (coworkers at work, online friends online...), so, failing to observe a given person across scenarios, we infer that their behaviour is a consequence of their character. We personally experience multiple scenarios, but failing to observe what someone else would have done in our shoes, we infer that the situation dictated our behaviour.
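A minimal sketch of what an "if... then" signature looks like, with invented scores:

```python
# Invented scores: two people with the same average "extraversion"
# but opposite situational patterns - their if... then signatures.
signatures = {
    "Alice": {"with friends": 9, "with colleagues": 3},
    "Bob":   {"with friends": 3, "with colleagues": 9},
}

for person, sig in signatures.items():
    avg = sum(sig.values()) / len(sig)
    print(person, avg, sig)
# Both people average 6.0, so an essentialist one-number summary
# ("moderately extroverted") erases exactly the information that
# predicts their behaviour in a given situation.
```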

  • principle of path:

There is no universal sequence in human development - no set of steps that everyone has to take to grow, learn, or meet one's objectives.

I didn't really question that claim until someone with a medical background pointed out to me that some sequences in human development are mandatory. On second thought, I can't imagine that, say, embryonic development has much diversity.

In his defense, Rose does not pretend that all paths are equally viable. When he gives the example of a study on how children learn to read, he specifies that a path taken by 10% of the children leads to serious learning problems. But the individualist approach helps detect those children so that appropriate help can be provided.

Individualist education

In the last part, Rose proposes 3 (overlapping) key changes to improve education:

  • deliver certifications, not degrees: divide the current long, fixed courses, into multiple shorter courses that the student can choose from. There is no reason that all paths should have the same length as is currently the case. And the student would only pay for what they take.
  • replace grades with skills: Instead of having a grade that bundles a bunch of skills, have each skill validated atomically. Separate teaching from certification, so that a student can validate their knowledge however they acquired it (be it in a public or private university, with a tutor, online, or by themselves). This also makes it easier for recruiters to check whether an applicant's skill set matches the job description.
  • allow students to choose their path: This should also make changing course mid-education easier. If you start with the goal of becoming a neuroscience researcher and discover mid-way that you like interacting with people, you can reuse the certifications you already acquired and aim for clinical psychology instead. And students wouldn't have to take courses they don't like (in Rose's view, a student is responsible for their path, so if they choose a path, presumably they like it).



The ideas of the book seem simple and intuitive enough that I doubt they will surprise the average LW reader. [3] The question of how widely they can be applied remains open, though. Rose seems to be interested only in education and career management. He does give examples of averaging leading to wrong conclusions in cancer care and in the care of patients recovering from clinical depression. But he doesn't have a plan to improve those institutions.

Society at large is still running on the assumption that students, employees and citizens can be represented by a few key numbers obtained by averaging their deeper characteristics. It seems to me that in recent years, some companies have started picking these low-hanging fruits. Big data companies have been analysing individual behaviour for years already in order to individualize advertisement. Online education and certification is rising and will probably soon become a serious challenger to traditional, more costly and rigid education. The End of Average has helped me make the connection between those innovations and a deeper theoretical framework. I'll keep it in my toolbox and see how helpful it turns out to be.




[1] A Manifesto on Psychology as Idiographic Science: Bringing the Person Back Into Scientific Psychology, This Time Forever by Peter Molenaar

[2] Hartshorne, H. & May, M. A., Studies in the Nature of Character, vol. 1: Studies in Deceit

[3] Though, according to the principle of discontinuity, each reader should find at least one idea that is new to them

Comments

> On the other hand, human measurements, IQ results and personality test results like the MBTI are not! I was surprised by this because I thought IQ was a measure of general intelligence. But Rose is adamant that intelligence is discontinuous, and shows two profiles of women who score identical at the WAIS test but have completely different results at the sub-tests.

So am I understanding this correctly, the book says that the general intelligence factor doesn't exist? That sounds like a pretty big claim, given that the concept has received quite a bit of scrutiny and still seems to have emerged with psychologists having a consensus that yes, it really is a thing. Does it cite any supporting literature with similarly broad conclusions (not just "we got weird results from this one test" like the example you mentioned but "based on this it looks like the whole g-factor thing might be toast") or is this just the author claiming it?


It sounded more to me like it was saying IQ is an abstraction that hides a lot of possibly-important complexity. Which is not the same thing as saying that IQ doesn't exist or is useless.

Averages are more common than individualized approaches because they're easy, and often good enough. Individualizing education according to Rose's proposals is an attractive vision, and seems attainable, but it also represents a costly investment. I expect that in many cases, there is a benefit to improving the granularity of our data and using that to make better decisions. The question, in each case, is whether or not the benefit is worth the cost, and how to execute it successfully.

Fortunately, we don't have to go all the way to individualizing everything in order to reap benefits. Averaging is just one way of simplifying complex data, and much more sophisticated statistical techniques are readily available. We can identify clusters, we can do principal component analysis, and so on. If we wanted a more realistic average, we could think in terms of a "medoid": the actual data point that minimises the total distance to the other points in its cluster.
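The mean/medoid distinction can be shown on a toy 2-D cluster (points invented); the medoid here is the point minimising total distance to all the others, which keeps it a real observation even when an outlier drags the mean away:

```python
# Toy 2-D data with one outlier (points invented for illustration).
points = [(1.0, 1.0), (1.2, 0.9), (0.9, 1.1), (5.0, 5.0)]

mean = (sum(x for x, _ in points) / len(points),
        sum(y for _, y in points) / len(points))

def dist(p, q):
    return ((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2) ** 0.5

# Medoid: the actual data point minimising total distance to the others.
medoid = min(points, key=lambda p: sum(dist(p, q) for q in points))

print(mean)    # roughly (2.0, 2.0): dragged toward the outlier, not a real observation
print(medoid)  # (1.0, 1.0): an actual member of the data set
```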

My guess is that bringing more sophisticated statistical thinking to real-world problems is partly a coordination problem. Non-statisticians don't really understand how to do statistical thinking, and may simply not be able to afford a specialist; yet they are required to make decisions based on statistics regularly. Statisticians, in turn, may not have the power or expertise to translate their statistical insights into cost/benefit analysis and implement a plan of action. Unfortunately, the result is that important decisions are made by people with minimal statistical knowledge, who default to averages and make faulty assumptions based on correlations. It's "folk statistics," and it's everywhere.