The dominant belief on this site seems to be in the "psychological unity of mankind": all of humanity shares the same underlying psychological machinery, and that machinery has not had time to change significantly in the 50,000 or so years since we began moving out of our ancestral environment.

In The 10,000 Year Explosion, Gregory Cochran and Henry Harpending dispute part of this claim. While they freely admit that we have probably not had enough time to develop new complex adaptations, they emphasize the speed at which minor adaptations can spread throughout populations and have powerful effects. Their basic thesis is that the notion of a psychological unity is most likely false. Different human populations are likely, for biological reasons, to have slightly different minds, shaped by selection pressures in the specific regions where those populations happened to live. They build support for this claim by:

  • Discussing known cases where selection has led to rapid physiological and psychological changes in animals
  • Discussing known cases where selection has led to physiological changes in humans over the last few thousand years, along with some less certain hypotheses of the same kind
  • Postulating selection pressures that would have favored certain cognitive abilities in humans

In what follows, I will present their case by briefly summarizing the contents of the book. Do note that I've picked the points that I found the most interesting, leaving a lot out.

The first chapter begins by discussing a number of interesting examples:

  • Dogs were domesticated from wolves around 15,000 years ago: by now, there exists a huge variety of dog breeds. Dogs are good at reading human voices and gestures, while wolves can't understand us at all. Male wolves pair-bond with females and put a lot of effort into helping raise their pups, but male dogs generally do not. Most of the dog breeds we know today are no more than a couple of centuries old. There is considerable psychological variance between dog breeds: between 1982 and 2006, there were 1,110 dog attacks in the US attributable to pit bull terriers, but only one attributable to Border collies. Border collies, on average, learn a new command after 5 repetitions and respond correctly 95 percent of the time, while a basset hound needs 80-100 repetitions to reach a 25 percent accuracy rate.
  • A Russian scientist needed only forty years to successfully breed a domesticated fox. His foxes were friendly and enjoyed human contact, very unlike wild foxes. Their coat color also lightened, their skulls became rounder, and some of them were born with floppy ears.
  • While 50,000 years may not be enough for new complex adaptations to develop, it is enough time for existing ones to disappear. A useless but costly adaptation will vanish quickly: fish in lightless caves lose their sight within a few thousand years at most.
  • An often-repeated claim is that there is much more within-group human genetic variation than between-group variation (85 versus 15 percent, to be exact). While this is true, the conclusion frequently drawn from it, that phenotype differences between individuals are larger than the average differences between groups, does not follow. Most (70 percent) of dog genetic variation is also within-breed. One important point is that the direction of the genetic differences tends to be correlated: a particular Great Dane may have a low-growth version of a certain gene while a particular Chihuahua has a high-growth version, but on the whole the Great Dane will still have more high-growth versions. Also, not all mutations have the same impact: some have practically no effect, while others have a huge one. Since humans (or dogs) share such a recent common ancestry, observable differences between populations must have evolved rapidly, which is only possible if the mutations involved had a strong selective advantage.
  • There are gene variants causing observable differences in appearance between human populations, such as the ones producing light skin color or blue eyes. For such systematic differences to appear, the variants must have had substantial effects on fitness, anything upward of a 2 or 3 percent advantage. From the rate at which the new alleles have spread, this must be the case at least for the genes determining skin color, eye color, lactose tolerance, and dry earwax.
  • Molecular genetics has found hundreds of mutations that indicate recent selection, many of them very recent: a significant number of Europeans and Chinese bear mutations that originated about 5,500 years ago. The rate at which new mutations have been appearing and spreading over the past few thousand years is on the order of 100 times greater than the long-term rate over the past few million years.
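The within-group/between-group variance point above can be made concrete with a toy additive-trait calculation (a sketch with made-up numbers, not from the book): even when almost all variance at each locus is within groups, small frequency differences that point in the same direction across many loci can add up to a large gap in the trait itself.

```python
import math

L = 100                  # hypothetical additive loci affecting one trait
p_a, p_b = 0.55, 0.45    # allele frequencies in two hypothetical groups

# Within-group variance of allele dosage at one locus is 2p(1-p);
# the trait is the sum over all loci.
per_locus_within = (2 * p_a * (1 - p_a) + 2 * p_b * (1 - p_b)) / 2
sd_within = math.sqrt(L * per_locus_within)

# Mean dosage per locus is 2p, so the group trait means differ by
# 2(p_a - p_b) per locus, and the small gaps accumulate in one direction.
mean_gap = L * 2 * (p_a - p_b)

# Per locus, the between-group share of variance is tiny.
per_locus_between = ((2 * p_a - (p_a + p_b)) ** 2 +
                     (2 * p_b - (p_a + p_b)) ** 2) / 2
within_share = per_locus_within / (per_locus_within + per_locus_between)

print(f"within-group share of per-locus variance: {within_share:.1%}")
print(f"trait gap between groups: {mean_gap / sd_within:.1f} within-group SDs")
```

With these numbers about 98 percent of per-locus variance is within groups, yet the group trait means sit almost three within-group standard deviations apart, which is the Great Dane/Chihuahua point in miniature.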

The second chapter of the book is devoted to the "big bang" in cultural evolution that occurred about 30,000 to 40,000 years ago. During that time, people began coming up with technological and social innovations at an unprecedented rate. Cave paintings, sculpture and jewelry started showing up. Tools made during this period were manufactured from materials originating hundreds of miles away, where previously only local materials had been used, implying that some sort of trade or exchange had developed. Humans are claimed to have been maybe 100 times as inventive as in earlier times.

The authors argue that this was caused by a biological change: genetic changes allowed a cultural development in 40,000 BC that hadn't been possible in 100,000 BC. More specifically, they suggest that it could have been triggered by interbreeding between "modern" humans and Neanderthals. Even though Neanderthals are viewed as cognitively less developed than modern humans, archeological evidence suggests that at least up to 100,000 years ago, they weren't seriously behind the modern humans of the time. Neanderthals also had a different way of life, being high-risk, highly cooperative hunters, while the anatomically modern humans probably had a mixed diet and were more like modern hunter-gatherers. It is known that ongoing natural selection in two populations allows for the simultaneous exploration of divergent developmental paths. It is entirely possible that the anatomically modern humans interbred with Neanderthals to some degree, the Neanderthals serving as a source of additional genetic variance from which the modern humans could benefit.

How would this have happened? In effect, the modern humans would have had their own highly beneficial alleles, on top of which they'd have picked up the best alleles the Neanderthals had. Out of some 20,000 Neanderthal genes, it's highly likely that at least a few were worth having. There wasn't much interbreeding, so Neanderthal genes with a neutral or negative effect would have disappeared from the modern human population fairly quickly. On the other hand, a beneficial gene's chance of spreading through the population is roughly twice its fitness advantage. If beneficial genes are every now and then injected into the modern human population, chances are that some will eventually spread to fixation. And indeed, both skeletal and genetic evidence shows signs of Neanderthal genes. There are at least two genes, one regulating brain size that appeared about 37,000 years ago and one playing a role in speech that appeared about 42,000 years ago, that could plausibly have contributed to the cultural explosion and which may have come from the Neanderthals.
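The "twice its fitness advantage" rule (Haldane's 2s approximation) can be checked with a toy branching-process simulation. The model and parameter values here are my own illustration, not the book's: a single new copy of an allele with fitness 1+s leaves a Poisson(1+s) number of offspring copies each generation, and we ask how often the lineage escapes early random loss.

```python
import math
import random

def poisson(lam, rng):
    """Sample from Poisson(lam) via Knuth's method (fine for small lam)."""
    limit = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

def establishes(s, rng, cap=50, max_gens=1000):
    """Track copies of a new mutant allele; each copy leaves Poisson(1+s)
    offspring per generation (the standard branching-process model of a
    rare allele in a large population)."""
    n = 1
    for _ in range(max_gens):
        n = sum(poisson(1.0 + s, rng) for _ in range(n))
        if n == 0:
            return False        # lineage lost by drift
        if n >= cap:
            return True         # extinction from here is vanishingly unlikely
    return True

rng = random.Random(42)
s = 0.10
trials = 10000
est = sum(establishes(s, rng) for _ in range(trials)) / trials
# Haldane's approximation says about 2s = 0.20; the exact
# branching-process value for s = 0.1 is a bit lower, about 0.18.
print(f"estimated establishment probability: {est:.3f}")
```

The 10 percent advantage is chosen only to make the simulation fast; for the percent-level advantages the book discusses, the 2s approximation is even closer.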

The third chapter discusses the effect of agriculture, which first appeared 10,000 or so years ago. 60,000 years ago, there were something like a quarter of a million modern humans; 3,000 years ago, thanks to the higher food yields allowed by agriculture, there were 60 million. A larger population means more genetic variation: mutations that had previously occurred every 10,000 years or so were now showing up every 400 years. The changed living conditions also began to select for different genes. A "gene sweep" is a process in which a beneficial allele increases in frequency, "sweeping through" the population until everyone has it. Hundreds of these are still ongoing today. The sweeps' rate of origination peaked about 5,000 years ago for European and Chinese samples, and about 8,500 years ago for one African sample. While the full functions of these alleles are still not known, most involve changes in metabolism and digestion, defenses against infectious disease, reproduction, DNA repair, or the central nervous system.

The development of agriculture led, among other things, to a different mix of foods, frequently less healthy than the one enjoyed by hunter-gatherers. For instance, vitamin D was poorly available in the new diet. However, vitamin D is also created when ultraviolet radiation from the sun interacts with our skin, and after the development of agriculture, several new mutations showed up that gave people in areas more distant from the equator lighter skins. There is also evidence of genes that reduce the negative effects associated with, e.g., carbohydrates and alcohol. Today, people descending from populations that haven't farmed as long, like Australian Aborigines and many Amerindians, have a distinctive track record of health problems when exposed to Western diets. DNA retrieved from skeletons indicates that 7,000 to 8,000 years ago, no one in central and northern Europe had the gene for lactose tolerance. 3,000 years ago, about 25 percent of people in central Europe had it. Today, about 80 percent of the central and northern European population carries it.
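A lactose-tolerance trajectory like the one above is roughly what the simplest deterministic selection model produces. As a sketch (the selection coefficient, generation time and starting frequency are illustrative choices of mine, not the book's): with 25-year generations, a per-generation advantage of about 2.4 percent takes the allele from rare to roughly a quarter of the population in 4,500 years, and to over 80 percent in 7,500.

```python
def sweep(p0, s, gens):
    """Deterministic haploid selection: the favored allele's frequency
    updates as p' = p(1+s) / (1 + p*s), i.e. the odds p/(1-p) are
    multiplied by (1+s) every generation."""
    p = p0
    for _ in range(gens):
        p = p * (1 + s) / (1 + p * s)
    return p

# Illustrative parameters: allele at 0.5% frequency ~7,500 years ago
# (300 generations at 25 years each), 2.4% advantage per generation.
p_3000ya = sweep(p0=0.005, s=0.024, gens=180)  # after 4,500 years
p_today = sweep(p0=0.005, s=0.024, gens=300)   # after 7,500 years
print(f"~3,000 years ago: {p_3000ya:.0%}, today: {p_today:.0%}")
```

This crude model lands close to the reported figures (about 25 percent 3,000 years ago, about 80 percent today); the real dynamics are diploid and the advantage need not have been constant, so the fit should not be over-read.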

The fourth chapter continues the discussion of mutations that have spread during the last 10,000 or so years. People in certain areas have more mutations conferring resistance to malaria than people in others. The human skeleton has become more lightly built, more so in some populations than in others. Skull volume has decreased, apparently in all populations: in Europeans it is down 10 percent from the high point about 20,000 years ago. For some reason, Europeans also have a lot of variety in eye and hair color, whereas most of the rest of the world has dark eyes and dark hair, implying some Europe-specific selective pressure that happened to affect those as well.

As for cognitive changes: there are new versions of neurotransmitter receptors and transporters. Several of the alleles have effects on serotonin. There are new, mostly regional, versions of genes that affect brain development: axon growth, synapse formation, formation of the layers of the cerebral cortex, and overall brain growth. Evidence from genes affecting both brain development and muscular strength, as well as our knowledge that humans in 100,000 BC had stronger muscles than we do today, suggests that we may have traded off muscle strength for higher intelligence. There are also new versions of genes affecting the inner ear, implying that our hearing may still be adapting to the development of language - or that specific human populations might even be adapting to characteristics of their local languages or language families.

Ruling elites have been known to have far more offspring than members of the lower classes, implying that selective pressures may also have been at work there. 8 percent of Ireland's male population carries a Y chromosome descending from Niall of the Nine Hostages, a high king of Ireland around AD 400. 16 million men in central Asia are direct descendants of Genghis Khan. Most interestingly, people descended from farmers and the lower classes may be less aggressive and more submissive than others. People in agricultural societies, frequently encountering lots of people, are likely to suffer a lot more from being overly aggressive than people in hunter-gatherer societies. Rulers have also always been quick to eliminate those breaking laws or otherwise opposing the current rule, selecting for submissiveness.

The fifth chapter discusses the various ways (trade, warfare, etc.) by which genes have spread through the human population over time. The sixth chapter discusses historical encounters between different human groups. Amerindians were decimated by the diseases Europeans brought with them, but the Europeans were not likewise decimated by American diseases. Many Amerindians have very low diversity in the genes regulating their immune system, while even small populations of Old Worlders carry highly diverse versions of these genes. On the other hand, Europeans long had difficulty penetrating into Africa, where the local inhabitants had highly evolved genetic resistances to the local diseases. Also, Indo-European languages might have spread so widely in part because an ancestral protolanguage was spoken by lactose-tolerant herders. The ability to keep cattle for their milk and not just their meat allowed the herders to support more people per acre, displacing people without lactose tolerance.

The seventh chapter discusses the Ashkenazi Jews, whose average IQ is around 112-115 and who are vastly overrepresented among successful scientists, among other things. However, no statement of Jews being unusually intelligent is found anywhere in preserved classical literature; in contrast, everyone thought the classical Greeks were unusually clever. The rise in Ashkenazi intelligence seems to be a combination of interbreeding patterns and a long history of concentration in cognitively challenging occupations. The majority of Ashkenazi Jews were moneylenders by 1100, and the pattern continued for several centuries. Other Jewish populations, like the ones living in the Islamic countries, were engaged in a variety of occupations and do not seem to have above-average intelligence.

162 comments

Very nice summary--thanks.

@SilasBarta, re: our careers:

I would certainly never encourage a graduate student to follow up in this area because it would be a career kiss of death. But I am at retirement age, no one is going to fire me, and most important of all I do not have federal grant support. Cochran is not an academic: his real career is in laser physics. So we enjoy a kind of freedom that few academics do.

@JanetK re skin color:

According to standard ag-sci 101 theory the number of loci makes no difference at all to the speed of change of a multi-locus trait. Six is close enough to infinity that skin color should change no faster than, say, IQ. OTOH you may be right in the real world because of the complexities of epistasis of loci.

Welcome to Less Wrong! Note that there are threaded comments here - you can click 'reply' on the bottom of any comment.
Cochran was a laser physicist who came to dabble in the biology of infectious diseases with Paul Ewald. He is now an anthropologist at the University of Utah. Harpending is as well, and has been for some time.

Harpending is as well, and has been for some time.

...pretty sure you're talking to him.

Doh! I should really not rush so much.

(EDITED TO ADD: Do not reply to this comment. There is now a top level post for Q&A with the authors of the book which is a better place for you to post your questions than here. The text below is being left "as is" for historical purposes.)

Henry Harpending, one of the authors of the book being reviewed, has already posted a comment here. In order to maximize the value of his attention, I requested and received his permission by email to post this.

The goal is to have a relatively clean "Q&A" grow out of this comment, with interesting child questions posted by members of the community and grandchild answers posted by Henry Harpending or Gregory Cochran which a reader can easily peruse.

If you have any questions for either Harpending or Cochran, please reply to this comment with a question addressed to one or both of them. Material for questions might be derived from their blog for the book, which includes stories about hunting animals in Africa with an eye towards evolutionary implications (which rose to my attention based on Steve Sailer's prior attention).

Please do not kibitz in this Q&A... instead go to the kibitzing area to talk about the Q&...

People in agricultural societies, frequently encountering lots of people, are likely to suffer a lot more from being overly aggressive than people in hunter-gatherer societies. Rulers have also always been quick to eliminate those breaking laws or otherwise opposing the current rule, selecting for submissiveness.

On the other hand, submissiveness is surely selected against in rulers, who as noted in the posting leave more descendants than proles. So perhaps in a society in which the strong rule and the weak submit there is some evolutionarily stable distribution along a submissive/aggressive spectrum, rather than favouring one or the other?


Personally, I think that the most relevant variation in humans is the existence of people with Asperger's Syndrome. This is a genuine cognitive difference that gives people with AS different conceptions of axiology than neurotypicals. Ironically, though Eliezer speaks of the psychological unity of humankind leading to axiological convergence, it is precisely the fact that people with (perhaps mild) AS are more attracted to not compartmentalizing and to thinking in terms of a consequentialist morality that has created the singularitarian movement.


After thinking about this I'm not sure AS entails an attraction to consequentialist morality so much as an attraction to consistent, axiom-based and systematized theories, plus a willingness to ignore (or a lack of) situational and emotional reactions that contradict the systematized view. Consequentialism is just the obvious consistent and systematized view suggested by contemporary post-Enlightenment Western culture. I mean, unless the autism spectrum was empty prior to Bentham, it seems likely people with AS were engaged in convoluted theological arguments and natural-law ethics during the Middle Ages. It is plausible Kant himself had AS. The only difference is that he was ignoring the intuition that it is okay to lie to compliment grandma's poor cooking or to keep a murderer from killing your friend, whereas today people are ignoring the intuition that it isn't okay to push the guy onto the track to stop a trolley or carve up the homeless guy for his organs.

I'd predict you'd see over-representation of AS among the followers of other contemporary philosophies that are highly consistent and axiom-based but also at odds with majority intuitions: For example, libertarian rights-based morality and Objectivism.

Yes, I agree. But this means that you will, in fact, see an empirical correlation between AS and consequentialism, and this is interesting and important; for example, it is a case where human cogno-diversity significantly affects axiology.

Yes, and I often see stark examples of how this difference in psychology reveals itself. It typically involves an NT joking about the observed behavior of someone with AS, where the "funny" bit takes the form: "[AS person] performed [action X], when you're OBVIOUSLY supposed to do ~X, though I am completely incapable of saying how ~X inexorably follows as the right choice based on typical social experience."

Real example (some details may be off) that's representative of what I see a lot: "Yeah, there's this real weird kid in this class I teach who had read about the Protestant Reformation, but get this -- he actually pronounced it 'pro-TEST-ant'! It was SO funny [because obviously English has a really rigorous orthography that's designed to prevent this kind of thing]!"

I would like to see Eliezer Yudkowsky address the issues raised by NT/AS and by this book, because his position does have a lot of tension with it, even if there's no direct contradiction. (I'm guessing he can dismiss the NT/AS issues a being relatively small in the grand scheme of things.)

It was SO funny [because obviously English has a really rigorous orthography that's designed to prevent this kind of thing]!"

I'm pretty sure that's not how that sort of neurotypical is thinking. It's more like "of course everyone is always alert to get the social details right, and it's shocking incompetence to fall down on the job!".

If so, we're back to psychological unity of the human race-- geeks sneering at people who can't manage to understand completely obvious things about computers are showing the same lack of imagination.

My point was that such instances reveal psychological diversity, and the characterization of such a mistake as incompetence is the proof of diversity, so I don't see how that contradicts my point. With psychological unity, people might still see it as funny (maybe because pro-TEST-ant is a weird sound), but not on the basis of it revealing incompetence. If you're saying that geeks laugh at how non-geeks fail to make the correct inference about computers from the same experience, that looks like more evidence of psychological diversity.

My point was that it's common for people to think of their own skills as normal, and to think it's ridiculous when other people don't have those skills.

The skills may be different, but the assumption that everyone should have at least moderate skill at what comes easy to you is the same.

And the belief that "Y, rather than Z is the obvious inference given X " is different across people, and is evidence of psychological diversity, and is the case frequently, including here. The universal presence of a belief of the form "You should have moderate skill at X" does not contradict this.
If Oceanians consider Eur...Eastasians, their mortal enemies, unworthy of human dignity, and Eastasians regard Oceanians, their hated antagonists, as little more than maggots to be crushed, then that is not an example of psychological diversity; instead, it's two different instances of underlying psychological unity - in this case, of the universal "Us vs. Them" heuristic.
But this doesn't map to an "us vs. them" heuristic; it maps to an "X implies Y vs. X implies ~Y". The fact that the differing beliefs about what X implies leads to a universal dislike of the "other" does not deny the neurodiversity in the former heuristic.
Yes but the term "psychological unity" is about hardware. Neurodiversity in terms of magazine selection does not necessarily have a genetic link even though it will show that we are neurodiverse. Difference in magazine selection can lead to a difference in what one believes X implies.
People are not necessarily born with their current skill set, though, yes?
Upvoted for pointing this out.

Perhaps someone could outline the perceived tension in more detail? We already knew humans weren't identical. So just how much variation is how much of a problem for what?

Wait, what? Can you give some references on Asperger's => different axiology (new word for me)? And how does Asperger's => consequentialist morality, and how does not compartmentalizing + consequentialism => Singularitarianism? It sounds interesting, but there are 3 unsupported and dubious-sounding links in that chain.
See my other comment in this thread for a link to a paper. Also, anecdotal evidence: I have often seen that people who support more consequentialist inferences display lower social skills and abilities.

This is the proposition that if you are an altruistic consequentialist and you search hard for the most important charitable cause, you'll find that Singularitarianism is it. This is defended in detail all over the Singularity Institute site.

Most people are not consequentialists, so the first inference implies this one.
The "theory theory of mind" says that autistics lack the ability to simulate someone else's reasoning. If this is true, and Asperger's is like autism, people with it might be likely to judge people on the basis of consequences, since they have no model of other people's intentions.

Although, now that I think about it: if someone has no cognitive model, but just observes a large set of instances and finds a way to classify them as "good(action)" or "bad(action)"; and if situation + action usually determines outcome; is there any difference between being a consequentialist (making a lookup table of outcome -> action) and a deontologist (making a lookup table of situation -> action)?

I still don't understand the connection between Asperger's and compartmentalizing. How does that argument rely on you being a consequentialist? Other ethical systems have to do with, e.g., measuring intended consequences instead of actual consequences, not with ignoring consequences.
Consequentialism measures intended, or rather expected, consequences; that's why you do expected utility maximization. Consequentialism emphasizes forward-looking analysis rather than backward-looking blame allocation. De facto, other ethical systems tend not to pay attention to the size of consequences, and tend not to involve doing mathematics to work out the best action. They tend to emphasize virtue, following "what you know in your heart is right", etc.
The biggest dividing-line that I've observed between value systems is between people who believe that a decision was right if it produced good consequences; and people who believe that a decision was right if, given the information available when the decision was made, it was expected to have good consequences. If both are consequentialism, then what terminology do you use to distinguish them?
Voted up because I think AS is a great example of psychological diversity. I'm curious however as to the origin of your belief that AS people are more attracted to decompartmentalization than neurotypicals are.
See the following for weak evidence that people with AS tend to be more "utilitarian":

"Furthermore, when function in the RTPJ is disrupted using a technique called transcranial magnetic stimulation (TMS), moral judgments reflect a reduced influence of mental states and a greater influence of outcomes: unintentional harms are judged as more forbidden, and failed attempts to harm are judged as more permissible (Young, Camprodon, Hauser, Pascual-Leone, & Saxe, submitted). This pattern mirrors that observed in individuals with Asperger’s Syndrome and five-year-old children, as described above."

My evidence for AS types being less compartmentalizing is informal, from meeting and talking to people. It is well known that AS <==> more logical, and it seems that logical utilitarian people are more likely not to morally compartmentalize (e.g. think that destruction of the world is OK, but be horrified by the death of a particular person).

I've picked up some anecdotal evidence for that over the past few months. Just a week ago I was talking with one guy with AS about some ethics problems; he brought up an example where you're with 20 other people, including a baby who won't stop crying, hiding from an approaching army. Under some simplified assumptions, if the baby keeps crying, the army will find and kill all of you, and if the baby stops, they probably won't. If killing the baby is the only way to stop it, is it moral to do so? The consequentialist answer seemed obvious to both of us, even when he specified that the army would spare the baby's life but kill the rest of you. He told me that this is a characteristically autistic way of thinking about moral problems, and he's had more contact with autistic/AS people than I have (aside from being one himself), so I'm inclined to believe him. (I'm not AS myself, but I'm apparently close enough that several people at several points in my life have suspected it, but not enough to be diagnosed with it.)

Edit: He wasn't sure about torture vs. dust specks, but that seemed to be more because he didn't see how a problem involving such impossibly huge numbers of people could have any useful implications about more realistic ethical scenarios. I disagreed — the math is the same, and I think pathological cases are useful for testing the integrity and consistency of ethical theories and for testing how seriously a person takes the theory/methodology they profess to follow — but he didn't find that particular point to be relevant.

Yes, neurotypicals are flummoxed by these types of problems. Others include the trolley problems, and the organ donation problem.

Are you sure "flummoxed" is the right word? I don't think "neurotypicals" are confused by the mathematics involved. They just dispute that the utilitarian math represents an accurate theory of ethics. Would you use the word "flummoxed" for a physicist who understands the mathematics of a theory but disputes that it says anything relevant about the real world, even if he has no alternative theory to offer?

For full disclosure, I am not convinced by utilitarian arguments at all, both in these problems you mention and in most other widely disputed ones. I understand them with perfect clarity; I just dispute that they have any relevance beyond the entertainment value of the logical exercise, and possibly propaganda value for some parties in some situations. I certainly wouldn't describe my situation as "flummoxed."

Usually they end up being morally dumbfounded, or end up refusing to bite the bullets their positions require. E.g. wanting to support some deontological principle but not biting the bullets that go with it, or adopting principles that contradict each other. Many neurotypicals I have spoken to will take really extreme positions on the fat man trolley problem, saying that they wouldn't push the fat man off the bridge even if a million people were on the trolley.

Yes, but I'll be darned if you're neurotypical ;-)

On the other hand, don't forget that talk is cheap, and actions speak louder than words. I doubt that many utilitarians would be willing to follow their conclusions in practice in situations such as the fat man/trolley problem. To stress that point even further, imagine if you had to cut the fat man's throat instead of just pushing him (and feel free to increase the cost of the alternative if you think this changes the equation significantly relative to pushing). I'd bet dollars to donuts that a large majority of the contemporary genteel utilitarians couldn't bring themselves to do it, no matter how clear the calculus that -- according to them -- mandates this course of action.

This suggests to me that this "dumbfoundedness" might be in fact a consequence of more clear and far-reaching insight, not confusion. Biting moral bullets is easy in armchair discussions; what you'd actually be able to bring yourself to do is another question altogether. Therefore, when I see people who coolly affirm the logical conclusions of their favored formal ethical theories even when they run afoul of common folks' intuition, I have to ask if they are really guided by logic to an exceptiona...


The point here is that logical consistency in ethical armchair discussions could in fact be a consequence of myopia, not logical clear-sightedness

You're allowed to say "X is the action I would want to take, but I wouldn't be able to"

I don't think this statement is logically consistent. Unless you're restrained by some outside force, if you don't do something, that means you didn't want to do it. You might hypothesize that you would have wanted it within some counterfactual scenario, but given the actual circumstances, you didn't want it. The only way out of this is if we dispense with the concept of humans as individual agents altogether, and analyze various modules, circuits, and states in each single human brain as distinct entities that might be struggling against each other. This might make sense, but it breaks down the models of pretty much all standard ethical theories, utilitarian and otherwise, which invariably treat humans as unified individuals. But regardless of that, do you accept the possibility that at least in some cases, bullet-biting on moral questions might be the consequence of a failure of imagination, not exceptional logical insight?
It's not always that simple. It would be inconsistent if our actions could be reduced to a simple utility function and we consistently used the word (and emotion) "want" to refer to actions that maximize that utility function, but neither is the case, because we're not intelligently designed optimization processes. Our brains don't act under a single unified goal system, and very often the part of us that says it wants to do X, or the part that believes it wants to do X, or the part that would be happy if it could do X, or the part that feels bad if it doesn't do X — any of the parts where it feels like "wanting" rather than "doing" — isn't the part that makes the decision. (In fact, in a direct causal sense, I'd say it's not the part that makes the decision, period. Sometimes it just seems like they're the same when they're properly synchronized.) Neither is the part that makes moral judgments on one's own actions and on others' actions, and so on.

Have you read any of the discussions of akrasia here? That's essentially shorthand for what we're talking about (wanting to do something but not doing it), and if you're willing to discuss it in human terms — in terms of what humans actually mean when they say "want," rather than what a single-minded decision-theoretic reasoner would mean by it* — then such discussions can be quite fruitful, and not logically inconsistent or meaningless at all.

* If such an agent would say it at all, that is. It could be taken as a mistranslation, in the same sense that Eliezer says translating any of the Babyeaters' words about their own decisions as "right" would be a mistranslation. If a perfect decision-theoretic agent's utility function specifies some action, then by definition it will automatically pursue it; there's no room for any "wanting" there, just deciding and doing. Indeed, the very fact that we have different words for "want" and "pursue" reflects the reality that we can and very frequently do one without the other.
ata: Yes, I've read lots of stuff written about akrasia on this blog. This would be a topic for a whole separate discussion, but to put it as briefly as possible: in general, I'm highly suspicious of such concepts. I view them through what Bryan Caplan calls the "Gun-to-the-Head Test" (I had actually come up with the exact same argument independently before I read about it from Caplan). Note how different this is from people who have no control over their behavior even under this test. A Parkinson's patient can't stop his hands from shaking, and a person with normal nerves can't suppress the knee jerk when struck on the patellar tendon, no matter what you threaten them with.

Ultimately, I believe that people engage in akrasia and "addictive" behaviors because they sincerely want to. Procrastination and substance abuse are fun and pleasant, and may well be worth a large cost to those sufficiently fond of them. And if these people can subsequently claim that their socially disapproved behaviors were somehow against their will, and thereby lower their cost by assuaging the reputational consequences -- well, no wonder such excuses are popular. Saying that you "want" to avoid procrastination is just ritual signaling behavior, just like smokers saying that they "want" to quit.

I should add that this is a complex topic, to which this brief post doesn't do justice, but it does summarize my view on the matter.
Yes, your comment establishes that there exists a reason to make the following classifications:

a) going for a jog when you say you want to go for a jog, like the health benefits, and feel good while jogging -> preference for jogging

b) smoking despite saying you don't want to smoke, being aware of the bad consequences for your body and lifespan, and wishing smoking did not give you short-term pleasure (and vice versa) -> preference for smoking

However, to get to the root of the akrasia question, that's not enough. You would need to show that there is no significant, useful difference between those "preferences" that would justify having different labels for them. Do you really believe that the same kind of "preferring" is going on in a) as in b)?
SilasBarta: I don't have a complete theory of akrasia and related behaviors; in fact, I don't think we know enough about these issues yet to say the final word. However, from what I've observed, I do think that the preferences in (a) and (b) are essentially the same, though of course the details of the cost/benefit calculus are different. The relevant difference between them lies in their social signaling consequences, not in the nature of the preferences as such. In contemporary culture, exercise carries positive signals, so if you exercise, it is, if anything, something to brag about. Smoking carries negative signals, so it's in your interest to present it as something you can't control.

My further (and more controversial) observation is that contemporary public and expert opinion is biased in favor of claims of helplessness and victimhood. Thus, for example, as smoking is considered more and more immoral, smokers will be judged less negatively if they claim to be helpless addicts swindled by the predatory tobacco industry than if they just say "I like it, and it's none of your business." Similarly, people who prefer the pleasures of drinking and drugs will be viewed less judgmentally if they plead "addiction" than if they just admit that they accept the costs of these pleasures, which can sometimes be very large. (Note the change in their behavior when the cost is greatly increased in the gun-to-the-head test!)

To make such a plea, however, you need to suffer from an officially approved "addiction." You can't successfully plead helplessness and victimhood if you suffer from the urge to write blog comments instead of doing work, even though many people will testify that this urge can be far greater than the lure of officially "addictive" behaviors. (Though this might change in the future as the concept of "internet addiction" gains official circulation.) In any case, the important point here is that when you're tempted to claim that someone honestly…
Please don't misunderstand. I'm very sympathetic toward that view, and I agree it can explain a great number of cases. I agree with many of the specific points you made there as well, especially about Tiger Woods and "sex addiction". I've also written diatribes (that I won't dredge up) about how people go to great lengths to rationalize consumption of alcoholic drinks to make them socially acceptable, when really they just want to get high. Heck, long ago I even tried getting myself addicted to legal substances on the socially endorsed "addictive" list, and failed.

What I dispute is that it's a full explanation applicable to all asserted cases of akrasia. For example, it runs into these problems. If you did a gun-to-the-head test on the chain smoker and the jogger over an extended period:

* the smoker would go through a kind of negative psychological stress not present in the jogger.
* after a long enough time, the smoker would lose the urge to smoke, and thank the gunman[1] for having used such coercion, while the jogger would stay resentful.
* ETA: the jogger would probably return to jogging thereafter, while the smoker would not return to smoking, even in private.

Also, it would require that we make no distinction between "this person is doing X because it is painful not to" vs. "this person is doing X because it is pleasurable". Our own psychological experience tells us that there is a difference between pleasure and the absence of pain, even if that difference is not relevant in every context. (Remember, rejection of the akrasia concept requires that you believe it is never a relevant distinction, not just that it's an unnecessary distinction in some contexts.)

Furthermore, it's highly probable that people dislike the impacts of e.g. smoking/drugs on them, above and beyond the social disapproval these bring, especially when e.g. it takes a smoker 20 cigarettes to get a minor buzz. The above considerations keep me from cynically dismissing…
First, I would note that as far as I can see, the above model is applicable to a much smaller range of behaviors than commonly believed. More specifically, I think the level of "withdrawal pain" is commonly greatly exaggerated for all but the most extreme physical addictions, like heroin or very severe alcoholism. And even in these extreme cases, once the relatively brief period of physical withdrawal is bridged, the memories of past pleasures remain a constant temptation; relapses are a notorious problem in all sorts of substance abuse cases. This, I think, shows that even for true physical dependences, a large part of the motivation is seeking pleasure, not avoiding pain.

Thus, for most forms of alleged akrasia, I do think the cynical dismissal is correct even if I grant your above objections, since the pain of quitting is not high enough to be truly relevant. Smoking is a prime example, which I conclude both from my personal experience with quitting and from the apparent ease with which smokers conform to the now ubiquitous smoking bans under which many of them spend most of their waking hours. (Apparently, far lesser threats than the gun-to-the-head test are more than enough!) It definitely seems to me that non-relapsing ex-smokers are those who came to the realization that the costs exceed the benefits, not those who successfully bridged a temporary period of withdrawal pain.

But otherwise, yes, I grant that your above description could be accurate for some behaviors. However, someone who believes he'd benefit from quitting, but lacks the willpower to endure the withdrawal pain, can make arrangements to be restrained during that critical period. This indeed happens when people check into rehab. Yet in reality, bridging the painful withdrawal period is by no means a guarantee against relapse.

Now, you say: I accept the difference in the case of a heroin addict who will pass through a few days of torment if he doesn't shoot up, or a delirium tremens-level alcoholic. But…
Because this exchange is getting complex, and because of the lopsided votes, and because of the lack of involvement of others, I'm going to wait for others to comment on our exchange or for our comments to receive more moderations before replying, just as a "sanity check" that we're making progress in our disagreement. (Pardon the long sentence.)
I think you're righter than Vladimir_M, but some Rationalists' Taboo would be useful. "Preference." Can both of you formulate your views without using that word? And "akrasia". To me, the obvious distinction between the joyful jogger and the frustrated smoker is that the smoker has a conflict and the jogger does not. The smoker has both a goal of smoking and a goal of not smoking, and the processes for achieving these goals are fighting each other. It is impossible for both goals to be fulfilled, and as long as both processes are active, dissatisfaction will result. The jogger has a coherent set of motivations for a single goal. The issue of signalling is a red herring. The smoker can be just as frustrated if no-one but himself knows of his struggle, and the jogger just as joyful if no-one ever sees him going for his 5am run. St. Augustine had his struggles before ever writing about them. Imputing signalling behaviour always sounds to me like just whining.
Thanks for your input, and I agree with your distinction along the lines of conflicting preferences. I believe I already have implicitly formulated my views with a taboo on akrasia and preference. In my last substantive reply, I basically said that there are two kinds of phenomena going on, as seen by several significant differences, justifying a different term for each one (because they occupy such different clumps of conceptspace). And whatever those terms are, some contexts certainly do justify distinguishing between the two. The specific differences I stated are that one would involve "retroactive consent" while the other wouldn't; a long-term period of coerced abstinence would induce psychological stress in one but not the other, and it would permanently alter the target's behavior in one case but not the other. Preference, akrasia, whatever. Two different things are going on, warranting different actions in response. Yes, people lie about addiction for sympathy. A lot. But that doesn't make it all a scam.
Unfortunately, I'll be too busy to write anything more than this comment until (at least) tomorrow, and the discussion is indeed getting complex and buried ever deeper in the comment thread, so I'm not sure we'll be able to continue. But in any case, I think it's been a worthwhile exchange, and it has made me rethink my positions on these issues.

As a final observation, I'll briefly address your last point. I agree with it, and in retrospect I see that due to my own hasty writing and lack of clarity, my comments could have been read as denying this distinction altogether, which was not my intention. Therefore, I think our true disagreement has been about: (1) how widely your "smoker vs. jogger" model is applicable in practice (and in particular, whether it applies to typical smokers who plead addiction), and (2) how widely the signaling explanation applies instead (i.e. the case where one falsely pleads an inability to suffer the withdrawal pains in order to gain the more respectable victim/sufferer status, instead of being condemned for practicing vice willingly).

Maybe my impressions in this regard are biased by my personal experiences. For all I know, I might be an atypical individual in this respect; but from many anecdotal observations, I have the impression that people around me have often played the above-described signaling game, to the point where I see it as a general rule. So in the end, we can probably settle for an empirical disagreement whose resolution would require detailed discussion of a large, representative set of concrete situations, to see how far these alternative explanations apply in practice.
It looks like there is indeed quite a bit of overlap between our views. I haven't had much experience with people using the "addiction" excuse, but I recommend you approach the topic using a broader definition, as I do in this blog post (which I think you'll enjoy). Instead of looking at it from the perspective of, "Is this person just making some excuse so they can get away with irresponsibly continuing the addictive behavior?", look at it from the perspective of, "Does this person get strong urges to do something they know is bad for them, enjoy doing it, but also wish they didn't get those urges?" And then ask if that's a very special kind of "preference" (though I think you already agree now). Excerpt from the blog (emphasis added): And again, I believe the addiction excuse is heavily overused; I just don't think that resolves the akrasia issue.
Thanks for the link! I just posted a reply at your blog.
Thanks for the comment. I posted a reply with a link to another LW thread you might find interesting.
Related: The Medicalization of Everyday Life.
Stephen King (who is, incidentally, a former alcoholic) wrote a short story, "Quitters, Inc." with this as the premise. If they catch you smoking, they'll do horrible things to you and your family members.
It's not that much of a difference. Such a model could still accept that humans are unified individuals, but also attached to parts (defined as not the relevant part of the human) that interfere with the human's actions. Roko's alternative is just to say, "X is the action that I would attempt; hardware inextricably connected to me would also stop me from doing X." Of course, that does run into problems like, "So you agree that you're running on corrupted hardware that stops you from doing what you believe is morally right -- why should I trust you, then?"

This might make sense, but it breaks down the models of pretty much all standard ethical theories, utilitarian and otherwise, which invariably treat humans as unified individuals.

Except for very narrow definitions of "standard," this is just incorrect. Plato, Hume, Kant, and John Stuart Mill all understood and wrote about the difference between what they thought of as the rational or refined will and the more emotional appetite. Likewise Maimonides, St. Augustine, Epictetus, and a 16th century Taoist scholar whose name I can look up for you if it's actually important. In fact, an enormous part of standard ethics deals with the divergence between what we say is right and what we actually do, and tries to identify ways to help us actually do what we say is right.

The blanket assertion that anything you do without being physically restrained is what you wanted to do under the circumstances is a creature of 20th century free-market economics. While it can be part of a self-consistent moral philosophy (e.g. Ayn Rand's Objectivism), it's hardly a litmus test for sound ethical thinking. On the contrary, we should be deeply suspicious of any moral theory that tells us that whatever we do must be what we wanted to do, because it conveniently justifies a set of actions that we (apparently) find quite easy to carry out. What is easy is not always right.

Was this intended as a reply to the parent of my comment?
I was replying both to you and to Vladimir_M, because both of you seemed to me to be accepting the premise that humans (however defined) must be unitary actors in order to be amenable to coherent ethical accounts.
Understood, but just to be clear, I was only accepting that premise for purposes of argument, saying that you don't need to resort to non-unitary models to phrase Roko's position. I don't accept that premise as a general rule. (Or at least I recognize that this model quickly runs into problems -- see my exchange with Vladimir_M.)
Thanks for the link; it's an interesting dialogue. May I suggest, by way of constructive criticism, that when someone challenges you to play Rationalist's Taboo, you respond with a formal definition that uses few or no pronouns, regardless of whether you think you have already defined your terms well? E.g.:

First-order preference (n): a desire for some state X that, if unopposed, usually leads to actions calculated or assumed to bring X about.

Second-order preference (n): a meta-preference; a desire to have some particular ranking R of first-order preferences that, if unopposed, usually leads to actions calculated or assumed to bring R about.

Akrasia (n): the state of having a first-order preference A that conflicts with a second-order preference B such that A is stronger, and usually wins.

Addiction (n): a subset of akrasia such that, if the person with akrasia were temporarily and forcibly prevented from acting on A, he/she would (1) be grateful AND (2) likely have a reduced preference for A in the future.
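Since these four definitions nest cleanly, they can be sketched as a toy model. This is purely illustrative -- all the names, fields, and numeric "strengths" below are my own hypothetical inventions, not anything from the exchange:

```python
from dataclasses import dataclass

@dataclass
class Preference:
    target: str      # the state X the person desires
    strength: float  # how strongly it actually drives behavior

@dataclass
class SecondOrderPreference:
    endorsed: Preference  # the first-order preference the person wants to win
    unwanted: Preference  # the conflicting preference A they wish were weaker

def has_akrasia(s: SecondOrderPreference) -> bool:
    # Akrasia: the unwanted first-order preference A is stronger than the
    # endorsed one, so A usually wins.
    return s.unwanted.strength > s.endorsed.strength

def is_addiction(s: SecondOrderPreference,
                 grateful_if_blocked: bool,
                 weaker_after_abstinence: bool) -> bool:
    # Addiction: akrasia where forced abstinence from A would leave the person
    # (1) grateful and (2) with a reduced future preference for A.
    return has_akrasia(s) and grateful_if_blocked and weaker_after_abstinence

smoker = SecondOrderPreference(
    endorsed=Preference("quit smoking", 0.4),
    unwanted=Preference("smoking", 0.9),
)
jogger = SecondOrderPreference(
    endorsed=Preference("jogging", 0.9),
    unwanted=Preference("skipping the run", 0.2),
)

print(has_akrasia(smoker))               # True
print(has_akrasia(jogger))               # False
print(is_addiction(smoker, True, True))  # True
```

Under this sketch, the frustrated smoker registers as akratic (and, given the two empirical tests, addicted), while the joyful jogger does not -- which is exactly the smoker/jogger distinction being argued over above.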
Thank you, that's a great formalism. Under your terminology, my position is that there is a difference between someone with addiction, vs. someone with consistent first/second-order preferences, and that this difference is so empirically significant as to justify having different terms, and that this difference is experimentally detectable (at least in hypothetical situations). Of course, your definitions define addiction by that experimental difference, and that's something I'd want to avoid. Vladimir_M's point, in turn, is that people with consistent first/second order preferences that are not socially acceptable try to persuade others it is actually a case of addiction in order to increase the net benefit of indulging that preference. I agree with him that this is often what's going on, but disagree that it can account for all cases, thereby necessitating the distinction of the separate category of akrasia (and addiction).
You are mostly right, except that I disagree that such simplifications are limited to 20th century economics. I had in mind formal ethical theories that I find discussed in modern analytical philosophy, and especially utilitarianism. I honestly don't see how utilitarianism can make sense unless humans are modeled as unified agents, each with a single utility function. From what I've seen, other popular formal consequentialist approaches make analogous assumptions, for which I don't see how they could be reconciled with dissolving the concept of humans as unified agents. But yes, considering the vast philosophical tradition you mention, my above statement definitely doesn't hold in general. However, to get back to the issue that started this discussion, I don't think that Aspergery logical consistency -- that, according to Roko, apparently makes for a good consequentialist ethicist -- would be a good guide through the works of the authors you mention!
Hm. We're a few levels down from the parent thread here, so please forgive me if I fail to focus on your main point. I'm aiming for it, but I might miss. It seems like you're saying that, in so far as we appear to observe a unified human psychology, it may just be because of myopia -- upon truly considering a moral dilemma in all its ugly ramifications, people would approve of and adopt different courses of action. That seems correct as far as it goes, but what if people's emotions and logic generally follow the same path? What if, upon reflection, all neurotypicals would agree that, ideally speaking, they would like to live in a world where people slit the throats of trolley-obstructors so that light rail would be safe enough to ride on, but each neurotypical individual also agrees that zie could never actually bring zerself to slit an innocent person's throat, because it would be too yucky? That still leaves us with the vast array of neuro-atypicals in our search for diversity, but then the question of whether humankind has a unified psychology is still interesting. Instead of the response being an obvious "no, we're diverse," the response becomes an investigation of how many atypicals there are, how different their opinions really are, and perhaps which ones are worth viewing as "healthy" enough to count. Let me just qualify that last remark. I believe there should be room for many different kinds of minds in our society, but that doesn't mean there's no such thing as mental illness. For example, a paranoid schizophrenic might have a different opinion about the trolley problem, but I'm not sure we should ask him -- maybe we should just offer him some antipsychotics and see if he calms down.
Tangent: The trolley problem actually seems like a relatively innocuous source of diversity. In terms of designing a world we would want to live in, I think there is pretty broad agreement that we want our trolleys not to run out of control. Yes, the principles behind the trolley controversy could end up leading to disagreement over something far more important, but... right now there are people with deep, powerful desires central to their overall happiness, the fulfillment of which other people find morally repugnant and sinful. That strikes me as in many ways a much bigger problem than the deontology/consequentialism battle.
Well, OK, let's go ahead and flesh that out. I read your Rorschach blot and the first thing that comes to mind is gay sex vs. Christian fundies. Want to run with that for a few minutes? How does it illustrate psychological diversity? Isn't it just an example of how different beliefs about reality lead to different moral opinions about specific actions? If you could get a Christian fundamentalist to imagine a world where Jesus was just a charismatic preacher and sexual orientation was caused by genes and hormones, wouldn't she say that gay sex was OK in that world? For that matter, if you could get an openly gay atheist to imagine a world where the New Testament as it has been traditionally interpreted really was the literal word of a God who for some inexplicable reason was so cool that whatever God's opinions were automatically became morally correct, wouldn't the gay atheist say that gay sex was sinful in that world? Where's the gap in human psychology? Feel free to pick a different example if you had something else in mind. :-)
I was ambiguous because, while gay sex and Christian fundamentalism do come to mind, so does the entire gamut of pleasurable activities that people object to as wrong or impure: sex with multiple partners, sex outside of marriage, polygamy, BDSM, homosexuality, paedophilic and ephebophilic fantasy, etc. And it isn't just Christian fundamentalists doing the condemning, either.

I don't know. While the justification given for the opposition to homosexuality is biblical, I'm not confident the given justification is the motivation behind the conservative Christian opposition. To me, at least, Haidt's concept of moral purity is what is really at work. And this helps explain the revulsion toward a wide range of sexual activities (which may or may not be discussed in the Bible) from people who may or may not have read the Bible.

In addition to the above, it seems to me that for many, even most, people, religion, morality, and sex are all tangled up in the same memetic mess, and that such people may not even have a proper map-territory conceptualization of the world. This entangled collection of memes may not be the direct output of their psychology -- but I'm not sure any value system is -- and it is certainly the case that their psychology is extremely amenable to this collection of memes. And it seems very plausible to me that some people have psychologies more amenable to and comfortable with these memes than others.

An interesting feature of these oppositions to desires is that they are, at least in part, cultural. It turns out you can turn down or even switch off the disgust response to at least some sexual behavior if you raise people right, teach them tolerance, and give them admirable television characters with these desires. I'm not sure the case is any different with disagreements in normative theory. Some minds are more amenable to consequentialism, others to deontology, others to virtue ethics; others are just confused. But there is no reason to think such minds begin…
Yes! This is very enlightening; thank you for your thoughtful response. I am convinced, for now. :-)
I am a bit confused, OTOH, about why non-ADHD people (without akrasia, a term I just learned here on this website) find such questions interesting at all. To me, no matter what "system of morals" you may have, it's mostly useless thinking, because what I actually do doesn't depend that much on what, in my self-awareness, I want to do.
So true. That's what akrasia is. But I'd be surprised if there were people who didn't experience that at least a little bit.

Interesting. This implies that there are actually two ways of interpreting such moral dilemmas: either as A) "what would you actually do in this situation", or B) "what would be the right thing to do in this situation, regardless of whether you'd actually be capable of doing it".

I've always interpreted the questions as being of type B, but the way you write suggests you're thinking of them as being type A. I wonder how much of the disagreement relating to these questions is caused by differing interpretations.

It's more complicated than that. Most people would say that there are imaginable situations where a certain course of action is right, but they'd be strongly tempted to act differently out of base motives. For example, if you ask a typical person whether it would be right to gain a large amount of money by some sort of cheating, assuming you know for sure there won't be any negative consequences, they'll immediately understand that the question is about what's normatively right, not how they'd be tempted to act. Some very sincere people would probably admit that they might yield to the temptation, even though they consider it wrong.

Now, imagine you're introduced to someone who had the opportunity to cheat a business partner for a million dollars with zero risk of repercussions, but flat-out refused to do so out of sheer moral fiber. You'll immediately perceive this person as trustworthy and desirable to deal with -- a man who acts according to high principles, not base passion and instinct. In contrast, you'd shun and despise him if you heard he'd acted otherwise.

However, let's now compare that with the extreme fat man problem (where you'd have to cut the fat man's throat to avert…

I'm not sure "warm and fuzzy" is the right term, but ... I would feel a certain respect, and of course update my probability that they will fail to take the correct action out of bias or akrasia -- and my probability that they will kill me. Would you be creeped out by someone who cheerfully admitted they would kill you if you turned evil (I mean mind-control-type evil)? Because in fiction, at least, that's treated as a good thing, but still creepy. (I think the creepiness comes from the fact that they can and will kill people, and there's the ever-present worry that they might mistake you for a risk.)
I can believe that a neurotypical person would be more likely to imagine themselves doing the actual killing, while someone on the AS would be more likely to stay with the abstract problem.

Many neurotypicals I have spoken to will take really extreme positions on the fat man trolley problem, saying that they wouldn't push the fat man off the bridge even if a million people were on the trolley.

Eh, as I've argued before on LW, there are utilitarian, AS-compatible justifications for such a position: specifically, that your heroic act shuffles around the risk profiles of various activities in unpredictable ways, thus limiting the ability of people to manage risks, leading them to waste significant resources (perhaps exceeding the amount that would otherwise save more than a million lives) returning to their preferred risk profile.

The key part:

By intervening to push someone onto the track, you suddenly and unpredictably shift around the causal structure associated with danger in the world, on top of saving a few lives. Now, people have to worry about more heroes drafting sacrificial lambs "like that one guy did a few months ago" and have to go to greater lengths to get the same level of risk.

In other words, all the "prediction difficulty" costs associated with randomly changing the "rules of the game" apply. Just as it's costly to make people…

It isn't just about being fat while being on a bridge over trolley tracks, of course. It might be a worse world if people generally believed they should take deadly action whenever they see a utilitarian win.

Much less likely? That would require that such drafting be more likely on bridges than elsewhere (how often do these train accidents happen?). Also, ex ante one is more likely to find oneself among the million saved than to be the one person sacrificed, so most everyone should agree to a policy that those in positions to offer incredible help be drafted.
The problem induced by pushing the fat guy off is that people don't know which zones now count as "sacrificial lamb" zones (because of the bizarreness of the deviation from social norms), except that bridges over densely-populated trolley tracks are one of them, so I think the resulting world meets this criterion.

But people are already choosing risk profiles that, under present social norms, cause them to die when near tracks with an errant trolley coming, so it's not clear why they'd make tradeoffs (giving up other things they value) for greater near-trolley safety, and thus not clear why they'd prefer this at all. In this case, the cost (borne by everyone in the area, not just people near tracks) is that they have to re-organize their lives around choosing routes that avoid sacrificial-lamb zones.

But -- by the scenario's stipulation -- people aren't currently choosing to bear the additional cost of being on the safer bridge rather than the dangerous track. (If they were, the scenario would involve millions crossing the bridge and few near the track.) What they are choosing is to bear the risk of death because of the convenience it affords. And because the option of pushing someone onto the track tells people, "Okay, you have to be a lot more risk-averse to get your current level of risk," they're forced to pay more for the same safety.
I was going to dispute your use of "flummoxed" as well but then I realized my position on normative ethics is basically an extended defense of moral dumbfoundedness and decided that I wouldn't be the best person to make that argument. I think anyone who is biting bullets and defending rational principles broadly applied is just more comfortable dropping intuitions (or holds them less strongly) and less comfortable with logical inconsistency (sound like anyone you know?). But I don't think that makes their claims about morality any truer than the dumbfounded. I disagree that the right answer to inconsistent intuitions is just deciding to pick some intuitions and ignore them.
Well you can't keep them all. You might adopt weakened versions of all of them, though.
You can keep all of them if you're okay saying that sometimes there are only immoral choices (or at least no moral ones) and that sometimes the action we ought to take is under-determined by our moral intuitions.
Yes, why should we assume that these difficult ethical conundrums have some sort of "right answer" at all? Why would asking about the "right choice" in trolley and similar problems necessarily have to have any more sense than asking about the "correct value" of 0^0?
That raises an obvious question: what do you actually do if you find yourself in a Sophie's choice, especially if the result of the null or default choice is more monstrous to you than the results of the other choices? Refusing to consider a class of decision theory problems is tantamount to precommitting to an unconsidered answer should one of them arise. Of course, in most cases, people actually do seem to consider horrific choices once they're actually faced with one; I therefore conclude that the popular response of refusing to make an analysis of such problems is more about signaling than anything else.
Well, the correct answer could be that I don't know what I would do -- and even if I knew that I would probably act in a certain way, it wouldn't be the outcome of any rational deliberation, but just a whimsical reflex from my brain overloaded with the stress of the situation. You'll probably agree that there are situations where this would be the only realistic answer. For example, suppose you were about to be shot in a minute and the executioner showed you two bullets and told you to choose which one will end up in your head, and also threatened to kill you in a more painful and gruesome way if you refuse to make your choice clear. What does any decision theory say about this situation? It's absurd to insist on a rational rule for decision-making here. Now of course, you can say that I chose an example where whatever the calculus, the numbers end up being equal, since the two options are identical in every relevant respect. But why should we believe that if only the options are sufficiently different, there must be a way to impose an ordering of desirability on them? Why wouldn't the "answer undefined" response be applicable in a much broader class of situations than just those where consequentialist calculations evaluate all options the same? What property of the universe or logic (or something else?) demands otherwise?
I agree: in some cases, one can't conclude which of two awful options is least bad (or one can conclude that the difference between them isn't likely to be worth the effort of investigating further, under the circumstances), and in that case, a random selection between such options is as good as any strategy. However, ISTM that most trolley problems don't fall into that category, and that a policy of refusing to consider them on principle is probably a signaling phenomenon (one doesn't want to appear to endorse killing the innocent, even in such a farfetched hypothetical).
That, however, is more likely to manifest itself in a decisive anti-utilitarian answer, not feigning indecisiveness. People who want to signal that they won't endorse killing the innocent will say that it's wrong to actively kill someone even if it saves other lives, so they wouldn't push the fat man etc. -- and usually this is an honest statement of how they would really act in practice. Expressions of moral intuitions that are loaded with signaling value are usually felt sincerely, and acted upon readily. Similarly, people who refuse to endorse any alternative -- who are, I believe, a small minority in the general public -- sincerely view the situation as akin to the bullet choice. It might be ultimately due to signaling, but note that among ordinary folks, this sends a very bad signal. It's not at all good to be perceived as morally indecisive and lacking in principles. That said, I'd say your theory is applicable to enthusiastic consequentialists too, and actually more so. I have the impression that many people who bite moral bullets based on various consequentialist theories do it for signaling value. They want to signal their rationality, adherence to logic rather than emotion, bravery in face of hostile reactions from people whose moral intuitions get violated, etc. In fact, I'd venture to say that the signaling here is more transparent, since unlike the never-kill-the-innocent folks, they likely wouldn't be ready to follow what they say in practice [*].

[*] This doesn't contradict what I wrote above (that signal-loaded moral statements are typically acted upon readily), because these people are signaling to a very different audience than ordinary folks, to whom that statement applies.
IAWYC, except that being perceived as indecisive is only a downside when trying to appear high-status within a group. Signaling moral conflict and indecision among peers or superiors might not get you admired, but it's a safe choice when the options are ugly (until there's a group consensus and your conformity is sought). But yes, again, there's signaling in both directions, and that's all it amounts to for most of us talking about trolley. For some people (e.g. heads of state), though, these decisions actually have to be made now and then; I'd prefer that some systematic decision criteria exist for those cases; and I find it interesting to talk about them in the abstract.
... so which is less immoral?
I wonder if the higher rate of consequentialists here relative to the general population or the population of ethicists might be explained solely by differing rates of AS plus self-selecting consequentialists here because they have found kindred hearts. Have we ever polled for demographics on neurotypicality?
My experience from several LW meetups indicates that a significant number of LW readers are probably somewhere on the Asperger's spectrum (even if on the very mild/high-functioning part of it), including me. [Though it is hard to quantify these things: as far as I can see there is some disagreement about what the core traits of AS are; for example, do you have to have coordination problems and sensory irritability, or is it enough if you are highly logical and can't read body language or navigate social situations?]
This is an interesting thread. Admittedly, I've often thought to myself when reading LW posts: "this post was clearly written by someone with AS". If people with AS are drawn to sites like this, maybe that, in part, explains why there seem to be many more men here than women. I wonder if the male:female LW ratio is similar to the male:female AS ratio in the general population.
Autism in general affects four times as many men as women in the general population; but I've noticed that a surprisingly high proportion of the autistic "public figures" - given that ratio - are women. Temple Grandin, for instance, may be the most famous person with autism around; and a majority of the autism bloggers I've run across are female. I don't know why this is.

Autism in general affects four times as many men as women in the general population;

Does this statistic refer only to severe cases of autism that are likely to be noticed and diagnosed whenever they occur, or also to the milder, high-functioning autism spectrum disorders? Because if the latter, I would expect that mildly autistic men are much more likely to be noticed as weird and dysfunctional than women, so this might account for at least a part of the discrepancy in the rate of diagnosis.

The explanation for the greater public prominence (and presumably social acumen) of female autistics is probably similar. In most situations, it's probably harder for autistic men than women to avoid coming off as creepy or ridiculous.

Are the words "women" and "men" reversed in your opening sentence?
Yes, thank you, fixing that now.
Does "autism bloggers" mean "people who blog specifically about autism"? If so, it might be instructive to check how many bloggers in other subjects also happen to have autism. It might be difficult to verify, but the blogosphere is large enough to dig up a usefully-sized sample and disentangle to some degree the autism-blogging link.
Yes, that's what I mean.
This seems like exactly the sort of attitude that would disappear in any reasonable preference extrapolation algorithm.
I don't have evidence for that proposition, but I wanted to (shamelessly) point out that attraction to decompartmentalization can be phrased as a willingness to go from Level 1 to Level 2 in my hierarchy. That is, to go from understanding domains independently, to checking for global consistency and multi-directional implication across them.
Yet tons of non-AS people are consequentialists, so maybe AS people just happen to have a head start in reaching a conclusion that informed people eventually reach anyway.
Yet most AS people that are studied are fairly young, so maybe it's fully explainable by the hypothesis that consequentialism is an ethical position held by immature people.

I see the difference between this post and the psychological unity of mankind one as akin to two ships passing in the night - not talking about the same thing. In general the arguments do not contradict each other.

I would like to make a few additions:

1) We cannot compare the speed of change in dogs (or pigeons) with that in wild populations. Mongrel and feral dogs are under selection in their normal environment and without control of their breeding, and therefore they resemble one another much, much more than do purebred animals. The tame foxes, if freed, would return fairly quickly to being foxy. Humans, on the other hand, have continuously changed the environment in which they live (for, say, 50,000 years). Therefore the selective pressure is not static. So it is not surprising that new genes can arise and flow through populations. Dogs are not relevant here.

2) Genetics is more complex than algebra. In many cases there is an advantage to having two different alleles and both alleles in double dose are disadvantageous. Genes are duplicated (as a mutation) and then one allele can be conserved while another evolves under selective pressure. There are genes that control the use of groups of... (read more)

I don't understand. How are they not different? EY's post said that the time since leaving the ancestral environment is too short to allow significant divergence, including for psyche-related genes (except across genders). This book says that certain selective pressures do permit variation to happen much faster, and there is evidence that this affects the psyche. This contradicts the basis for EY's claiming that there can't have been much divergence. Also, regarding your 1), humans can go feral if they go into the wild before significant assimilation into their birth culture.
I think all this talk is rather non-rigorous as of now. How much exactly is "much" divergence, and how great a part of the disagreements here is semantic ambiguity and smuggled-in meanings, and how great a part is different beliefs on matters of fact? I for one agree with JanetK that basically both the OP and Eliezer's notion of psychological unity hold water. SilasBarta, you think that the OP contradicts psychic unity, so you have to mean a different thing by unity than what I mean. When I think about psychic unity I visualize a cosmopolitan scale that includes rocks, lizards and humans. The perceived degree of cross-cultural and individual differences should be weighed against our being adapted to notice very fine differences between humans, I think (and that implies that one's assessment of divergence and unity is definitely not binary but is on a continuum). Also, as has been said many times, current science on human genetic variance is muddled and politically charged. This situation will hopefully be improved with mass gene sequencing, but I think that as of now many of our beliefs (mine for sure) rely greatly on personal impressions and musings, especially when it comes to variance's implications for moral philosophy.
Arguably, while conditions have changed for humans massively in the recent past, the same could be said of many domesticated animals, or at least dogs (as a fun exercise, check out how often the primary use and role of dogs has changed in, say, Anatolia over recorded history). And more importantly, all this time we humans have basically been self-domesticating ourselves. So dogs are relevant.

Sounds neat! Thanks for going to the effort to summarize it!

However, now I'm worried. Your summary indicates that Cochran and Harpending are saying a lot of stuff that suggests a genetic basis for intelligence. How long until their careers are over?

I'm going to nitpick a couple points here.

"There is considerable psychological variance between dog breeds: in 1982-2006, there were 1,110 dog attacks in the US that were attributable to pit bull terriers, but only one attributable to Border collies"

Though pit bull terriers are indeed much more dangerous than collies, it may not be entirely behavioral genetics. Unlike collies, pits are often trained to be aggressive. Pits are also simply much stronger and more resistant to pain than collies, so their attacks are more difficult to defend against, and thus more likely to cause injury, and thus more likely to be reported.

"A larger population means there's more genetic variance: mutations that had previously occurred every 10,000 years or so were now showing up every 400 years. "

True, but a larger population also means that "genetic sweeps" would take longer, especially given our relatively long life spans. If agricultural humans evolved more rapidly I'd say it was more likely due to new selection pressures that their hunter-gatherer ancestors didn't have.

Another point about the (IMO, dubious) "pit bulls are more dangerous" claim. It's possible that young/aggressive/defensive male humans more often purchase dog breeds that look aggressive (or have an aggressive reputation) and young/aggressive/defensive male humans more often mistreat their dogs, leaving them chained and untrained. Similarly, dog breeds that look aggressive (or have an aggressive reputation) may elicit different, more dangerous, patterns of behavior (fear, fear-based-defensiveness, et cetera) than "Lassie dogs".
But how did these dogs get the aggressive reputation in the first place?

And really, a stereotype leads to a 1110:1 ratio? Mighty powerful things, those stereotypes.

Yes, they are. See:
How did they get an aggressive reputation in the first place? Perhaps, by fighting other dogs publicly, with advertising for the fights focusing on their aggressiveness.
It only takes longer by a logarithmic factor, so overall, new genes are picked up at a higher rate.
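The "logarithmic factor" here can be made concrete with the standard deterministic approximation for a selective sweep, t ≈ (2/s)·ln(2N) generations. This is a textbook back-of-the-envelope, not a formula from the book, and the selection coefficient s = 0.05 below is purely illustrative:

```python
from math import log

def sweep_time(N, s):
    """Rough number of generations for a beneficial allele to sweep to
    fixation, via the deterministic approximation t ~ (2/s) * ln(2N)."""
    return (2 / s) * log(2 * N)

# A 100-fold larger population slows each individual sweep only modestly,
# while producing 100 times as many new mutations per generation.
for N in (10_000, 1_000_000):
    print(f"N = {N:>9,}: ~{sweep_time(N, s=0.05):.0f} generations")
```

Each sweep takes somewhat longer in a big population, but the supply of new beneficial mutations grows linearly with N, so the net rate of adaptation goes up, which is the point being made above.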

The relevant uses on LW of the "psychological unity of humankind" concept were:

  1. As evidence of common human axiology, i.e. that there are few truly persistent moral disagreements once some kind of "idealization" like volition extrapolation is applied

  2. As the explanation for why it is hard for us to imagine non-human minds, since all human minds are so similar

As for (1), I think that it is refuted by an argument of Greene and Haidt: human moral architecture is universal in form, but its function is to absorb the local morality in youth, i.e. morality is universal in form but local in content.

As for (2), the cognitive differences that we do in fact see in people around the world are clearly not big enough to make a well-traveled person unsurprised by the concept of a paperclip maximizer.

Well put! We might want to come up with another name for (2). Humans are closer to each other in mindspace than they are to any alien mind, but it does not follow that, close up, all humans have the exact same psychology. There may be more than zoom-degree involved in the difference.

I checked the Wiki here:

"Let's say that you have a complex adaptation with six interdependent parts, and that each of the six genes is independently at ten percent frequency in the population. The chance of assembling a whole working adaptation is literally a million to one; and the average fitness of the genes is tiny, and they will not increase in frequency. "
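The quoted "million to one" figure is just the independence assumption multiplied out, as a quick sanity check shows:

```python
p = 0.1             # each of the six genes at ten percent frequency
parts = 6
combo = p ** parts  # probability all six co-occur, assuming independence
print(combo)        # about one in a million
```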

Right - but look at the premise. Genes have linkage to other genes on the same chromosome - and so their frequencies may be far from independent. The existence of this possibility actually creates a selection pressure for interdependent genes that contribute to an adaptation to migrate towards each other on chromosomes - so they have more chance of being inherited together.

Two other factors:

1) Population sub-structure matters. Suppose a population of one million is divided into mating bands with 30 individuals. Small bands tend to lose diversity, so some bands would have some of the minor alleles at higher frequency. Now suppose band X has minor alleles A1, A2, and A3 at high frequency while band Y has minor alleles A4, A5, and A6 at high frequency. The two bands meet and party. The result is kids with all 6 minor alleles. Those kids have a big fitness advantage and those minor allele frequencies are significantly boosted in those bands. The high local concentration of those alleles means even more kids with all 6 alleles are born, further increasing their frequency. (If individuals were equally likely to mate with anyone in the population, then local concentrations would be diluted in one generation and there would be no effective selection. But individuals are far more likely to mate with related nearby bands, so high local concentrations of the minor alleles are maintained while the minor alleles slowly become the major alleles.)

2) Gene variants tend to have additive effects. Also, most genes affect multiple traits simultaneously. So the all-or-nothing scenario given above would be rare. More likely you would have a diversity of environmental niches. In some of those niches the minor alleles would provide benefit due to one of their affected traits becoming more important. The frequency of that minor allele would locally rise (while its frequency in the total population would remain low). E.g., a minor allele might provide protection against a specific pathogen. So there might be local environments where the probability of 6 minor alleles combining could be much higher than would occur in one large population in a uniform environment mating randomly.
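The band argument in 1) can be illustrated with a toy calculation; the frequencies below are made up purely for illustration:

```python
parts = 6
global_freq = 0.1  # allele frequency in the whole population
local_freq = 0.5   # frequency inside a small band where drift raised it

p_global = global_freq ** parts  # all six co-occurring under random mating
p_local = local_freq ** parts    # all six co-occurring inside the band

print(f"global: {p_global:.1e}, local: {p_local:.1e}, "
      f"boost: ~{p_local / p_global:,.0f}x")
```

Nothing magical is going on: raising each frequency fivefold raises the six-way combination probability by 5^6, which is why band-level concentration can produce combinations that random mating in the full population would almost never assemble.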

This is a parent for comments about Q&A with the authors of "The 10,000 Year Explosion".

If you have a question for either Harpending or Cochran, please post the question as a response there. If you'd like to talk about the Q&A, this is the place to do it.

It occurred to me while I was arranging "infrastructure comments" for Q&A with Harpending and Cochran that my unilateral action to set this up might be seen by the community as inappropriate or poorly executed for various reasons. If you'd like to express displeasure with my efforts please downvote this comment and send me a PM by clicking on my name and then hitting the "Send message" button. If I receive private messages by this method I will attempt to summarize their contents and post the lessons learned in this same kibitzing area. Hopefully, before criticizing, you will first ask a question of the authors, and then provide feedback on the process by which this was set up :-)
Upvoted to express pleasure for your efforts, which were appropriate and well executed. :)
Moved this here from the Q&A thread. I don't think your example supports your claim. It certainly appears that much more complex adaptations have been successfully bred into dogs. Kaj gave the examples of pit bulls and border collies. I grew up with border collies as pets and it is quite easy to observe their selected herding behaviour compared to other breeds - they want to herd everything, complex behaviour that is not typically observed in other breeds. Casual observation strongly suggests much more complex adaptations than size in differing dog breeds and this appears to be supported by statistics like those Kaj cites. The superior sense of smell and tracking ability of bloodhounds is another example that springs to mind.
Your example of collie herding behavior is cool; I'm not sure what to make of that. Do wolves herd their pups? Or are there other plausible precedents? How complicated is collie "herding" behavior? As to smell and tracking ability in bloodhounds: given that these same abilities occur in wolves (though to a lesser extent?), my guess would be that these adaptations are relatively simple to acquire, if you have a wolf's genome as your starting point. Designing smell for the first time would be complicated, but designing a better sense of smell from a wolf's sense of smell might just require sending more brain cells to the "process smells" brain center, or building more of the kinds of olfactory receptors dogs already have, or some other simple shift. (OTOH, if bloodhounds are sensitive to many compounds that wolves aren't sensitive to, or if they exhibit many strategies in tracking that wolves don't exhibit, I'd be wrong and surprised. Let me know if that's so.)
IIRC, herding is explicitly mentioned in the book as a behavior that wolves do and which has been strengthened by selection in some dog breeds.
Right, wolves pack-hunt which involves pretty complex management of prey herds including something like a "theory of prey mind" to predict what the prey will do. There is a lot known about cape dog hunting because they are in fairly open country and can be observed. Not only do they predict where the prey herd will go, they coordinate and signal to each other with postures during the chase. It is absolutely beautiful to watch, like stop-action ballet. HCH
It can get quite complicated. That video has some post-production trickery but supposedly the majority of the herding is real. Sheep herding is sufficiently complicated that there was an English TV show devoted to it for many years called One Man and His Dog. I believe border collies dominate sheepdog trials but there are other herding breeds.
I think he was referring to instinctive herding behaviors, not trained ones.
It's worth noting that size and shape differences are unusually easy to get from wolves, something to do with unusually flexible genes for skeletal morphology in the womb IIRC.

It seems to be a fairly trivial observation that all adult men and all women do not share the same underlying psychological machinery - because machinery malfunctions - during development, because of bad genes, and as a result of trauma and other pathology - so there are quite a few people who are broken and have missing pieces.

There are, of course, also sex and age differences - if you consider all humans.

I argued against the premise of the "The Psychological Unity of Humankind" essay long ago here:

Indeed. The "psychic unity of mankind" is probably true to some degree on the level of different cultures, but far less so on the level of individuals. (I mean, we even have people who've had half their brain removed and are seemingly of normal intelligence, and that's not even going to the weird neuroscience cases.)

Great post!

The rise in Ashkenazi intelligence seems to be a combination of interbreeding and a history of being primarily in cognitively challenging occupations.

Or of having very high selection for being able to predict when your neighbors are going to try to kill you again. Or of being the only major group of people not having high selection pressure for skill at war for the past 2000 years, letting traits that give other advantages spread more rapidly.

(I don't think interbreeding can raise a population's intelligence. It can just keep it from dissipating.)

I think it can, e.g. if it takes an unusual open-mindedness for a local to marry into a despised and/or feared subgroup.

Kaj, regarding dogs: the selection pressure of selective breeding is abnormally high compared to the more stochastic effects of more natural selection pressure. Also, 15,000 years ~= 3000 dog generations, 3000 human generations ~= 60,000 years

"More specifically, they suggest that this could have been caused by interbreeding between "modern" humans and Neanderthals."

Probably bunk, IMO. An entertaining story, but lacking supporting evidence.

Actually, there is genetic evidence now. There are genes lacked by Africans that are shared by Neanderthals and non-Africans. Interbreeding seems the most likely explanation for this pattern.
I wasn't doubtful about interbreeding - that is all over the news. I was doubtful about interbreeding being the cause of the cultural explosion. Like I said, no evidence. In fact, contrary evidence, since Neanderthals were largely a European phenomenon.
Neanderthals were also in the Near East. You have a point about the cultural explosion, though. Africans don't seem to be less cultural than non-Africans, despite the fact that they don't seem to have any links to Neanderthals. It occurs to me that this lack of a link, after all this time, exemplifies how slow gene sweep is in a population as numerous, long-lived, and spread out as humanity.
Where on Earth have you been for the last couple of days? : ] Hiding in a Croatian cave? That being said, we currently have no reason to believe that this interbreeding had any phenotypic effects on the human lineage.
I am not aware of any evidence that "the "big bang" in cultural evolution that occured about 30,000 to 40,000 years ago" was caused by interbreeding with Neanderthals. That is probably bunk, IMHO. An entertaining story, but lacking in supporting evidence.

Superb review - the book has been lingering on my stack for a while, now I feel I've read 1/2 of it.

Damn, looks like an interesting book! Is it entertainingly written? (Looking for a belated mother's day gift.)

I'd say that, yeah - some of the subheadings are pop culture references, and there are occasional jokes scattered among the text. Like, when discussing the origin of the protolanguage that Indo-European languages descended from:

This is a really interesting subject with so many possible theses.

To state the obvious, we are all intellectually very different. And I don't think the difference between now and 40,000 years ago has to be all that significant. The less intelligent half of the population is fully human, obviously, but if everybody were at that level of intelligence, we would quite clearly still be making simple stone tools and living in caves. The difference between now and 40,000 years ago is therefore less than the natural variation in the population we see today. So I d... (read more)

If we're looking to find out if humans vary significantly in their psychological phenotypes, why not compare these phenotypes directly rather than appealing to highly shaky evolutionary speculations about genotypes?

(Sure, environmental variation also contributes to phenotypic variation, but we have no reason to believe that the current level of human psychological variation is masked by environmental factors - especially since right now environmental variation is probably at its peak in human history)

I'm sure the NIH would love to fund research comparing cognitive phenotypes of different races! Just remember to budget for nails and a cross in your proposal.

From Science, March 12 2010, p. 1316:

'Elsevier told Charlton [editor of a controversial Elsevier non-peer-reviewed journal that published AIDS denial articles] on 22 January that Medical Hypothesis would have to become a peer-reviewed journal. Potentially controversial papers should receive careful scrutiny, the publisher said, and some topics - including "hypotheses that could be interpreted as supporting racism" - should be off-limits.'


We already knew individuals vary in intelligence and personality, so those examples don't seem to me to provide any new evidence against psychological unity in the sense we've mostly been using it. Does the book give other examples of human psychological differences that might have arisen in the recent past?


Great post. However I would contend that psychological unity of mankind seems more like a minority belief on LW.

Eliezer's writings about FAI and CEV, and most discussion about them here, assume that the psychological unity of mankind is great enough that you can build one FAI that tries to optimize human experience WRT one value system, and this will be (in some sense that I don't understand) the "right thing to do".
For idealized human preference.
I don't see how that makes a difference WRT the required degree of psychological unity. Just talking about "idealized human preference" assumes either psychological unity, or moral realism.

It would have been entirely possible that the anatomically modern humans interbred with Neanderthals to some degree, the Neanderthals being a source of additional genetic variance that the modern humans could have benefited from.

Can someone square this hypothesis with my understanding that Africa is the most genetically diverse place on Earth? It seems like a cultural explosion has to have at least as much to do with selection pressure as genetic diversity, or else the beneficial genes present in the various African clusters would have spread.

Non-Africans did interbreed with Neanderthals, as someone posted a few days ago. The recent article on the subject concluded that only Africans lack the Neanderthal genes. Most genes present in Africa didn't have a chance to spread outside of Africa, because the people carrying the genes didn't spread outside of Africa. (There's a bit of chicken-and-egg to that answer.) The Sahara is a major barrier today. I don't know if it existed 20,000 years ago. I don't know if it's known whether the diversity of genes in Africa developed historically, or recently. It's possible that the abundance of human parasites and diseases in Africa causes faster adaptation in response to them. The longer species X has been present in an area, or the higher the density of species X there, the better other species there are adapted to species X, the harder life is for species X, the faster species X evolves.
According to Wikipedia, the Sahara is about as dry now as it was 13,000 years ago, and has been around in some form or another for millions.

Speaking about the "big bang" here, one could extend this parallel to a lesser-known fact: our human genetic diversity is increased every day, every hour. Our "genetic universe" appears to be expanding at an increasing rate. Possibly the cultural one is too, despite the globalization and global culture everybody talks about.

That could also be another major point of this interesting book you have found somewhere in the bookspace.

I request that anupriya28 be excluded from this web site. He/she/it is touting web sites by making fake postings all over the Internet. I guess this is the next thing in spam after bots: fake postings written by people in sweatshops somewhere in the Third World.

What are they paying you, anupriya28? Is pissing in everyone else's soup the only way you can feed yourself?

Thanks, but it's sufficient to click the "report" link. (Also more reliable, since I can't read all comments on LW, and could miss the one you've made, but I do read all reported comments now.)
I reported all of his comments, but I thought it worth giving public notice as well.
Maybe, though the way you phrased it doesn't sound optimal (as if you expected to convince the spammer, who is probably not a human), and there was little point in substantiating the claim that this is spam, an out-of-context external link to a trash commercial resource in spammer's comment was sufficient evidence.

I'm not a scientist, so I'm looking at the subjects here from a different angle. I've read the Harpending/Cochran book and I've read the book "Born on a Blue Day" by Daniel Tammet, on his Asperger's Syndrome. I wrote an article offering the notion that perhaps we are all somewhere on the autism disorder spectrum: Best wishes, Ron Pavellas

I'm not sure that statement is really meaningful in a nontrivial way. If we just consider autism, then there's some opposite end of that spectrum; of course everyone's somewhere on it, but you would expect most people to be at 0, or as near as makes no difference.

This seems a good time to point out that actually, there's now pretty good evidence that that spectrum does not end at neurotypicality, but continues past it to an actual "opposite" of autism - schizophrenia. Assuming this is correct, everyone is indeed somewhere on that spectrum!