"Individual organisms are best thought of as adaptation-executers rather than as fitness-maximizers."
    —John Tooby and Leda Cosmides, The Psychological Foundations of Culture.

    Fifty thousand years ago, the taste buds of Homo sapiens directed their bearers to the scarcest, most critical food resources—sugar and fat. Calories, in a word. Today, the context of a taste bud's function has changed, but the taste buds themselves have not. Calories, far from being scarce (in First World countries), are actively harmful. Micronutrients that were reliably abundant in leaves and nuts are absent from bread, but our taste buds don't complain. A scoop of ice cream is a superstimulus, containing more sugar, fat, and salt than anything in the ancestral environment.

    No human being with the deliberate goal of maximizing their alleles' inclusive genetic fitness would ever eat a cookie unless they were starving. But individual organisms are best thought of as adaptation-executers, not fitness-maximizers.

    A toaster, though its designer intended it to make toast, does not bear within it the intelligence of the designer—it won't automatically redesign and reshape itself if you try to cram in an entire loaf of bread. A Phillips-head screwdriver won't reconform itself to a flat-head screw. We created these tools, but they exist independently of us, and they continue independently of us.

    The atoms of a screwdriver don't have tiny little XML tags inside describing their "objective" purpose. The designer had something in mind, yes, but that's not the same as what happens in the real world. If you forgot that the designer is a separate entity from the designed thing, you might think, "The purpose of the screwdriver is to drive screws"—as though this were an explicit property of the screwdriver itself, rather than a property of the designer's state of mind. You might be surprised that the screwdriver didn't reconfigure itself to the flat-head screw, since, after all, the screwdriver's purpose is to turn screws.

    The cause of the screwdriver's existence is the designer's mind, which imagined an imaginary screw, and imagined an imaginary handle turning. The actual operation of the screwdriver, its actual fit to an actual screw head, cannot be the objective cause of the screwdriver's existence: The future cannot cause the past. But the designer's brain, as an actually existent thing within the past, can indeed be the cause of the screwdriver.

    The consequence of the screwdriver's existence may not correspond to the imaginary consequences in the designer's mind. The screwdriver blade could slip and cut the user's hand.

    And the meaning of the screwdriver—why, that's something that exists in the mind of a user, not in tiny little labels on screwdriver atoms. The designer may intend it to turn screws. A murderer may buy it to use as a weapon. And then accidentally drop it, to be picked up by a child, who uses it as a chisel.

    So the screwdriver's cause, and its shape, and its consequence, and its various meanings, are all different things; and only one of these things is found within the screwdriver itself.

    Where do taste buds come from? Not from an intelligent designer visualizing their consequences, but from a frozen history of ancestry: Adam liked sugar and ate an apple and reproduced, Barbara liked sugar and ate an apple and reproduced, Charlie liked sugar and ate an apple and reproduced, and 2763 generations later, the allele became fixed in the population. For convenience of thought, we sometimes compress this giant history and say: "Evolution did it." But it's not a quick, local event like a human designer visualizing a screwdriver. This is the objective cause of a taste bud.
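    For a sense of how such a "frozen history" accumulates, here is a toy, deterministic selection model. The parameters are hypothetical and chosen purely for illustration; real population genetics adds drift, dominance, and much else:

```python
def generations_to_near_fixation(start_freq=0.001, s=0.01, threshold=0.999):
    """Toy haploid selection model: each generation, the allele's
    frequency is re-weighted by its relative fitness advantage s.
    All numbers are illustrative, not empirical."""
    freq = start_freq
    generations = 0
    while freq < threshold:
        freq = freq * (1 + s) / (1 + s * freq)
        generations += 1
    return generations

print(generations_to_near_fixation())
```

    Under these toy parameters, a 1% fitness edge carries the allele from 0.1% to 99.9% frequency in roughly 1,400 generations: the same order of magnitude as the compressed "Adam, Barbara, Charlie..." history above.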

    What is the objective shape of a taste bud? Technically, it's a molecular sensor connected to reinforcement circuitry. This adds another level of indirection, because the taste bud isn't directly acquiring food. It's influencing the organism's mind, making the organism want to eat foods that are similar to the food just eaten.

    What is the objective consequence of a taste bud? In a modern First World human, it plays out in multiple chains of causality: from the desire to eat more chocolate, to the plan to eat more chocolate, to eating chocolate, to getting fat, to getting fewer dates, to reproducing less successfully. This consequence is directly opposite the key regularity in the long chain of ancestral successes which caused the taste bud's shape. But, since overeating has only recently become a problem, no significant evolution (compressed regularity of ancestry) has further influenced the taste bud's shape.

    What is the meaning of eating chocolate? That's between you and your moral philosophy. Personally, I think chocolate tastes good, but I wish it were less harmful; acceptable solutions would include redesigning the chocolate or redesigning my biochemistry.

    Smushing several of the concepts together, you could sort-of-say, "Modern humans do today what would have propagated our genes in a hunter-gatherer society, whether or not it helps our genes in a modern society." But this still isn't quite right, because we're not actually asking ourselves which behaviors would maximize our ancestors' inclusive fitness. And many of our activities today have no ancestral analogue. In the hunter-gatherer society there wasn't any such thing as chocolate.

    So it's better to view our taste buds as an adaptation fitted to ancestral conditions that included near-starvation and apples and roast rabbit, which modern humans execute in a new context that includes cheap chocolate and constant bombardment by advertisements.

    Therefore it is said: Individual organisms are best thought of as adaptation-executers, not fitness-maximizers.


    Would this also explain why the use of birth control is so popular?

    The atoms of a screwdriver don't have tiny little XML tags inside describing their "objective" purpose.

    Not yet, but those atoms probably will be tagged in XML with the designer's intent fairly soon. Also the user manual, credits, bill of materials and sourcing, recycling instructions, links to users groups and issue repositories, etc., etc. It obviously doesn't change your argument, but I do wonder how our cognitive biases will be affected when everything is tagged with intent and history, crosslinked and searchable. I guess we'll find out soon enough.

    A long time ago I read a newspaper article which claimed that a Harvard psychological research project had women chew up chocolate and spit it out, while looking in a mirror and connected to some sort of electrodes. They claimed that after that the women didn't like chocolate much.

    I tried it without the electrodes. I got a 2 pound bag of M&Ms. I usually didn't buy M&Ms because no matter how many I got they'd be gone in a couple of days. I started chewing them and spitting them out. Every now and then I'd rinse out my mouth with water and the flavor would be much more intense after that. I got all the wonderful taste of the M&Ms but I didn't swallow.

    I did that for 15 minutes a day for 3 days. After that I didn't much like chocolate, and it took more than a year before I gradually started eating it again.

    I think the esthetic pleasure of chocolate must have a strong digestive component.

    Most of our taste buds are actually in the part of the tongue that food only reaches after swallowing.

    I'd hazard a guess that this is also where most of the positive reinforcement circuitry eventually happens, but that might be inferring too much based on what I know. I wish I had a psychoanatomy textbook handy. It might also be that the negative reinforcement circuitry happens mostly on the pre-swallow taste buds, which would handily explain your temporary aversion to chocolate -and- the "taste test" phenomenon wherein humans taste something once and, prior to swallowing, proclaim a permanent dislike of that flavor.

    A caution: anyone who reads this comment should not take either J_Thomas's hypothesis or mine as actual evidence. I provided one to illustrate just how reasonable the exact opposite of what he said sounded, i.e., that nothing about digestion provides reinforcement.

    I think the esthetic pleasure of chocolate must have a strong digestive component.

    Seth Roberts' diet was really about this insight.

    https://en.wikipedia.org/wiki/The_Shangri-La_Diet

    Another possibility is that there's something about chewing things and spitting them out that tends to make them less appealing. (E.g., the whole thing looks and feels kinda gross; or you associate spitting things out with finding them unpleasant -- normally if you spit something out after starting to eat it it's because it tastes unpleasant or contains unpleasant gristle or something like that.)

    Who can guarantee that chocolate won’t become a superfood in the near future?

    So redesign the human taste system to measure how much of each nutrient you have and how much you need, including micronutrients formerly reliably common in the ancestral environment, and macronutrients formerly reliably scarce. Then it will function fine even after civilization collapses. Evolutions are stupid.

    I think the esthetic pleasure of chocolate must have a strong digestive component.

    Seth Roberts would agree with you. I don't think he's written about that particular experiment, but it confirms his basic argument on flavor-calorie association.

    The atoms of a screwdriver don't have tiny little XML tags inside describing their "objective" purpose. The designer had something in mind, yes, but that's not the same as what happens in the real world. If you forgot that the designer is a separate entity from the designed thing, you might think, "The purpose of the screwdriver is to drive screws" - as though this were an explicit property of the screwdriver itself, rather than a property of the designer's state of mind. You might be surprised that the screwdriver didn't reconfigure itself to the flat-head screw, since, after all, the screwdriver's purpose is to turn screws.

    This is the distinction Daniel Dennett makes between the intentional stance and the design stance. I consider it a useful one. He also distinguishes the physical stance, which you touch on.

    It turns out that much chocolate is produced with exploited child slave labor (also here, a more business-friendly article). That is a new meaning of eating chocolate, very sad for me since I love the stuff. I'm trying to transition to fair trade products.

    Re: "Individual organisms are best thought of as adaptation-executers rather than as fitness-maximizers".

    It's a bit like saying Deep Blue is an instruction executor, not an expected chess position utility maximizer.

    The statement muddles up the "why" and "how" levels of explanation.

    Executing instructions is how chess programs go about maximizing expected chess position utility.

    Of course organisms cannot necessarily maximise their fitnesses - rather they attempt to maximise their expected fitness, just like other expected utility maximisers.

    Tooby and Cosmides go on to argue the even more confused thesis:

    "[Goals such as "maximize your fitness" or "have as many offspring as possible"] are probably impossible to instantiate in any computational system."

    Re: "Individual organisms are best thought of as adaptation-executers rather than as fitness-maximizers".

    It's a bit like saying Deep Blue is an instruction executor, not an expected chess position utility maximizer.

    Not really. Deep Blue's programming is so directly tied to winning chess that maximizing the value of its position is definitely what it "intends". It actually "thinks about" how well it's doing in this regard.

    Living things, on the other hand, are far from explicit fitness maximizers. Evolution has given them behaviours that, in most natural circumstances, are fairly good at helping their genes. But in unusual circumstances they may well do things that are totally useless.

    Humans today, for example, totally fail to maximize their fitness, e.g. by choosing to have just a small family and using contraception. We're in an unusual situation - evolution knew nothing about condoms.

    Re: Living things, on the other hand, are far from explicit fitness maximizers

    Thus the point about organisms maximising their expected fitness. Organisms really do maximise their expected fitness - just like all other expected fitness maximisers. It's just that their expectations may not be a good match for reality.

    That is true even of Deep Blue. Its chess simulation is not the same as the real world of chess. It is living in the environment it was "designed" for - but it is resource-limited, and its program is sub-optimal. So its expectations too may be wrong. It can still lose.

    As far as I can tell, the idea that organisms maximise their actual fitness is a ridiculous straw man erected by Tooby and Cosmides for nefarious rhetorical purposes of their own. Nobody ever actually thought that.

    What about the idea that organisms are maximising something different - say expected happiness - rather than expected fitness, and these days the two can be divorced - e.g. by drugs? Again, much the same is equally true of Deep Blue - all expected fitness maximisers represent their expected fitness internally by some representation of it, and then maximise that representation.

    Organisms really are well thought of as maximising their expected fitness - under the limited resource constraints. They are, after all the product of a gigantic optimisation process whose utility function favours effective expected fitness maximisers. It's just that sometimes the expectations of the organisms are not a good match for reality.

    Re: condoms - barrier contraceptives do not necessarily reduce inclusive fitness. They allow people to have sex who would not normally risk doing so. They allow families to compete better in more K-selected environments, by helping them to devote their resources to a smaller number of higher quality offspring. Of course they can also be used to sabotage your genetic program, but that is not their only use.

    Thus the point about organisms maximising their expected fitness. Organisms really do maximise their expected fitness - just like all other expected fitness maximisers. It's just that their expectations may not be a good match for reality.

    What do the words "expected" and "expectations" mean in this context?

    "Expected fitness" isn't a term I'm familiar with. But we're talking about organisms that are either not conscious, or are not consciously thinking about fitness. It can't mean "expected" in the normal sense, and so I need an explanation.

    Deep Blue is not conscious either - yet it still predicts possible future chess positions, and makes moves based on its expectation of their future payoff.

    Take the term as a behaviourist would. Organisms have sensors, actuators, and processing that mediates between the two. If they behave in roughly the same way as an expected fitness maximiser would if given their inputs, then the name fits.

    Deep Blue is not conscious either - yet it still predicts possible future chess positions, and makes moves based on its expectation of their future payoff.

    Yes indeed, which is why I think it's much easier to consider it a utility maximiser than organisms are. It explicitly "thinks about" the value of its position and tries to improve it. Organisms don't. They just carry out whatever adaptations evolution has given them.

    Take the term [expected fitness maximiser] as a behaviourist would. Organisms have sensors, actuators, and processing that mediates between the two. If they behave in roughly the same way as an expected fitness maximiser would if given their inputs, then the name fits.

    But I don't know how a behaviourist would take it. It's not a term I'm familiar with.

    From looking through Google hits, it seems that "expected fitness" is analogous to the "expected value" of a bet, and means "fitness averaged across possible futures" - but organisms don't maximise that, because they often find themselves in situations where their strategies are sub-optimal. They often make bad bets.

    (Deep Blue isn't a perfect utility maximiser either, of course, since it can't look far enough ahead. Only a perfect player would be a true maximiser.)

    The concept of "expected fitness" is often used by biologists to counter the claim that "survival of the fittest" is a tautology. There, the expectation is by the biologist, who, looking at the organism, attempts to predict its fitness in some specified environment.

    An expected fitness maximiser is just an expected utility maximiser, where the utility function is God's utility function.

    If you put such an entity in an unfamiliar environment - so that it doesn't work very well - it doesn't normally stop being an expected utility maximiser. If it still works at all, it probably still tries to choose actions that maximise its expected utility. It's just that its expectations may not necessarily be a good match for reality.

    Considering organisms as maximising their expected fitness is the central mode of explanation in evolutionary biology. Most organisms really do behave as though they are trying to have as many descendants as possible, given their limitations and the information they have available to them. That the means by which they do this involves something akin to executing instructions does not detract in any way from this basic point - nor is it refuted by the placing of organisms in unfamiliar environments, where their genetic program does not have the desired effect.

    I am not clear about your claim that Deep Blue thinks, but organisms do not. Are you ignoring animals? Animals have brains which think - often a fair bit more sophisticated than the thoughts Deep Blue thinks.

    An expected fitness maximiser is just an expected utility maximiser, where the utility function is God's utility function.

    I searched Google for "expected utility maximiser" and the 6th hit was your own website:

    An expected utility maximiser is a theoretical agent who considers its actions, computes their consequences and then rates them according to a utility function.

    The typical organism just doesn't do this. I think you'd have a hard time arguing that even a higher mammal does this.
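    For concreteness, the quoted definition can be sketched in a few lines of Python. This is a toy agent with hypothetical actions, outcomes, and utilities - not a claim about how any real organism or chess program works:

```python
def expected_utility_maximiser(actions, consequences, probability, utility):
    """Toy agent matching the quoted definition: consider each action,
    compute its possible consequences, rate them with a utility
    function, and pick the action with the highest expected utility."""
    def expected_utility(action):
        return sum(probability(action, outcome) * utility(outcome)
                   for outcome in consequences(action))
    return max(actions, key=expected_utility)

# Hypothetical two-action world: eat the cookie or skip it.
actions = ["eat_cookie", "skip_cookie"]
outcomes = {"eat_cookie": ["tasty", "weight_gain"],
            "skip_cookie": ["hungry"]}
probs = {("eat_cookie", "tasty"): 0.5,
         ("eat_cookie", "weight_gain"): 0.5,
         ("skip_cookie", "hungry"): 1.0}
utils = {"tasty": 10, "weight_gain": -4, "hungry": -1}

choice = expected_utility_maximiser(
    actions,
    consequences=lambda a: outcomes[a],
    probability=lambda a, o: probs[(a, o)],
    utility=lambda o: utils[o])
print(choice)  # eat_cookie: EU = 0.5*10 + 0.5*(-4) = 3, versus -1 for skipping
```

    The point of contention in the thread is whether anything in an organism corresponds to the explicit `utility` function here, or whether the "utility function" exists only in the selection history that shaped its adaptations.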

    I am not clear about your claim that Deep Blue thinks, but organisms do not. Are you ignoring animals?

    I didn't say organisms don't think. I said they don't think about their fitness. They think about things like surviving, eating, finding mates, and so on, all of which usually contribute to reproduction in a natural environment.

    The proof of this really is the way that a great many humans have indeed rebelled against their genes, and knowingly choose not to maximise their fitness. Dawkins, for example, has only one child. As a high-status male, he could presumably have had many more.

    Hmm. If your intention is to stress that, in many cases, organisms behave as if they were fitness maximisers, then yes, I see your point. But it's important to bear in mind that there are other cases where they don't behave "correctly" - because they're executing sub-optimal adaptations.

    "Organisms really do maximise their expected fitness - just like all other expected fitness maximisers. It's just that their expectations may not be a good match for reality."

    Tim, I hate to be rude, but I think this is just silly. There are a nontrivial number of people who deliberately refrain from having children. To the extent that your theory can explain them, it can explain anything.

    If you're careful about how you define utility, you can probably "explain" any actions with expected utility theory. It's trivial; it's an abuse of the formalism; it's arguing by definition.

    Re: An expected utility maximiser is a theoretical agent who considers its actions, computes their consequences and then rates them according to a utility function ... I think you'd have a hard time arguing that even a higher mammal does this.

    Real organisms are imperfect approximations to expected utility maximisers - but they really do act rather a lot like this. For example see the work of Jeff Hawkins on the role of prediction in brain function.

    There's relevant work by von Neumann and Morgenstern that suggests that all economic actors can be modelled as rational economic agents maximising some utility function - regardless of the details of their internal operation - with the caveat that any deviations from this model result in agents which are vulnerable to burning up their resources for no personal benefit under some circumstances - and in the case of evolution, it is likely that such vulnerabilities would either crop up rarely, or be selected against.

    Of course organisms without brains have relatively little look-ahead. They are limited to computations that can be produced by their cells - which are still sophisticated computation devices, but not really on the same scale as a whole brain. The "expectations" of plants are mostly that the world is much the same as the one its ancestors experienced.

    Re: organisms executing "unsuitable" adaptations...

    It can certainly happen. But brains exist partly to help adapt to the effects of environmental fluctuations - and prevent unfamiliar environments from breaking the genetic program. Of course some organisms will still fail. Indeed, most male organisms will fail - even with an environment that is the expected one. That's just how nature operates.

    @Z. M. Davis:

    As I have said, the idea that organisms typically act to maximise their inclusive fitness - to the best of their understanding and ability - is a central explanatory principle in evolutionary biology.

    That some organisms fail to maximise their actual fitness - due to mutations, due to being in an unfamiliar environment, due to resource limitations, or due to bad luck - is not relevant evidence against this idea.

    The Tooby and Cosmides dichotomy between Adaptation-Executers and Fitness-Maximizers that this blog post is about is mostly a false one - based on muddling up "how" and "why" levels of explanation. Maximising their expected fitness is why organisms behave as they do. Executing adaptations is how they do it. These different types of explanations are complementary, and are not mutually exclusive.

    Right, it's not a dichotomy--the two explanations aren't mutually exclusive. But it's still an extremely relevant distinction--at least for those of us who are interested in the organisms themselves, rather than solely in the unconscious, abstract optimization process that created them.

    Sure, I get the point. Humans are products of natural selection, so anything any human does can be seen as the result of selection pressures favoring behaviors that resulted in increased fitness in the EEA. There is some sense of the words in which you could look at someone who is, say, committing suicide (before having reproduced), and say: "What she's really doing here is attempting to maximize her expected inclusive fitness!"

    It's not wrong so much as it is silly. The point of the post is that the organisms themselves don't actually care about fitness. You can give a fitness-based account of why the organisms want what they actually do want. But so what? When we're not talking about evolutionary biology, why should we care? You might as well say (I'm inspired here by a Daniel Dennett quote which I can't locate at the moment) that no organism really maximizes expected fitness; they actually just follow the laws of physics. Well ... okay, sure, but it's silly to say so. You have to use the right level of explanation for the right situation.

    ADDENDUM: Maybe this phrasing will help:

    To say that an organism is "trying to maximize expected fitness" applies in a broad sense to all evolved creatures, and as such is compatible with anything that any evolved creature does, including obviously fitness-reducing acts. In this broad sense, the "trying to maximize expected fitness" theory does a poor job of constraining anticipations compared to the theory that makes reference to the actual explicitly-represented goals of the organism in question. If we interpret "trying to maximize expected fitness" in a narrower sense in which organisms explicitly try to gain fitness, then it is obviously false (see, e.g., teenage suicides, women who have abortions when they could put the baby up for adoption, &c., &c.).

    Re: The point of the post is that the organisms themselves don't actually care about fitness

    Most of them certainly act as though they do. Kick someone in the testicles, steal their girlfriend, threaten their son, or have sex with their wife and observe the results.

    Of course people don't always profess to caring about their own fitness. Rather many profess to be altruists. That is an expected result of wishing to appear altruistic - rather than selfish - to others. Indeed, people are often good at detecting liars and are poor at deception - and the best way of appearing to be an altruist is to believe it yourself, and then use doublethink to rationalise away any selfish misdeeds. So don't expect to be able to access your actual motives through introspection. Consciousness is part of the brain's PR department - not a hotline to its motive system.

    Re: teenage suicides

    Adaptive explanations were never intended to cover all cases. Organisms suffer from brain damage, developmental defects, cancer, infectious diseases, misconceptions, malnutrition, and all manner of other problems that prevent them from having as many grandchildren as they otherwise might. However, these deviations from the rule do not indicate that adaptive explanations are vacuous, or that they are compatible with any outcome.

    "Most of them certainly act as though they do. Kick someone in the testicles [...]"

    Getting kicked in the testicles hurts. The explanation for why it hurts invokes selection pressures, but if you already know that it hurts, any general principles of evolutionary biology are screened off and irrelevant to explaining the organism's behavior. Likewise the other things.

    "Of course people don't always profess to caring about their own fitness. Rather many profess to be altruists."

    This is a non-sequitur. Psychological selfishness is a distinct concept from the metaphorical genetic "selfishness" of, e.g., selfish genes. Someone who spends a lot of time caring for her sick child may be behaving in a way that is psychologically altruistic, but genetically "selfish." Likewise, someone who refrains from having children because raising children is a burden may be psychologically selfish, but genetically "altruistic."

    "So don't expect to be able to access your actual motives through introspection."

    These "actual motives" are epiphenomenal. We can say that sugar tastes good, and bodily damage feels bad, and self-deception is easy, &c., and that there are evolutionary explanations for all of these things, without positing any mysterious, unobservable secret motives.

    Although at this point I suspect we are just talking past each other ...

    The atoms of a screwdriver don't have tiny little XML tags inside describing their "objective" purpose. The designer had something in mind, yes, but that's not the same as what happens in the real world. If you forgot that the designer is a separate entity from the designed thing, you might think, "The purpose of the screwdriver is to drive screws" - as though this were an explicit property of the screwdriver itself, rather than a property of the designer's state of mind. You might be surprised that the screwdriver didn't reconfigure itself to the flat-head screw, since, after all, the screwdriver's purpose is to turn screws.

    After someone points this out, the incorrect response is to start adding clauses:

    The screwdriver's purpose is to turn Phillips-head screws.

    Or:

    The screwdriver's purpose is to turn screws designed to be turned by the screwdriver.

    People are more likely to do this to something other than screwdrivers, obviously.

    "The purpose of love is..."
    "Eyebrows are there so that..."

    It is easy to misinterpret the point of this post as claiming that the purpose assigned to an object is wrong or inadequate or hopelessly complex. That isn't what is being said.

    No human being with the deliberate goal of maximizing their alleles' inclusive genetic fitness would ever eat a cookie unless they were starving.

    That statement sounds a little bit too strong to me. :-)

    While we are, in the end, meat machines, we are adaptive meat machines, and one of the major advantages of intelligence is the ability to adapt to your environment - which is to say, not merely executing preexisting adaptations but generating new ones on the fly.

    So while adaptation-execution is important, the very fact that we are capable of resisting adaptation-execution means that we are more than adaptation-executors. Indeed, most higher animals are capable of learning, and many are capable of at least basic problem solving.

    There is pretty significant selective pressure towards being a fitness maximizer and not a mere adaptation-executor, because something which actively maximizes its fitness will by definition have higher fitness than one which does not.

    So it's better to view our taste buds as an adaptation fitted to ancestral conditions that included near-starvation and apples and roast rabbit,

    And those apples were crab apples. I doubt that many of our distant ancestors would have experienced anything like our bred-for-sweetness fruit varieties on a regular basis. Those new fruit varieties are probably still very healthy – I'm just further highlighting the enormous gulf between what our ancestors ate and the concentrated sugar-fat-salt concoctions that we eat.

    A link to Tooby and Cosmides' paper cited in the intro: http://www.cep.ucsb.edu/papers/pfc92.pdf (Very long, but enlightening.)

    I misread "organisms" as "organizations".

    And I feel like it actually does still apply somewhat, in the sense that the ideas passed down from team to team are the actual "DNA", whereas the behaviors of the organization are determined by them but don't directly feed back into them.