Followup to: Anthropomorphic Optimism

Aristotle distinguished between four senses of the Greek word aition, which in English is translated as "cause", though Wikipedia suggests that a better translation is "maker".  Aristotle's theory of the Four Causes, then, might be better translated as the Four Makers.  These were his four senses of aitia:  The material aition, the formal aition, the efficient aition, and the final aition.

The material aition of a bronze statue is the substance it is made from, bronze.  The formal aition is the substance's form, its statue-shaped-ness.  The efficient aition best translates as the English word "cause"; we would think of the artisan casting the statue, though Aristotle referred to the art of bronze-casting, and regarded the individual artisan as a mere instantiation of that art.

The final aition was the goal, or telos, or purpose of the statue, that for the sake of which the statue exists.

Though Aristotle considered knowledge of all four aitia necessary, he regarded knowledge of the telos as the knowledge of highest order.  In this, Aristotle followed in the path of Plato, who had earlier written:

Imagine not being able to distinguish the real cause from that without which the cause would not be able to act as a cause.  It is what the majority appear to do, like people groping in the dark; they call it a cause, thus giving it a name that does not belong to it.  That is why one man surrounds the earth with a vortex to make the heavens keep it in place, another makes the air support it like a wide lid.  As for their capacity of being in the best place they could possibly be put, this they do not look for, nor do they believe it to have any divine force...

Suppose that you translate "final aition" as "final cause", and assert directly:

"Why do human teeth develop with such regularity, into a structure well-formed for biting and chewing?  You could try to explain this as an incidental fact, but think of how unlikely that would be.  Clearly, the final cause of teeth is the act of biting and chewing.  Teeth develop with regularity, because of the act of biting and chewing - the latter causes the former."

A modern-day sophisticated Bayesian will at once remark, "This requires me to draw a circular causal diagram with an arrow going from the future to the past."

It's not clear to me to what extent Aristotle appreciated this point - that you could not draw causal arrows from the future to the past.  Aristotle did acknowledge that teeth also needed an efficient cause to develop.  But Aristotle may have believed that the efficient cause could not act without the telos, or was directed by the telos, in which case we again have a reversed direction of causality, a dependency of the past on the future.  I am no scholar of the classics, so it may be only myself who is ignorant of what Aristotle believed on this score.

So the first way in which teleological reasoning may be an outright fallacy is when an arrow is drawn directly from the future to the past.  In every case where a present event seems to happen for the sake of a future end, that future end must be materially represented in the past.

Suppose you're driving to the supermarket, and you say that each right turn and left turn happens for the sake of the future event of your being at the supermarket.  Then the actual efficient cause of the turn, consists of:  the representation in your mind of the event of yourself arriving at the supermarket; your mental representation of the street map (not the streets themselves); your brain's planning mechanism that searches for a plan that represents arrival at the supermarket; and the nerves that translate this plan into the motor action of your hands turning the steering wheel.

All these things exist in the past or present; no arrow is drawn from the future to the past.
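To make this concrete, here is a minimal sketch in Python.  The street map, the place names, and the breadth-first search are all invented for illustration - this is not a claim about actual cognitive algorithms - but every quantity the planner consults already exists before the first turn of the wheel:

```python
from collections import deque

def plan_route(mental_map, start, goal):
    """Search the driver's *mental* map for a path to the goal.

    Every input is a present-tense data structure: the map is a
    representation in memory, the goal is a representation of a desired
    event, and the search finishes before any turn is made.
    """
    frontier = deque([[start]])
    visited = {start}
    while frontier:
        path = frontier.popleft()
        if path[-1] == goal:
            return path  # the plan: intersections to visit, in order
        for neighbor in mental_map.get(path[-1], []):
            if neighbor not in visited:
                visited.add(neighbor)
                frontier.append(path + [neighbor])
    return None  # no plan found - and hence no turns

# A toy mental map: intersections and the streets between them.
mental_map = {
    "home": ["1st & Main", "2nd & Main"],
    "1st & Main": ["supermarket"],
    "2nd & Main": [],
}
print(plan_route(mental_map, "home", "supermarket"))
# -> ['home', '1st & Main', 'supermarket']
```

The future event of arrival appears nowhere in this computation; only its present representation - the string in the goal slot - does.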

In biology, similarly, we explain the regular formation of teeth, not by letting it be caused directly by the future act of chewing, but by using the theory of natural selection to relate past events of chewing to the organism's current genetic makeup, which physically controls the formation of the teeth.  Thus, we account for the current regularity of the teeth by referring only to past and present events, never to future events.  Such evolutionary reasoning is called "teleonomy", in contrast with teleology.

We can see that the efficient cause is primary, not the final cause, by considering what happens when the two come into conflict.  The efficient cause of human taste buds is natural selection on past human eating habits; the final cause of human taste buds is acquiring nutrition.  From the efficient cause, we should expect human taste buds to seek out resources that were scarce in the ancestral environment, like fat and sugar.  From the final cause, we would expect human taste buds to seek out resources scarce in the current environment, like vitamins and fiber.  From the sales numbers on candy bars, we can see which wins.  The saying "Individual organisms are best thought of as adaptation-executers rather than as fitness-maximizers" asserts the primacy of teleonomy over teleology.
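A caricature in code may help here; all the foods, weights, and numbers below are invented.  The taste function is a frozen product of past selection - an adaptation being executed - so moving it into an environment where fat and sugar are abundant changes which choice would maximize fitness, but not which choice gets made:

```python
# Preference weights standing in for the output of past natural selection:
# high weight on whatever was scarce and calorie-rich ancestrally.
ancestral_weights = {"fat": 3.0, "sugar": 3.0, "fiber": 0.5, "vitamins": 0.5}

def tastiness(food):
    """The adaptation being executed: score food by the frozen weights."""
    return sum(ancestral_weights[nutrient] * amount
               for nutrient, amount in food.items())

candy_bar = {"fat": 1.0, "sugar": 1.0, "fiber": 0.0, "vitamins": 0.0}
salad     = {"fat": 0.1, "sugar": 0.1, "fiber": 1.0, "vitamins": 1.0}

# A fitness-maximizer in the modern environment would pick the salad;
# the adaptation-executer picks whatever the old weights favor.
foods = {"candy bar": candy_bar, "salad": salad}
print(max(foods, key=lambda name: tastiness(foods[name])))  # -> candy bar
```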

Similarly, if you have a mistake in your mind about where the supermarket lies, the final event of your arrival at the supermarket will not reach backward in time to steer your car.  If I know your exact state of mind, I will be able to predict your car's trajectory by modeling your current state of mind, not by supposing that the car is attracted to some particular final destination.  If I know your mind in detail, I can even predict your mistakes, regardless of what you think is your goal.

The efficient cause has screened off the telos:  If I can model the complete mechanisms at work in the present, I never have to take into account the future in predicting the next time step.
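Continuing the earlier street-map sketch (again with invented names): the prediction function's only input is the driver's present state of mind.  The supermarket's true location is defined in the script but never consulted - which is what it means for the efficient cause to screen off the telos:

```python
def predict_trajectory(state_of_mind):
    """Predict the car's path from the driver's present state of mind alone.

    Deliberately crude: follow the believed map toward the believed goal
    until it (or a dead end) is reached.  The supermarket's actual
    location is not an input, so the prediction tracks the driver's
    mistakes rather than correcting them.
    """
    believed_map, believed_goal = state_of_mind
    path, here = ["home"], "home"
    while here != believed_goal and believed_map.get(here):
        here = believed_map[here][0]
        path.append(here)
    return path

# Mary *believes* the supermarket is at 3rd & Oak; in fact it has moved.
marys_mind = ({"home": ["3rd & Oak"], "3rd & Oak": []}, "3rd & Oak")
actual_supermarket = "5th & Pine"  # defined, but never used above

print(predict_trajectory(marys_mind))  # -> ['home', '3rd & Oak']
```

Knowing the actual future destination would add nothing to this prediction; knowing the present mind fixes it completely.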

So that is the first fallacy of teleology - to make the future a literal cause of the past.

Now admittedly, it may be convenient to engage in reasoning that would be fallacious if interpreted literally.  For example:

I don't know the exact state of Mary's every neuron.  But I know that she desires to be at the supermarket.  If Mary turns left at the next intersection, she will then be at the supermarket (at time t=1).  Therefore Mary will turn left (at time t=0).

But this is only a convenient shortcut, which seems to let the future affect Mary's present actions.  More rigorous reasoning would say:

My model predicts that if Mary turns left she will arrive at the supermarket.  I don't know her every neuron, but I believe Mary has a model similar to mine.  I believe Mary desires to be at the supermarket.  I believe that Mary has a planning mechanism similar to mine, which leads her to take actions that her model predicts will lead to the fulfillment of her desires.  Therefore I predict that Mary will turn left.

No direct mention of the actual future has been made.  I predict Mary by imagining myself to have her goals, then putting myself and my planning mechanisms into her shoes, letting my brain do planning-work that is similar to the planning-work I expect Mary to do.  This requires me to talk only about Mary's goal, our models (presumed similar) and our planning mechanisms (presumed similar) - all forces active in the present.

And the benefit of this more rigorous reasoning is that if Mary is mistaken about the supermarket's location, then I do not have to suppose that the future event of her arrival reaches back and steers her correctly anyway.

Teleological reasoning is anthropomorphic - it uses your own brain as a black box to predict external events.  Specifically, teleology uses your brain's planning mechanism as a black box to predict a chain of future events, by planning backward from a distant outcome.

Now we are talking about a highly generalized form of anthropomorphism - and indeed, it is precisely to introduce this generalization that I am talking about teleology!  You know what it's like to feel purposeful.  But when someone says, "water runs downhill so that it will be at the bottom", you don't necessarily imagine little sentient rivulets alive with quiet determination.  Nonetheless, when you ask, "How could the water get to the bottom of the hill?" and plot out a course down the hillside, you're recruiting your own brain's planning mechanisms to do it.  That's what the brain's planner does, after all: it finds a path to a specified destination starting from the present.

And if you expect the water to escape local minima so it can get all the way to the bottom of the hill - to avoid being trapped in small puddles far above the ground - then your anthropomorphism is going to produce the wrong prediction.  (This is how a lot of mistaken evolutionary reasoning gets done, since evolution has no foresight, and only takes the next greedy local step.)
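The two predictions are easy to exhibit side by side (the hill heights below are invented).  Greedy local descent - all the "planning" the water actually implements - halts in the first puddle; a genuine planner, searching the whole hillside, would report the global bottom and thereby mispredict the water:

```python
# Height of the hillside at positions 0..6; a puddle (local minimum)
# sits at index 2, well above the true bottom at index 6.
heights = [9, 5, 4, 7, 3, 2, 0]

def water(position):
    """Greedy local descent: step to a strictly lower neighbor, else stop."""
    while True:
        lower = [p for p in (position - 1, position + 1)
                 if 0 <= p < len(heights) and heights[p] < heights[position]]
        if not lower:
            return position  # trapped in a local minimum
        position = min(lower, key=lambda p: heights[p])

print(water(0))                      # -> 2: stuck in the puddle
print(heights.index(min(heights)))   # -> 6: where a planner would aim
```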

But consider the subtlety: you may have produced a wrong, anthropomorphic prediction of the water without ever thinking of it as a person - without ever visualizing it as having feelings - without even thinking "the water has purpose" or "the water wants to be at the bottom of the hill" - but only saying, as Aristotle did, "the water's telos is to be closer to the center of the Earth".  Or maybe just, "the water runs downhill so that it will be at the bottom".  (Or, "I expect that human taste buds will take into account how much of each nutrient the body needs, and so reject fat and sugar if there are enough calories present, since evolution produced taste buds in order to acquire nutrients.")

You don't notice instinctively when you're using an aspect of your brain as a black box to predict outside events.  Consequentialism just seems like an ordinary property of the world, something even rocks could do.

It takes a deliberate act of reductionism to say:  "But the water has no brain; how can it predict ahead to see itself being trapped in a local puddle, when the future cannot directly affect the past?  How indeed can anything at all happen in the water so that it will, in the future, be at the bottom?  No; I should try to understand the water's behavior using only local causes, found in the immediate past."

It takes a deliberate act of reductionism to identify telos as purpose, and purpose as a mental property which is too complicated to be ontologically fundamental.  You don't realize, when you ask "What does this telos-imbued object do next?", that your brain is answering by calling on its own complicated planning mechanisms, that search multiple paths and do means-end reasoning.  Purpose just seems like a simple and basic property; the complexity of your brain that produces the predictions is hidden from you.  It is an act of reductionism to see purpose as requiring a complicated AI algorithm that needs a complicated material embodiment.

So this is the second fallacy of teleology - to attribute goal-directed behavior to things that are not goal-directed, perhaps without even thinking of the things as alive and spirit-inhabited, but only thinking, X happens in order to Y.  "In order to" is mentalistic language, even though it doesn't seem to name a blatantly mental property like "fearful" or "thinks it can fly".

Remember the sequence on free will?  The problem, it turned out, was that "could" was a mentalistic property - generated by the planner in the course of labeling states as reachable from the start state.  It seemed like "could" was a physical, ontological property.  When you say "could" it doesn't sound like you're talking about states of mind.  Nonetheless, the mysterious behavior of could-ness turned out to be understandable only by looking at the brain's planning mechanisms.
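On that account, "could" works something like the reachability labeling below - the state space is made up, purely for illustration.  It is a fact computed inside the searcher's model, not an extra physical property of the states themselves:

```python
def could(start, transitions):
    """Label the states reachable from `start` - the planner's "could"."""
    reachable, frontier = {start}, [start]
    while frontier:
        state = frontier.pop()
        for nxt in transitions.get(state, []):
            if nxt not in reachable:
                reachable.add(nxt)
                frontier.append(nxt)
    return reachable

# A made-up action model: from home you "could" reach the store, the
# park, or the lake - but nothing in the model leads to the moon.
transitions = {"home": ["store", "park"], "park": ["lake"]}
print(could("home", transitions))
# -> contains 'store', 'park', 'lake' - and never 'the moon'
```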

Since mentalistic reasoning uses your own mind as a black box to generate its predictions, it very commonly generates wrong questions and mysterious answers.

If you want to accomplish anything related to philosophy, or anything related to Artificial Intelligence, it is necessary to learn to identify mentalistic language and root it all out - which can only be done by analyzing innocent-seeming words like "could" or "in order to" into the complex cognitive algorithms that are their true identities.

(If anyone accuses me of "extreme reductionism" for saying this, let me ask how likely it is that we live in an only partially reductionist universe.)

The third fallacy of teleology is to commit the Mind Projection Fallacy with respect to telos, supposing it to be an inherent property of an object or system.  Indeed, one does this every time one speaks of the purpose of an event, rather than speaking of some particular agent desiring the consequences of that event.

I suspect this is why people have trouble understanding evolutionary psychology - in particular, why they suppose that all human acts are unconsciously directed toward reproduction.  "Mothers who loved their children outreproduced those who left their children to the wolves" becomes "natural selection produced motherly love in order to ensure the survival of the species" becomes "the purpose of acts of motherly love is to increase the mother's fitness".  Well, if a mother apparently drags her child off the train tracks because she loves the child, that's also the purpose of the act, right?  So by a fallacy of compression - a mental model that has one bucket where two buckets are needed - the purpose must be one or the other: either love or reproductive fitness.

Similarly with those who hear of evolutionary psychology and conclude that the meaning of life is to increase reproductive fitness - hasn't science demonstrated that this is the purpose of all biological organisms, after all?

Likewise with that fellow who concluded that the purpose of the universe is to increase entropy - the universe does so consistently, therefore it must want to do so - and that this must therefore be the meaning of life.  Pretty sad purpose, I'd say!  But of course the speaker did not seem to realize what it means to want to increase entropy as much as possible - what this goal really implies, that you should go around collapsing stars to black holes.  Instead the one focused on a few selected activities that increase entropy, like thinking.  You couldn't ask for a clearer illustration of a fake utility function.

I call this a "teleological capture" - where someone comes to believe that the telos of X is Y, relative to some agent, or optimization process, or maybe just statistical tendency, from which it follows that any human or other agent who does X must have a purpose of Y in mind.  The evolutionary reason for motherly love becomes its telos, and seems to "capture" the apparent motives of human mothers.  The game-theoretical reason for cooperating on the Iterated Prisoner's Dilemma becomes the telos of cooperation, and seems to "capture" the apparent motives of human altruists, who are thus revealed as being selfish after all.  Charity increases status, which people are known to desire; therefore status is the telos of charity, and "captures" all claims to kinder motives.  Etc. etc. through half of all amateur philosophical reasoning about the meaning of life.

These then are three fallacies of teleology:  Backward causality, anthropomorphism, and teleological capture.

Re: how likely it is that we live in an only partially reductionist universe?

Reductionism is a term which has been debased. It ought to still mean what it did in Hofstadter's time - in which case such a question would make no sense.

AFAICS, the modern corruption is due to the spiritual physicist John Polkinghorne, who deserves ignoring.

If I know your exact state of mind, I will be able to predict your car's trajectory by modeling your current state of mind

This is tangential to the direction of the post, but in fact you will not be able to predict the car's trajectory from the driver's current state of mind, since it depends not only on that state of mind, but also on everything that might happen on the way. Maybe a road is blocked and the driver goes another way. Maybe the driver has a crash and must abandon the journey. You will certainly not be able to predict the detailed movements the driver makes with the car's controls, since those will depend even on transient gusts of wind. Purposes are not achieved merely by making a plan, and then executing it.

Rational choice theory is probably the closest analogue to teleological thinking in modern academic research. Regarding all such reasoning as fallacious seems to be an extreme position; to what extent do you regard the "three fallacies" of teleology as genuine fallacies of reasoning, as opposed to useful heuristics?

The third fallacy of teleology is to commit the Mind Projection Fallacy with respect to telos, supposing it to be an inherent property of an object or system. Indeed, one does this every time one speaks of the purpose of an event, rather than speaking of some particular agent desiring the consequences of that event.

I'm vaguely reminded of The Camel Has Two Humps. Perhaps it's the case that some people naturally have a knack for systemisation, while others are doomed to repeat the mind projection fallacy forever.

The teeth example at the beginning is strong because it implies the rest. I skimmed because the conclusions seemed obvious from that example and the following paragraph: intent is the cause, not that which is intended, and teeth are not the kind of things that intend.

Your section on backward causality seems to subsume the argument on mind projection. If the point on backwards causality is that the intent for x is not x itself, that covers most of what you want to say about projecting telos on x. Anthropomorphism would seem to cover the rest, that x is not that kind of thing.

"Similarly with those who hear of evolutionary psychology and conclude that the meaning of life is to increase reproductive fitness - hasn't science demonstrated that this is the purpose of all biological organisms, after all?"

"I call this a "teleological capture" - where someone comes to believe that the telos of X is Y, relative to some agent, or optimization process, or maybe just statistical tendency, from which it follows that any human or other agent who does X must have a purpose of Y in mind."

I think the second paragraph, and specifically the phrase "in mind," probably paints the wrong picture of the people who hold the first paragraph to be true. They most likely think that increasing reproductive fitness is entirely implicit within the organism's behaviour. Not "in mind," which would imply a conscious goal.

Anyway, yes, I can see that teleological capture is problematic. However, I can't do away with it completely. It seems the only way to be able to try and fix things. Let us say that I started to become besotted with a reborn doll, to the extent that I didn't interact with other people. Should I try to stop myself loving the doll? If I ask what love is for, then it seems I should. This seems useful in my book, as does asking what hunger and my desire for sweet things are for (from an evolutionary point of view): it enables me to see that curbing them would be a good idea.

Now, I am not consistent in my application of the view (I'm generally nice to people because it seems right to be nice to people), but in corner cases, such as whether I should spend lots of money and attention on a cat (which I find adorable), it gives me something to steer by.

I haven't yet seen how your platonic morality can fill the void left by excising the ability to correct emotions and desires to the purpose for which they evolved.

Should I try to stop myself loving the doll? If I ask what love is for,

Asking what purpose love evolved for...

then it seems I should.

What is doing the seeming here? Is your built-in morality, which makes no direct reference to evolutionary arguments, but which might be swayed by them, evaluating this argument and coming to a conclusion?

If the change in morality suggested by an evolutionary line of reasoning were repugnant to you, would you reject it? Then you're not putting the cart before the horse, and good for you. Eliezer's talking about different people.

This seems useful in my book,

Only to the extent that it gives answers you're happy with by other, more primary criteria. And to that extent, it's just one more kind of moral argument.

"This is just some random text to see if I can get my comment to go through."

Will, forgive this personal question, but have you ever had sex while you or your partner was using birth control?

I don't think it's asking "But what happened to the evolutionary purpose?" that tells you not to love a reborn doll. That's just an argument that happens to give the correct answer in this case, but gives the wrong answer in many others. And the proof is that you used "What about the evolutionary purpose?" giving a seemingly good answer to support "What about the evolutionary purpose?" as a good question to ask. Why do you expect your audience to already know that it's a bad idea to love a reborn doll, even before they accept your evolutionary argument? This is probably how you know as well.

I expected that my audience would already know that it was a bad idea to love a reborn doll because I expected them to be mostly male. Reborn dolls are marketed to females, and everything I have seen about them (not a lot) suggests that males find them wrong. It is possible that we don't have the same sort of machinery for attaching to babies and baby-shaped things as females.

But what moral argument could I present to someone who did love their reborn doll? Let me present a brief discussion:

Doller: I love my doll; I want to spend all my money on repainting a room in pink and buying a new cot for it.

Moral Functionalist: Can't you see that is wrong? Or at least, can't you see that lots of other people think it is wrong, which should give you evidence about the morality function?

Doller: Previously, people were agreed that crushing your neighbour's tribe was the correct thing to do. At some point someone had to be the one to say no, a raid now would not be a good idea (perhaps not saying that it was morally wrong, but thinking it). Should he have been convinced by everyone else saying that neighbour-slaughtering and daughter-taking was the right and proper thing to do? How do you become the first one to strike out on your own to progress morally? Can you definitively say that I am not a pioneer in moral progress?

MF: But a reborn doll does not promote the properties of love and growth, etc.

Doller: Oh, but it does. Without it I am listless and feel I have no purpose, with an aching hole in my life needing to be filled. With it I have something to protect and work for. A reborn doll is also a lot less time and effort for me than a real baby, allowing me to spend more time working and sleeping well, so I am more productive in my job - much like someone in a relationship without a child gets the benefits of being in a relationship without the family obligations. People should be encouraged to have and love real dolls so that they can focus on immortality research without having to find ways to afford money for college for the next generation.

MF: ....

Feel free to try to end, or correct, the argument to reflect how you would actually take it. I feel I have been weak in arguing the non-doller point of view, but I don't really get how it is supposed to work.

In answer to your first question, I would use protection, but I have never claimed my meta-morality is consistent and don't drive it to extremes. I just go with what seems to work for me, and not spend too much time and energy on the whole thing. Wasting time and energy is a pretty bad thing to do according to my morality.

Likewise with that fellow who concluded that the purpose of the universe is to increase entropy - the universe does so consistently, therefore it must want to do so - and that this must therefore be the meaning of life. Pretty sad purpose, I'd say!

Was someone seriously advocating that?! I only remember that as the punchline of a webcomic somewhere. (Anyone have the link?)

I don't think the first fallacy is fairly attributed to Aristotle. It would be a mistake to attribute a modern, mechanistic view of causality to Aristotle, and thereby its restrictions (like 'no backwards causality') but that's actually beside the point. Aristotle regarded the 'final cause' of something to be, as it were, simultaneous with it. The final cause of a shovel is to dig, but a shovel need never dig an ounce for this to be true. No reference to an actual future event is necessary. Nor, in fact, do any of Aristotle's causes refer to specific temporal events or objects (except accidentally): Aristotle's physical theory isn't a mechanistic one. He discusses mechanistic physics (mostly in Generation and Corruption) but it's not a big deal for him. He's trying to talk about a different sort of subject matter.

The argument for the second fallacy relies on...well, a fallacy: to say that the ascription of teleology to non-humans is anthropomorphic is to assume that because teleology (on some understanding) is a feature of human reasoning, it must be the case that teleology is a feature only of human reasoning. Aristotle explicitly denies this in Physics II.8. It does not follow from the fact that we have the impression of reasoning teleologically that the ascription of final causes to things is anthropomorphic.

Finally, you say: "More rigorous reasoning would say:

My model predicts that if Mary turns left she will arrive at the supermarket. I don't know her every neuron, but I believe Mary has a model similar to mine. I believe Mary desires to be at the supermarket. I believe that Mary has a planning mechanism similar to mine, which leads her to take actions that her model predicts will lead to the fulfillment of her desires. Therefore I predict that Mary will turn left.

No direct mention of the actual future has been made."

You have made direct mention of the future here, in every significant sense. You've said that Mary 'desires to be at the supermarket': Mary's desire refers to a future state. You say that she's planning and predicting: we don't plan about the past or the present. How can we make sense of a 'planning mechanism' or a 'prediction' except in reference to the future?

You've hidden the references to the future in faculties (desire, planning, prediction) which involve themselves in reference to the future, but your explanation is no less teleological. There's simply no sense to be made of Mary's behavior without reference to the object of her desire, her planning, and her prediction. You're right that this isn't just 'whatever actual future is in store for her', since if she takes a wrong turn, she'll turn around. But how can we make sense of her constantly evaluating and altering her behavior for the sake of going to the store unless we understand it teleologically? How could we make sense of success or failure otherwise?

The consistent mistake here is the assumption that a teleological cause must draw an arrow backwards in time from an actual future state to the actual present. This was not Aristotle's view. On Aristotle's view, the final cause is importantly related to the future, but only in the very senses you appealed to in your revised description of Mary's behavior. You've admitted teleology just as Aristotle would wish to argue for it.