I've previously dwelt at considerable length upon forms of rationalization whereby our beliefs appear to match the evidence much more strongly than they actually do. And I'm not overemphasizing the point, either. If we could beat this fundamental metabias and see what every hypothesis really predicted, we would be able to recover from almost any other error of fact.
The mirror challenge for decision theory is seeing which option a choice criterion really endorses. If your stated moral principles call for you to provide laptops to everyone, does that really endorse buying a $1 million gem-studded laptop for yourself, or spending the same money on shipping 5000 OLPCs?
We seem to have evolved a knack for arguing that practically any goal implies practically any action. A phlogiston theorist explaining why magnesium gains weight when burned has nothing on an Inquisitor explaining why God's infinite love for all His children requires burning some of them at the stake.
There's no mystery about this. Politics was a feature of the ancestral environment. We are descended from those who argued most persuasively that the good of the tribe meant executing their hated rival Uglak. (We sure ain't descended from Uglak.)
And yet... is it possible to prove that if Robert Mugabe cared only for the good of Zimbabwe, he would resign from its presidency? You can argue that the policy follows from the goal, but haven't we just seen that humans can match up any goal to any policy? How do you know that you're right and Mugabe is wrong? (There are a number of reasons this is a good guess, but bear with me here.)
Human motives are manifold and obscure, our decision processes as vastly complicated as our brains. And the world itself is vastly complicated, on every choice of real-world policy. Can we even prove that human beings are rationalizing—that we're systematically distorting the link from principles to policy—when we lack a single firm place on which to stand? When there's no way to find out exactly what even a single optimization criterion implies? (Actually, you can just observe that people disagree about office politics in ways that strangely correlate to their own interests, while simultaneously denying that any such interests are at work. But again, bear with me here.)
Where is the standardized, open-source, generally intelligent, consequentialist optimization process into which we can feed a complete morality as an XML file, to find out what that morality really recommends when applied to our world? Is there even a single real-world case where we can know exactly what a choice criterion recommends? Where is the pure moral reasoner—of known utility function, purged of all other stray desires that might distort its optimization—whose trustworthy output we can contrast to human rationalizations of the same utility function?
Why, it's our old friend the alien god, of course! Natural selection is guaranteed free of all mercy, all love, all compassion, all aesthetic sensibilities, all political factionalism, all ideological allegiances, all academic ambitions, all libertarianism, all socialism, all Blue and all Green. Natural selection doesn't maximize its criterion of inclusive genetic fitness—it's not that smart. But when you look at the output of natural selection, you are guaranteed to be looking at an output that was optimized only for inclusive genetic fitness, and not the interests of the US agricultural industry.
In the case histories of evolutionary science—in, for example, The Tragedy of Group Selectionism—we can directly compare human rationalizations to the result of pure optimization for a known criterion. What did Wynne-Edwards think would be the result of group selection for small subpopulation sizes? Voluntary individual restraint in breeding, and enough food for everyone. What was the actual laboratory result? Cannibalism.
Now you might ask: Are these case histories of evolutionary science really relevant to human morality, which doesn't give two figs for inclusive genetic fitness when it gets in the way of love, compassion, aesthetics, healing, freedom, fairness, et cetera? Human societies didn't even have a concept of "inclusive genetic fitness" until the 20th century.
But I ask in return: If we can't see clearly the result of a single monotone optimization criterion—if we can't even train ourselves to hear a single pure note—then how will we listen to an orchestra? How will we see that "Always be selfish" or "Always obey the government" are poor guiding principles for human beings to adopt—if we think that even optimizing genes for inclusive fitness will yield organisms which sacrifice reproductive opportunities in the name of social resource conservation?
To train ourselves to see clearly, we need simple practice cases.
What is the point of this post? I seem to have missed it entirely. Can anyone help me out?
Is the point that predicting the end result of a particular criterion is difficult because bias gets in the way? And that, because it is difficult, we should start small with stuff like gene fitness and work up to bigger problems like social ethics?
Or... is the point that natural selection is a great way to expose the biases at work in our ethics choice criterion?
I am not tracking something here. This is a summary of the points in the post as I see them:
We are unable to accurately study how closely the results of our actions match our own predictions of those results.
The equivalent problem in decision theory is that we are unable to take a set of known choice criteria and predict which choice will be made given a particular environment. In other words, we think we know what we would/should do in event X but we are wrong.
We possess the ability to argue that practically any particular action follows from practically any choice criterion.
Is it possible to prove that a particular action does or does not follow from certain choice criteria, thereby avoiding our tendency to predict anything from everything?
We need a bias free system to study that allows us to measure our predictions without interfering with the result of the system.
Natural selection presents a system whose only "goal" is inclusive genetic fitness. There is no bias.
Examples show that our predictions of natural selection reveal biases in ourselves. Therefore, our predictions were biased.
To remove our bias with regards to human ethics, we should use natural selection as a calibration tool.
I feel like the last point skips a few steps. As best as I can tell, these belong just before it:
When our predictions of the bias-proof system are accurate, they will be predictions without bias.
Using the non-biased predictors we found to study the bias-proof system, we can study other systems with less bias.
Using this outline, it seems like the takeaway is, "Don't study ethics until after you have studied natural selection, because there is too much bias involved in studying ethics."
Can someone tell me if I am correct? A simple yes or no is cool if you don't feel like typing up a whole lot. Even, "No, not even close," will give me more information than I have right now.
Seems about right. Note: "To train ourselves to see clearly, we need simple practice cases."