The core fallacy of anthropomorphism is expecting something to be predicted by the black box of your brain, when its causal structure is so different from that of a human brain as to give you no license to expect any such thing.

The Tragedy of Group Selectionism (as previously covered in the evolution sequence) was a rather extreme error by a group of early (pre-1966) biologists, including Wynne-Edwards, Allee, and Brereton among others, who believed that predators would voluntarily restrain their breeding to avoid overpopulating their habitat and exhausting the prey population.

The proffered theory was that if there were multiple, geographically separated groups of e.g. foxes, then groups of foxes that best restrained their breeding would send out colonists to replace crashed populations.  And so, over time, group selection would promote restrained-breeding genes in foxes.

I'm not going to repeat all the problems that developed with this scenario. Suffice it to say that there was no empirical evidence to start with; that no empirical evidence was ever uncovered; that, in fact, predator populations crash all the time; and that for group selection pressure to overcome a countervailing individual selection pressure turned out to be very nearly mathematically impossible.

The theory having turned out to be completely incorrect, we may ask if, perhaps, the originators of the theory were doing something wrong.

"Why be so uncharitable?" you ask.  "In advance of doing the experiment, how could they know that group selection couldn't overcome individual selection?"

But later on, Michael J. Wade went out and actually created in the laboratory the nigh-impossible conditions for group selection.  Wade repeatedly selected insect subpopulations for low numbers of adults per subpopulation.  Did the insects evolve to restrain their breeding, and live in quiet peace with enough food for all, as the group selectionists had envisioned?

No; the adults adapted to cannibalize eggs and larvae, especially female larvae.

Of course selecting for small subpopulation sizes would not select for individuals who restrained their own breeding.  It would select for individuals who ate other individuals' children.  Especially the girls.

Now, why might the group selectionists have not thought of that possibility?

Suppose you were a member of a tribe, and you knew that, in the near future, your tribe would be subjected to a resource squeeze.  You might propose, as a solution, that no couple have more than one child - after the first child, the couple goes on birth control.  Saying, "Let's all individually have as many children as we can, but then hunt down and cannibalize each other's children, especially the girls," would not even occur to you as a possibility.

Think of a preference ordering over solutions, relative to your goals.  You want a solution as high in this preference ordering as possible.  How do you find one?  With a brain, of course!  Think of your brain as a high-ranking-solution-generator - a search process that produces solutions that rank high in your innate preference ordering.

The solution space on all real-world problems is generally fairly large, which is why you need an efficient brain that doesn't even bother to formulate the vast majority of low-ranking solutions.

If your tribe is faced with a resource squeeze, you could try hopping everywhere on one leg, or chewing off your own toes.  These "solutions" obviously wouldn't work and would incur large costs, as you can see upon examination - but in fact your brain is too efficient to waste time considering such poor solutions; it doesn't generate them in the first place.  Your brain, in its search for high-ranking solutions, flies directly to parts of the solution space like "Everyone in the tribe gets together, and agrees to have no more than one child per couple until the resource squeeze is past."

Such a low-ranking solution as "Everyone have as many kids as possible, then cannibalize the girls" would not be generated in your search process.

But the ranking of an option as "low" or "high" is not an inherent property of the option, it is a property of the optimization process that does the preferring.  And different optimization processes will search in different orders.

So far as evolution is concerned, individuals reproducing to the fullest and then cannibalizing others' daughters is a no-brainer; whereas individuals voluntarily restraining their own breeding for the good of the group is absolutely ludicrous.  Or to say it less anthropomorphically, the first set of alleles would rapidly replace the second in a population.  (And natural selection has no obvious search order here - these two alternatives seem about equally simple as mutations.)
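To make this concrete, here is a minimal sketch (not from the post; the candidate solutions and numeric scores are invented purely for illustration). The same options for dealing with a resource squeeze are ranked twice: once by a rough proxy for a human tribe's preferences, once by a rough proxy for which alleles spread under individual selection. Each scorer is an "optimization process" in the above sense, and each returns a different top pick.

```python
# Illustrative only: the candidates and scores are invented.
# The point is that "high-ranking" is a property of the scorer, not of the option.

candidates = [
    "every couple voluntarily has at most one child",
    "everyone breeds maximally, then cannibalizes others' daughters",
    "hop everywhere on one leg",
]

# Rough proxy for a human tribe's preference ordering (higher = better).
# A human-like searcher wouldn't even bother generating most low-ranking
# options; here we rank an explicit list, which is enough to show the gap.
human_preference = {
    "every couple voluntarily has at most one child": 9,
    "everyone breeds maximally, then cannibalizes others' daughters": -10,
    "hop everywhere on one leg": -5,
}

# Rough proxy for relative allele fitness under individual selection,
# ignoring any group-level benefit.
allele_fitness = {
    "every couple voluntarily has at most one child": -8,
    "everyone breeds maximally, then cannibalizes others' daughters": 7,
    "hop everywhere on one leg": -9,
}

def best(scorer):
    return max(candidates, key=scorer.get)

print("Human brain proposes:  ", best(human_preference))
print("Natural selection finds:", best(allele_fitness))
```

Swap the scorer and the "obvious" answer changes, even though nothing about the options themselves has changed.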

Suppose that one of the biologists had said, "If a predator population has only finite resources, evolution will craft them to voluntarily restrain their breeding - that's how I'd do it if I were in charge of building predators."  This would be anthropomorphism outright, the lines of reasoning naked and exposed:  I would do it this way, therefore I infer that evolution will do it this way.

One does occasionally encounter the fallacy outright, in my line of work.  But suppose you say to the one, "An AI will not necessarily work like you do".  Suppose you say to this hypothetical biologist, "Evolution doesn't work like you do."  What will the one say in response?  I can tell you a reply you will not hear:  "Oh my! I didn't realize that!  One of the steps of my inference was invalid; I will throw away the conclusion and start over from scratch."

No: what you'll hear instead is a reason why any AI has to reason the same way as the speaker.  Or a reason why natural selection, following entirely different criteria of optimization and using entirely different methods of optimization, ought to do the same thing that would occur to a human as a good idea.

Hence the elaborate idea that group selection would favor predator groups where the individuals voluntarily forsook reproductive opportunities.

The group selectionists went just as far astray, in their predictions, as someone committing the fallacy outright.  Their final conclusions were the same as if they were assuming outright that evolution necessarily thought like themselves.  But they erased what had been written above the bottom line of their argument, without erasing the actual bottom line, and wrote in new rationalizations.  Now the fallacious reasoning is disguised; the obviously flawed step in the inference has been hidden - even though the conclusion remains exactly the same; and hence, in the real world, exactly as wrong.

But why would any scientist do this?  In the end, the data came out against the group selectionists and they were embarrassed.

As I remarked in Fake Optimization Criteria, we humans seem to have evolved an instinct for arguing that our preferred policy arises from practically any criterion of optimization.  Politics was a feature of the ancestral environment; we are descended from those who argued most persuasively that the tribe's interest - not just their own interest - required that their hated rival Uglak be executed.  We certainly aren't descended from Uglak, who failed to argue that his tribe's moral code - not just his own obvious self-interest - required his survival.

And because we can more persuasively argue, for what we honestly believe, we have evolved an instinct to honestly believe that other people's goals, and our tribe's moral code, truly do imply that they should do things our way for their benefit.

So the group selectionists, imagining this beautiful picture of predators restraining their breeding, instinctively rationalized why natural selection ought to do things their way, even according to natural selection's own purposes. The foxes will be fitter if they restrain their breeding!  No, really! They'll even outbreed other foxes who don't restrain their breeding! Honestly!

The problem with trying to argue natural selection into doing things your way, is that evolution does not contain that which could be moved by your arguments.  Evolution does not work like you do - not even to the extent of having any element that could listen to or care about your painstaking explanation of why evolution ought to do things your way.  Human arguments are not even commensurate with the internal structure of natural selection as an optimization process - human arguments aren't used in promoting alleles, as human arguments would play a causal role in human politics.

So instead of successfully persuading natural selection to do things their way, the group selectionists were simply embarrassed when reality came out differently.

There's a fairly heavy subtext here about Unfriendly AI.

But the point generalizes: this is the problem with optimistic reasoning in general.  What is optimism?  It is ranking the possibilities by your own preference ordering, and selecting an outcome high in that preference ordering, and somehow that outcome ends up as your prediction.  What kind of elaborate rationalizations were generated along the way, is probably not so relevant as one might fondly believe; look at the cognitive history and it's optimism in, optimism out.  But Nature, or whatever other process is under discussion, is not actually, causally choosing between outcomes by ranking them in your preference ordering and picking a high one.  So the brain fails to synchronize with the environment, and the prediction fails to match reality.

Comments (60)

Things may think very differently, but we only have to really worry about them if they think correctly, or close to it. (at our best, we think close to correctly.)

We can reason that the shortest distance between two points is a straight line. This also happens to fit our intuitions, which have nothing to do with our logic. Another mind could have other intuitions that also approximate the right answer. But if all the rituals of thought they have make them treat some curve as the shortest path, then we would beat them in a race.

If anything doesn't think perfectly rationally, then it has some cognitive hole which we might exploit. IF we can understand it, and avoid our own errors.

james andrix: we have to worry about what other Optimizers want, not just if they "think correctly". Evolution still manages to routinely defeat us without being able to think at all.

This seems relevant to the Pascal discussion, as various interlocutors try to argue with Unknown's claim that negative (egoist) consequentialist reasoning leads conveniently to psychologically comforting Christianity. While I would say that Unknown's position reflects this bias, it seems similar wishful thinking (that maximizing consequentialism will match the current values of much of OB's atheist readership) plays a role in many of the dismissals of Pascalian arguments. Despite making a number of arguments that negative utilitarianism or negative egoism do not lead to Unknown's conclusion, I remain especially wary of claims that a consequentialist would produce intuitively acceptable outcomes rather than dismantling the universe looking for the Dark Lords of the Matrix or magic physics to generate great utility.

Roko

Larry D'Anna: we have to worry about what other Optimizers want, not just if they "think correctly".

I argue that if there exist objective values which are implicit in the structure of our universe, then some significant fraction of possible minds will approximate those objective values.

However, those objective values probably differ quite a lot from most of what most human beings find important in their lives; for example our obsessions with sex, romance and child-rearing probably aren't in there.

Having re-read Eliezer's work on "the bottom line", and this piece on optimism, I am re-assessing very carefully what I think those objective values might look like. I am going to try very hard to make sure I don't simply rationalize those things that I have already decided (or been genetically programmed to think) are valuable.

Is there a good way to overcome a bias - such as the above - that one is acutely aware of?

"And because we can more persuasively argue, for what we honestly believe, we have evolved an instinct to honestly believe that other people's goals, and our tribe's moral code, truly do imply that they should do things our way for their benefit."

Great post overall, but I'm skeptical of this often-repeated element in OB posts and comments. I'm not sure honest believers always, or even usually, have a persuasion advantage. This reminds me of some of Michael Vassar's criticism of nerds thinking of everyone else as a defective nerd (nerd defined as people who value truth-telling/sincerity over more political/tactful forms of communication).

I remain especially wary of claims that a consequentialist would produce intuitively acceptable outcomes rather than dismantling the universe looking for the Dark Lords of the Matrix or magic physics to generate great utility.

Me too.

Eliezer.... This post terrifies me. How on earth can humans overcome this problem? Everyone is tainted. Every group is tainted. It seems almost fundamentally insurmountable... What are your reasons for working on fAI yourself and not trying to prevent all others working on gAI from succeeding? Why could you succeed? Life extension technologies are progressing fairly well without help from anything as dangerous as an AI.

Regarding anthropomorphism of non-human creatures, I was thoroughly fascinated this morning by a fuzzy yellow caterpillar in Central Park that was progressing rapidly (2 cm/s) across a field, over, under, and around obstacles, in a seemingly straight line. After watching its pseudo-sinusoidal body undulations and the circular twisting of its moist, pink head with two tiny black and white mouth parts for 20 minutes, I moved it to another location, after which it changed its direction to crawl in another straight line. After forward projecting where the two lines would intersect, I determined the caterpillar was heading directly towards a large tree with multi-rounded-point leaves about 15 feet in the distance. I moved the caterpillar on a leaf (not easy, the thing moved very quickly, and I had to keep rotating the leaf) to behind the tree, and sure enough, it turned around, crawled up the tree, into a crevice, and burrowed in with the full drilling force of its furry, little body.

Now, from a human point of view, words like 'determined,' 'deliberate,' 'goal-seeking,' might creep in, especially when it would rear its bulbous head in a circle and change directions, yet I doubt the caterpillar had any of these mental constructs. It was, like the moth it must turn into, probably sensing some chemoattractant from the tree... maybe it's time for it to make a chrysalis inside the tree and become a moth or butterfly, and some program just kicks in when it's gotten strong and fat enough, as this thing clearly was. But 'OF COURSE' I thought. C. elegans, a much simpler creature, will change its direction and navigate simple obstacles when an edible proteinous chemoattractant is put in its proximity. The caterpillar is just more advanced at what it does. We know the location and connections of every one of C. elegans 213 neurons... Why can't we make a device that will do the same thing yet? Too much anthropomorphism?

How on earth can humans overcome this problem?

Why, eugenics of course! The only way to change our nature.

First, selective breeding. Then genetic engineering.

Yes, there is a risk of botching it. No, we don't have a better solution.

Eliezer.... This post terrifies me. How on earth can humans overcome this problem? Everyone is tainted. Every group is tainted. It seems almost fundamentally insurmountable...

And yet somehow, bridges stay up.

So you go in search of a mathematics of FAI that produces precision-grade predictions.

However, you're correct that all vaguely optimistic arguments produced by various meddling dabblers about how friendly they expect their AI design to be, can be tossed out the window.

It's not a challenge we have the option of avoiding, either.

Carl:"Unknown's [sic] claim that negative (egoist) consequentialist reasoning leads conveniently to psychologically comforting Christianity. [...] do not lead to Unknown's [sic] conclusion [...]"

Correction: this is Utilitarian's claim.

All of this reminds me of something I read in Robert Sapolsky's book "Monkey Luv" (a really fluffy pop-sci book about baboon society, though Sapolsky himself in person is quite amazing), about how human populations under different living conditions had almost predictable (at least in hindsight) explicative religions. People living in rainforests with many different creatures struggling at cross-purposes to survive developed polytheistic religions in which gods routinely fought and destroyed each other for their own goals. Desert dwellers (Semites) saw only one great expanse of land, one horizon, one sky, one ecosystem, and so invented monotheism.

I wonder what god(s) we 21st century American rationalists will invent...

"I wonder what god(s) we 21st century American rationalists will invent..."

A degenerated Singularitarianism would fit the bill.

Roko: What would it even mean for an objective value to be implicit in the structure of the universe? I'm having a hard time imagining any physical situation where that would even make sense. And even if it did, it would still be you that decides to follow that value. Surely if you discovered an objective value implicit in the structure of the universe that commanded you to torture kittens, you would ignore it.

Z.M. Davis,

Occam's razor makes me doubt that we have two theoretical negative utilitarians (with egoistic practice) who endorse Pascal's wager, with similar writing styles and concerns, bearing Internet handles that begin with 'U.'

Larry D'Anna: Yes, if the agent is powerful you might be killed even by mindless thrashings.

My point was that in an adversarial situation, you should assume your opponent will always make perfect choices. Then their mistakes are to your advantage. If you're ready for optimal thrashings, random thrashings will be easier. If you know your opponent will make some kind of mistake, then you can set a trap. But for that you have to understand it very well.

If we are up against an AI that always makes the right choice, then it will almost certainly get what it wants.

If we're going to say that evolution doesn't think, then we should also say it doesn't want. Arguably wanting is anthropomorphizing.

I think we can still use the plan-for-the-strongest-possible-attack strategy though, because we can define it in terms of what we want. (but we still have to look at how the opponent 'thinks', not how we do)

Roko: then some significant fraction of possible minds will approximate those objective values.

I would say that effective minds would find those values as useful strategies. (and for this reason evolved minds would approximate them.)

But all possible minds would be all over the place.

And yet, you keep using computation metaphors to talk about brains: "preference ordering", "solution space", "relative to goals", and so on. If you are criticizing a given attitude, shouldn't you avoid it? And given your own biases, should you really be so demeaning to the collective-evolution proponents? Sorry, but it smells like you are predicting the downfall of their theories after the fact.

However, both Unknown and Utilitarian claim to be distinct, and I won't contest their claims.

Marcio: Computation is a generalization that extends to the brain. The brain is not a generalization that extends to computational processes. This is not a two-way street.

Humans faced with resource constraints did find the other approach.

Traditionally, rather than restrict our own breeding, our response has been to enslave our neighbors. Force them to work for us, to provide resources for our own children. But don't let them have children. Maybe castrate the males, if necessary kill the females' children. (It was customary to expose deformed or surplus children. If a slave does get pregnant, whose child is surplus?)

China tried the "everybody limit their children" approach. Urban couples were allowed one child, farm couples could have two. Why the difference? China officially did not have an "other" to enslave. They had to try to make it fair. But why would the strong be fair to the weak? Slaves are bred when there's so much room to expand that the masters' children can't fill the space, and another requirement is that it's easier, cheaper, or safer to breed them than to capture more.

Traditionally slavery was the humane alternative to genocide.

Why didn't Eliezer think that way? My guess is that he is a good man and so he supposed that human populations would think in terms of what's good and fair for everyone, the way he does.

He applied anthropomorphic optimism to human beings.

Great comments thread! Thanks all!

Seconding Roko, Carl, HA, Nick T, etc.

Eliezer or Robin: Can you cite evidence for "we can more persuasively argue, for what we honestly believe". My impression is that it has been widely assumed in evolutionary psychology and fairly soundly refuted in the general psychology of deception, which tells us that the large majority of people detect lies at about chance and that similar effort seems to enable the development of the fairly rare skill of the detection of lies and evasion of such detection.

Carl: Unknown and Utilitarian could be distinct but highly correlated (we're both here after all). In principle we could see them as both unpacking the implications of some fairly simple algorithm. Have you noticed them both making the same set of mistakes in their efforts to understand Bayesian reasoning, anthropics, decision theory, etc? Still could be the same program running on different sets of wetware.

"However, those objective values probably differ quite a lot from most of what most human beings find important in their lives; for example our obsessions with sex, romance and child-rearing probably aren't in there."

Several years ago, I was attracted to pure libertarianism as a possible objective morality for precisely this reason. The idea that, eg., chocolate tastes good can't possibly be represented directly in an objective morality, as chocolate is unique to Earth and objective moralities need to apply everywhere. However, the idea of immorality stemming from violation of another person's liberty seemed simple enough to arise spontaneously from the mathematics of utility functions.

It turns out that you do get a morality out of the mathematics of utility functions (sort of), in the sense that utility functions will tend towards certain actions and away from others unless some special conditions are met. Unfortunately, these actions aren't very Friendly; they involve things like turning the universe into computronium to solve the Riemann Hypothesis (see http://selfawaresystems.files.wordpress.com/2008/01/ai_drives_final.pdf for some examples). If libertarianism really was a universal morality, Friendly AI would be much simpler, as we could fail on the first try without the UFAI killing us all.

typo in the post, surely..

"Wade repeatedly selected insect subpopulations for low numbers of adults per subpopulation"

Didn't he in fact select sub-populations with low numbers of infants? Or am I misunderstanding completely.

The selection pressure is supposed to be lower adult populations, how many infants doesn't really matter. Selecting on fewer infants (I assume you mean larva here - if not what's the difference?) would force the expected result (restricted breeding of some kind) instead of allowing multiple potential paths to achieve the same goal. Falsification must be a possibility.

Michael: both making the same set of mistakes in their efforts to understand Bayesian reasoning, anthropics, decision theory, etc?

Would you like to elaborate on what those mistakes are (thereby helping out this simple but misguided algorithm)? If there are links to existing sources, that would be enough.

Vassar, surprised to see you seconding Roko. Did you confuse his post with someone else's?

Roko, convergent instrumental values are neither universalizable nor objective. A paperclip maximizer and a cheesecake maximizer both want to survive, and to capture all the resources in the universe; they don't want each other to survive, and would disagree about how to allocate the resources. Even if they negotiate, they would still prefer the other dead; and we humans would prefer that both of them never exist in the first place.

J Thomas, I didn't say that no human would think of cannibalizing girls; I'm saying it wouldn't occur to you, and did not in fact occur to the group selectionists. Anthropomorphism can just as easily be 1st-World-o-morphism and usually is.

The "mistake" Michael is talking about it the belief that utility maximization can lead to counter intuitive actions, in particular actions that humanly speaking are bound to be useless, such as accepting a Wager or a Mugging.

This is in fact not a mistake at all, but a simple fact (as Carl Shulman and Nick Tarleton suspect.) The belief that it does not is simply a result of Anthropomorphic Optimism as Eliezer describes it; i.e. "This particular optimization process, especially because it satisfies certain criteria of rationality, must come to the same conclusions I do." Have you ever considered the possibility that your conclusions do not satisfy those criteria of rationality?

Lara Foster: We know the location and connections of every one of C. elegans 213 neurons... Why can't we make a device that will do the same thing yet? Too much anthropomorphism?

Too much anthropomorphism is precisely what a lot of AI research looks like to me (Google ["cognitive architecture"] for the sort of thing I mean), although C. elegans isn't a good example of what we can't do. All it takes to climb concentration gradients is a biased random walk, and light-seeking is a standard project with hobby robot kits.

Building it the size of a caterpillar, let alone a flatworm, is a more challenging task, but not one of AI. (However, I would not be surprised if some AI researcher were to claim that smallness is the key missing ingredient, as has been claimed in the past for computational power, parallelism, analog computation, emotions, and embodiment.)

We don't know how brains work, and our view from the inside doesn't tell us.
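The biased-random-walk remark above is easy to make concrete. Here is a minimal sketch (not from the comment; the concentration field and every parameter are invented for illustration): a walker that merely tumbles less often while the local concentration is rising will climb the gradient, with no goal representation at all.

```python
# Run-and-tumble chemotaxis as a biased random walk.
# The Gaussian "concentration" field and all parameters are invented.
import math
import random

def concentration(x, y):
    # Peak at the origin, falling off smoothly with distance.
    return math.exp(-(x * x + y * y) / 50.0)

x, y = 10.0, 10.0            # start well away from the peak (~14 units out)
heading = 0.0
step = 0.5
prev_c = concentration(x, y)

for _ in range(2000):
    x += step * math.cos(heading)
    y += step * math.sin(heading)
    c = concentration(x, y)
    # Bias: tumble (pick a random new heading) rarely while things improve,
    # often while they get worse.  No memory, no map, no explicit goal.
    tumble_prob = 0.1 if c > prev_c else 0.7
    if random.random() < tumble_prob:
        heading = random.uniform(0.0, 2.0 * math.pi)
    prev_c = c

# Typically ends up far closer to the peak than the ~14 units it started at.
print("final distance from peak:", round(math.hypot(x, y), 2))
```

This is essentially the run-and-tumble strategy bacteria use, which is the sense in which gradient-climbing needs no anthropomorphic machinery at all.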

Carl Shulman: Occam's razor makes me doubt that we have two theoretical negative utilitarians (with egoistic practice) who endorse Pascal's wager, with similar writing styles and concerns, bearing Internet handles that begin with 'U.'

michael vassar: Unknown and Utilitarian could be distinct but highly correlated (we're both here after all). In principle we could see them as both unpacking the implications of some fairly simple algorithm.

With thousands of frequent-poster-pairs with many potentially matchable properties, I'm not too shocked to find a pair that match on six mostly correlated properties.

Kennaway- I meant why can't we make something that does what C. elegans does, in the same way that C. elegans does it, using its neural information. Clearly our knowledge must be incomplete in some respect. If we could do that, then imitating not only the size, but the programming of the caterpillar would be much more feasible. At least three complex programs are obvious: 1) Crawl - coordinated and changeable sinusoidal motion seems a great way to move, yet the MIT 'caterpillar' is quite laughable in comparison to the dexterity of the real thing; 2) Seek - this involves a circular motion of the head, sensing some chemical signal, and changing directions accordingly; 3) Navigate - the caterpillar is skillfully able to go over, under, and around objects, correcting its path to its original without doing the weird head-twirling thing, indicating that aside from chemoattraction, it has some sense of directional orientation, which it must have or else its motion would be a random walk with correction and not a direct march. I wonder how much of these behaviors operate independently of the brain.

Lara Foster, what do you put as the odds that our knowledge is complete, but no one has tried to make something directly copying C elegans?

From Nick Bostrom's paper on infinite ethics:

"If there is an act such that one believed that, conditional on one’s performing it, the world had a 0.00000000000001% greater probability of containing infinite good than it would otherwise have (and the act has no offsetting effect on the probability of an infinite bad), then according to EDR one ought to do it even if it had the certain side‐effect of laying to waste a million human species in a galactic‐scale calamity. This stupendous sacrifice would be judged morally right even though it was practically certain to achieve no good. We are confronted here with what we may term the fanaticism problem."

Later:

"Aggregative consequentialism is often criticized for being too “coldly numerical” or too revisionist of common morality even in the more familiar finite context. Suppose that I know that a certain course of action, though much less desirable in every other respect than an available alternative, offers a one‐in‐a‐million chance of avoiding catastrophe involving x people, where x is finite. Whatever else is at stake, this possibility will overwhelm my calculations so long as x is large enough. Even in the finite case, therefore, we might fear that speculations about low‐probability‐high‐stakes scenarios will come to dominate our moral decision making if we follow aggregative consequentialism."

Exactly. Utility maximizing together with an unbounded utility function necessarily leads to what Nick calls fanaticism. This is the usual use of the term: people call other people fanatics when their utility functions seem to be unbounded.

As Eliezer has pointed out, it is a dangerous sign when many people agree that something is wrong without agreeing why; we see this happening in the case of Pascal's Wager and Pascal's Mugging. In reality, a utility maximizer with an unbounded utility function would accept both. The readers of this blog, being human, are not utility maximizers. But they are unwilling to admit it because certain criteria of rationality seem to require being such.

My point was that in an adversarial situation, you should assume your opponent will always make perfect choices. Then their mistakes are to your advantage. If you're ready for optimal thrashings, random thrashings will be easier.

It isn't that simple. When their perfect choice means you lose, then you might as well hope they make mistakes. Don't plan for the worst that can happen, plan for the worst that can happen which you can still overcome.

One possible mistake they can make is to just be slow. If you can hit them hard before they can react, you might hurt them enough to get a significant advantage. Then if you keep hitting them hard and fast they might reach the point they're trying to defend against the worst you can do. While they are trying to prepare for the worst attack you can make, you hit them hard with the second-worst attack that they aren't prepared for. Then when they try to defend against whatever they think you'll do next, you do something else bad. It's really ideal when you can get your enemy into that stance.

Of course it's even better when you can persuade them not to be your enemy in the first place. If you could do that reliably then you would do very well. But it's hard to do that every time. The enemy gets a vote.

"If there is an act such that one believed that, conditional on one’s performing it, the world had a 0.00000000000001% greater probability of containing infinite good than it would otherwise have (and the act has no offsetting effect on the probability of an infinite bad), then according to EDR one ought to do it even if it had the certain side‐effect of laying to waste a million human species in a galactic‐scale calamity.

The assumption is that when you lay waste to a million human species the bad that is done is finite.

Is there solid evidence for that? If there's any slightest chance that it will result in infinite bad, then the problem is much more complicated.

Before this reasoning makes sense, there must not be even a 0.00000000000001% probability that the evil you do is infinite.

Doug- too much stuff to put into an actual calculation, but I doubt we have complete knowledge, given how little we understand epigenetics (iRNA, 22URNAs, and other micro RNAs), synaptic transcription, cytoskeletal transport, microglial roles, the enteric nervous system, native neuro-regeneration, and lo and behold, neurotransmitters themselves. The 3rd edition of Kandel I was taught out of as an undergrad said nothing of orexins, histamine, the other roles of melatonin beyond the pineal gland, or the functions of the multifarious set of cannabinoid receptors, yet we now know (a short 2 years later) that all of these transmitters seem to play critical roles. Now, not being an elegans gal, I don't know if it has much simpler neurotransmission than we do. I would venture to guess it is simpler, but not extraordinarily so, and probably much more affected by simple epigenetic mechanisms like RNA interference. In C. elegans, iRNA messages are rapidly amplified, quickly shutting off the target gene in all of its cells (mammals have not been observed to amplify). Now, here's the kicker- it gets into the germ cells too! So offspring will also produce iRNAs and shut off genes! Now, due to amplification error, iRNAs are eventually lost if the worms are bred long enough, but Craig Mello is now exploring the hypothesis that amplified iRNAs can eventually permanently disable (viral) DNA that's been incorporated into the C. elegans genome, either by more permanent epigenetic modification (methylation or coiling), or by splicing it out... Sorry for the MoBio lecture, but DUDE! This stuff is supercool!!!

Actually, if you want a more serious answer to your question, you should contact Sydney Brenner or Marty Chalfie, who actually worked on the C. elegans projects. Brenner is very odd and very busy, but Chalfie might give you the time of day if you make him feel important and buy him lunch.... Marty is an arrogant sonuvabitch. Wouldn't give me a med school rec, because he claimed not to know anything about me other than that I was the top score in his genetics class. I was all like, "Dude! I was the one who was always asking questions!" And he said, "Yes, and then class would go overtime." Lazy-Ass-Sonuvabitch... But still a genius.

If there's any slightest chance that it will result in infinite bad, then the problem is much more complicated.

There's always a nonzero chance that any action will cause an infinite bad. Also an infinite good. Even with finite but unbounded utility functions, this divergence occurs.

Utility maximizing together with an unbounded utility function necessarily lead to what Nick calls fanaticism.

Bounded utility functions have counterintuitive results as well. Most of these only show up in rare (but still realistic) global "what sort of world should we create" situations, but there can be local effects too; as I believe Carl Shulman pointed out, bounded utility causes your decisions to be dominated by low-probability hypotheses that there are few people (so your actions can have a large effect.)

Nick, can you explain how that happens with bounded utility functions? I was thinking basically something like this: if your maximum utility is 1000, then something that has a probability of one in a million can't have a high expected value or disvalue, because it can't be multiplied by more than 1000, and so the expected value can't be more than 0.001.

This seems to me the way humans naturally think, and the reason that sufficiently low-probability events are simply ignored.
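Written out, the bound in the comment above is just

$$\lvert \mathbb{E}[\Delta U] \rvert \;\le\; p \cdot U_{\max} \;=\; 10^{-6} \times 1000 \;=\; 10^{-3},$$

so under a bounded utility function no one-in-a-million possibility can contribute more than 0.001 in expected utility.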

steven

My argument for bounded utility is this: if a mind is so far into the future that specifying where it is takes as much information input as generating the mind from scratch, in what sense is it more present-in-the-world than the mind-from-scratch is?

Roko

@ Unknown: "The readers of this blog, being human, are not utility maximizers. But they are unwilling to admit it because certain criteria of rationality seem to require being such."

  • I am proud to admit that I am not a utility maximizer. I'm actually busy writing a blog post on why utility maximization is a misguided ethical theory.
Roko

@Eliezer: Vassar, surprised to see you seconding Roko. Did you confuse his post with someone else's? Roko, convergent instrumental values are neither universalizable nor objective.

  • What exactly do you mean by these words? I fear that we may be using the same words to mean different things, hence our disagreements...

A paperclip maximizer and a cheesecake maximizer both want to survive, and to capture all the resources in the universe; they don't want each other to survive, and would disagree about how to allocate the resources. Even if they negotiate, they would still prefer the other dead;

  • I agree with this, but why do you think that this is important?

and we humans would prefer that both of them never exist in the first place.

  • I agree with this, I don't advocate creating a paperclip maximizer.
Roko

@Tom McCabe: It turns out that you do get a morality out of the mathematics of utility functions (sort of), in the sense that utility functions will tend towards certain actions and away from others unless some special conditions are met. Unfortunately, these actions aren't very Friendly; they involve things like turning the universe into computronium to solve the Riemann Hypothesis (see http://selfawaresystems.files.wordpress.com/2008/01/ai_drives_final.pdf for some examples).

  • I'm not actually focusing on the values/ethics/morality that you can get out of utility functions, I'm asking the more general question of what values/ethics/morality you can get out of the mathematics of an agent with goals interacting with an environment. Utilitarian agents are just one example of such agents.

I think that the canonical set of instrumental values that Omohundro, Hollerith and myself have been talking about have perhaps been slated more than they deserve. To me, it seems that the four "basic drives" - Self-preservation, Acquisition, Efficiency, Creativity, embody precisely the best aspects of human civilization. Not the best aspects of individual behavior, mind you, which is a different problem.

But I think that we would all like to see a civilization that acquired more free energy and space, worked harder to preserve its own existence (I think some people at oxford might have had a small gathering about that one recently), used its resources more efficiently, and strove for a greater degree of creativity. In fact I cannot think of a more concise and general description of the goals of transhumanism than Omohundro's basic AI drives, where the "agent" is our entire human civilization.

"Nick, can you explain how that happens with bounded utility functions? I was thinking basically something like this: if your maximum utility is 1000, then something that has a probability of one in a million can't have a high expected value or disvalue, because it can't be multiplied by more than 1000, and so the expected value can't be more than 0.001." Suppose that your utility function is U=1-1/X where X is the number of cheesecakes produced, so your utility is bounded between 0 and 1. Quantum physics, modal realism, and other 'Big World' theories indicate that X approaches infinity. Therefore, conditional on the world being anything like it appears to be, the utility of any action is infinitesimal. If there is a one in a million subjective probability that no cheesecakes exist and you can create some (because of weird, probably false physical theories, claims about the Dark Lords of the Matrix, etc) then the expected value of pursuing the routes conditional on those weird theories will be ~1/1,000,000, absurdly greater than any course of action conditional on 'normal' understanding.

Since we don't currently have utility functions, should we assign vastly greater weight to scenarios where our values renormalize to something resembling an unbounded utility function than a bounded one?
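To put rough numbers on the U = 1 - 1/X example above (a sketch only; the specific counts and probabilities below are invented for illustration):

```python
# Rough numbers for the bounded utility function U(X) = 1 - 1/X from the
# comment above.  All specific quantities here are invented for illustration.

def u(x):
    # Bounded in (0, 1]: approaches 1 as the cheesecake count x grows.
    return 1.0 - 1.0 / x

# "Big World" case: astronomically many cheesecakes already exist, so making
# one more barely moves a bounded utility function.  The marginal gain
# u(x + 1) - u(x) simplifies to 1 / (x * (x + 1)); we use that form to dodge
# floating-point cancellation.
big_world_count = 1e30
normal_gain = 1.0 / (big_world_count * (big_world_count + 1.0))

# Weird theory, subjective probability one in a million: almost no cheesecakes
# exist (baseline of one, so U stays defined) and acting on the theory would
# create ten of them.
p_weird = 1e-6
weird_gain = p_weird * (u(10) - u(1))

print(f"expected gain, normal action:    {normal_gain:.1e}")   # ~1e-60
print(f"expected gain, weird-theory bet: {weird_gain:.1e}")    # ~9e-07
```

Because the bounded function is already saturated by the Big World, the one-in-a-million hypothesis dominates the comparison by dozens of orders of magnitude, which is the behavior the comment describes.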

steven: is there a similar spatial argument to bound the utility attainable at any given time?

BTW, I'm disappointed Eliezer hasn't addressed "...a consequentialist would produce intuitively acceptable outcomes rather than dismantling the universe looking for the Dark Lords of the Matrix or magic physics to generate great utility."

Carl, that's not a feature of all bounded utility functions. If you discount exponentially for time (or more plausibly for the algorithmic complexity of locating an observer-moment in the universe, and I might want to do this for reasons independent of Pascal) then the value of something does not depend on how much of that something is already in existence, but it still seems like you couldn't get to infinite utilities.
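One way to make the complexity-discounting point precise (a gloss on the comment above, under two added assumptions: K is prefix-free Kolmogorov complexity, and per-observer-moment utility is bounded by some constant B):

$$\Bigl\lvert \sum_x 2^{-K(x)}\, u(x) \Bigr\rvert \;\le\; B \sum_x 2^{-K(x)} \;\le\; B,$$

where the last step is the Kraft inequality. The weight of an observer-moment then does not depend on how much of anything already exists, yet the total can never diverge to infinity.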

Nick, it seems like if you discounted for algorithmic complexity that should imply some way to discount for extreme distances in space, yes. Not sure how exactly.

Eliezer: I think that you misunderstand Roko, but that doesn't really matter, as he seems to understand you fairly well right now and to be learning effectively.

Unknown: Not at all. Utility maximization is very likely to lead to counterintuitive actions, and might even lead to humanly useless ones, but the particular actions it leads to are NOT whatever salient actions you wish to justify but are rather some very specific set of actions that have to be discovered. Seriously, you NEED to stop reasoning with rough verbal approximations of the math and actually USE the math instead. Agreement emerges from Bayes, but the mere fact that you call something agreement doesn't strongly suggest that it is Bayesian. Seriously I have tried to communicate with you, Carl has tried, and Nick has tried. You aren't interested in figuring out the actual counter-intuitive consequences of your beliefs, as you are too afraid that you would have to act, you know, counter-intuitively, which in actuality you don't do at all. As far as I can tell you aren't worth any of us wasting any more time on.

Lara: invertebrates are, I believe, generally believed to have MUCH simpler sets of neurotransmitters than vertebrates. Think how few genes fruit flies have.

Eliezer wrote: > Vassar, surprised to see you seconding Roko

Michael Vassar wrote: > Eliezer: I think that you misunderstand Roko, but that doesn't really matter, as he seems to understand you fairly well right now and to be learning effectively.

So WHO is Roko?

An ex-member who deleted all of his posts.

Thanks, I googled for roko + lesswrong and after reading some comments this whole Roko-affair seems to be kinda creepy...

A pity, mostly - he had a lot of useful contributions, got in an argument with Eliezer, and left after deleting all of his posts. I think he came back briefly a few months afterwards to clarify some of his positions, but didn't stay long. It's the kind of drama that seems unfortunately common in online communities :P

There's always a nonzero chance that any action will cause an infinite bad. Also an infinite good.

Then how can you put error bounds on your estimate of your utility function?

If you say "I want to do the bestest for the mostest, so that's what I'll try to do" then that's a fine goal. When you say "The reason I killed 500 million people was that according to my calculations it will do more good than harm, but I have absolutely no way to tell how correct my calculations are" then maybe something is wrong?

Roko: it's good to see that there is at least one other human being here.

Carl, thanks for that answer, that makes sense. But actually I suspect that normal humans have bounded utility functions that do not increase indefinitely with, for example, cheese-cakes. Instead, their functions have an absolute maximum which is actually reachable, and nothing else that is done will actually increase it.

Michael Vassar: Actually in real life I do some EXTREMELY counterintuitive things. Also, I would be happy to know the actual consequences of my beliefs. I'm not afraid that I would have to act in any particular way, because I am quite aware that I am a human being and do not have to act according to the consequences of my beliefs unless I want to. I often hold beliefs without acting on them, in fact.

If there is a 90% chance that utility maximization is correct, and a 10% chance that Roko is correct (my approximate estimates), how should one act? You cannot simply "use the math", as you suggest, because conditional on the 10% chance, you shouldn't be using the math at all.

Are we really still beating up on group selectionism here, Eliezer?

I think this fallacy needs to be corrected. Yes, group selection is real. Maybe not in the anthropomorphic way of organisms "voluntarily" restraining their breeding, but in terms of adaptation, yes, individual genomes will adapt to survive better as per the requirements of the group. They have no choice BUT to do this, else they go extinct.

The example Eliezer gave of insect populations being selected for low population, actually proves group selectionism. Why? Because it doesn't matter that the low group population was achieved by cannibalism, so long as the populations were low so that their prey-population would not crash.

Saying group selection isn't real is as fallacious as saying a “Frodo” gene cannot exist, despite the fact that it does, in reality.

Can we correct these misconceptions yet?

The criticism as I read it isn't against group selection in general - just looking at Eliezer's examples should tell you that he believes a type of group selection can and does exist.

The initial idea behind group selection, however, was that genes would be selected for that were detrimental to the individual, yet positive for the group. Wade's experiment proved this wrong, without eliminating the idea of group selection altogether.

This is what Eliezer is saying is an evolutionary fairy tale. When group selection occurs, it absolutely must occur via a mechanism that gives individual genes an advantage. It cannot occur via allele sacrifice without in some way increasing the survival of the allele, because that allele will decrease in the population, preventing future sacrifice.

It's the bias that led to the hypothesis that is obviously wrong in hindsight, and that is what Eliezer speaks against, not group selection in general (though I do think he thinks group selection isn't nearly as influential as group selectionists wish it were). Anthropomorphic optimism is the reason group selectionists first hypothesized the pretty picture of restrained breeding, when the strategy that makes the most sense evolutionarily is cannibalism, and if they had been aware of their bias they may have actually predicted the optimal strategy before performing the experiment, instead of being so completely wrong.

Michael- ah yes, that makes a lot of sense. Of course if the worm's only got 213 neurons, it's not going to have hundreds of neurotransmitters. That being said, it might have quite a few different receptor sub-types and synaptic modification mechanisms. Even so... It would seem theoretically feasible to me for someone to hook up electrodes to one neuron at a time and catalog not only the location and connections of each neuron, but also what the output of each synapse is and what the resulting PSPs are during normal C. elegans behaviors... Now that's something I should tell Brenner about, given his penchant for megalomaniacal information gathering projects (he did the C. elegans genome, a catalog of each cell in its body throughout its embryonic development, and its neural connections).


But later on, Michael J. Wade went out and actually created in the laboratory the nigh-impossible conditions for group selection.  Wade repeatedly selected insect subpopulations for low numbers of adults per subpopulation.  Did the insects evolve to restrain their breeding, and live in quiet peace with enough food for all, as the group selectionists had envisioned?

No; the adults adapted to cannibalize eggs and larvae, especially female larvae.

What would have happened if Wade had also repeatedly selected subpopulations for not doing that?

Such a low-ranking solution as "Everyone have as many kids as possible, then cannibalize the girls" would not be generated in your search process.

Like...  "A Modest Proposal"?  I would suggest that low-ranking solutions are very often generated and are simply discarded without comment in the vast majority of cases.  The only way "efficiency" enters into it comes from the way we start our search for solutions by considering how to adapt already known solutions to similar problems.

This does, in fact, show up in evolution as well.  Adapting existing solutions is far more common than inventing something entirely new.  Like how the giraffe has the same number of cervical vertebrae as (so far as I know) every other four-legged mammal on the planet.