Thou Art Godshatter

by Eliezer Yudkowsky · 5 min read · 13th Nov 2007 · 75 comments

Evolution · World Modeling

Before the 20th century, not a single human being had an explicit concept of "inclusive genetic fitness", the sole and absolute obsession of the blind idiot god. We have no instinctive revulsion of condoms or oral sex. Our brains, those supreme reproductive organs, don't perform a check for reproductive efficacy before granting us sexual pleasure.

Why not? Why aren't we consciously obsessed with inclusive genetic fitness? Why did the Evolution-of-Humans Fairy create brains that would invent condoms? "It would have been so easy," thinks the human, who can design new complex systems in an afternoon.

The Evolution Fairy, as we all know, is obsessed with inclusive genetic fitness. When she decides which genes to promote to universality, she doesn't seem to take into account anything except the number of copies a gene produces. (How strange!)

But since the maker of intelligence is thus obsessed, why not create intelligent agents - you can't call them humans - who would likewise care purely about inclusive genetic fitness? Such agents would have sex only as a means of reproduction, and wouldn't bother with sex that involved birth control. They could eat food out of an explicitly reasoned belief that food was necessary to reproduce, not because they liked the taste, and so they wouldn't eat candy if it became detrimental to survival or reproduction. Post-menopausal women would babysit grandchildren until they became sick enough to be a net drain on resources, and would then commit suicide.

It seems like such an obvious design improvement - from the Evolution Fairy's perspective.

Now it's clear, as was discussed yesterday, that it's hard to build a powerful enough consequentialist. Natural selection sort-of reasons consequentially, but only by depending on the actual consequences. Human evolutionary theorists have to do really high-falutin' abstract reasoning in order to imagine the links between adaptations and reproductive success.

But human brains, made of protein, clearly can imagine these links. So when the Evolution Fairy made humans, why did It bother with any motivation except inclusive genetic fitness?

It's been less than two centuries since a protein brain first represented the concept of natural selection. The modern notion of "inclusive genetic fitness" is even more subtle, a highly abstract concept. What matters is not the number of shared genes. Chimpanzees share 95% of your genes. What matters is shared genetic variance, within a reproducing population - your sister is one-half related to you, because any variations in your genome, within the human species, are 50% likely to be shared by your sister.
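
That one-half figure can be checked with a tiny Monte Carlo sketch (an illustration of my own, not anything from the post; the function name and locus count are made up for the example): each parent carries two alleles at a locus, each child draws one from each parent, and we ask how often a variant carried by one sibling turns up in the other.

```python
import random

def sibling_shared_fraction(n_loci=100_000, seed=0):
    """Monte Carlo estimate of relatedness between full siblings:
    how often a given allele in one sibling is also carried by the
    other, by descent from the same parents."""
    rng = random.Random(seed)
    shared = 0.0
    for _ in range(n_loci):
        mom, dad = ("m1", "m2"), ("d1", "d2")   # four distinct parental alleles
        child_a = (rng.choice(mom), rng.choice(dad))
        child_b = (rng.choice(mom), rng.choice(dad))
        # fraction of A's two alleles that B also carries
        shared += sum(a in child_b for a in child_a) / 2
    return shared / n_loci

print(round(sibling_shared_fraction(), 2))  # ≈ 0.5
```

Identity-by-descent among the *variable* alleles, not raw sequence overlap, is what the 50% measures - which is why it coexists with sharing 95% of your genes with a chimpanzee.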

Only in the last century - arguably only in the last fifty years - have evolutionary biologists really begun to understand the full range of causes of reproductive success, things like reciprocal altruism and costly signaling. Without all this highly detailed knowledge, an intelligent agent that set out to "maximize inclusive fitness" would fall flat on its face.

So why not preprogram protein brains with the knowledge? Why wasn't a concept of "inclusive genetic fitness" programmed into us, along with a library of explicit strategies? Then you could dispense with all the reinforcers. The organism would be born knowing that, with high probability, fatty foods would lead to fitness. If the organism later learned that this was no longer the case, it would stop eating fatty foods. You could refactor the whole system. And it wouldn't invent condoms or cookies.

This looks like it should be quite possible in principle. I occasionally run into people who don't quite understand consequentialism, who say, "But if the organism doesn't have a separate drive to eat, it will starve, and so fail to reproduce." So long as the organism knows this very fact, and has a utility function that values reproduction, it will automatically eat. In fact, this is exactly the consequentialist reasoning that natural selection itself used to build automatic eaters.

What about curiosity? Wouldn't a consequentialist only be curious when it saw some specific reason to be curious? And wouldn't this cause it to miss out on lots of important knowledge that came with no specific reason for investigation attached? Again, a consequentialist will investigate given only the knowledge of this very same fact. If you consider the curiosity drive of a human - which is not undiscriminating, but responds to particular features of problems - then this complex adaptation is purely the result of consequentialist reasoning by DNA, an implicit representation of knowledge: Ancestors who engaged in this kind of inquiry left more descendants.

So in principle, the pure reproductive consequentialist is possible. In principle, all the ancestral history implicitly represented in cognitive adaptations can be converted to explicitly represented knowledge, running on a core consequentialist.

But the blind idiot god isn't that smart. Evolution is not a human programmer who can simultaneously refactor whole code architectures. Evolution is not a human programmer who can sit down and type out instructions at sixty words per minute.

For millions of years before hominid consequentialism, there was reinforcement learning. The reward signals were events that correlated reliably to reproduction. You can't ask a nonhominid brain to foresee that a child eating fatty foods now will live through the winter. So the DNA builds a protein brain that generates a reward signal for eating fatty food. Then it's up to the organism to learn which prey animals are tastiest.

DNA constructs protein brains with reward signals that have a long-distance correlation to reproductive fitness, but a short-distance correlation to organism behavior. You don't have to figure out that eating sugary food in the fall will lead to digesting calories that can be stored as fat to help you survive the winter so that you mate in spring to produce offspring in summer. An apple simply tastes good, and your brain just has to plot out how to get more apples off the tree.
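
The short-distance/long-distance split can be sketched as a toy reinforcement learner (a hypothetical illustration, not anything from the post): the agent is only ever trained on the proxy signal - taste - so its behavior does not change when the distal fitness payoff does.

```python
import random

def run_agent(taste_reward, steps=2000, eps=0.1, lr=0.1, seed=1):
    """Epsilon-greedy value learner. Note that it only ever sees
    taste_reward - no measure of fitness enters its update rule."""
    rng = random.Random(seed)
    q = {"sweet": 0.0, "bland": 0.0}
    for _ in range(steps):
        if rng.random() < eps:
            action = rng.choice(list(q))    # occasional exploration
        else:
            action = max(q, key=q.get)      # greedy on learned taste value
        q[action] += lr * (taste_reward[action] - q[action])
    return max(q, key=q.get)

# Proximal reward: sweetness. In the ancestral environment this tracked fitness.
taste = {"sweet": 1.0, "bland": 0.2}
# Hypothetical modern environment: the distal payoff has come apart from taste.
fitness = {"sweet": 0.0, "bland": 0.2}

choice = run_agent(taste)
print(choice)           # sweet - behavior tracks the reinforcer...
print(fitness[choice])  # 0.0  - ...not the thing the reinforcer once stood for
```

Swap in any numbers for the hypothetical `fitness` table; the agent's choice is unaffected, because fitness never appears in its update rule. That is the condom-inventing failure mode in miniature.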

And so organisms evolve rewards for eating, and building nests, and scaring off competitors, and helping siblings, and discovering important truths, and forming strong alliances, and arguing persuasively, and of course having sex...

When hominid brains capable of cross-domain consequential reasoning began to show up, they reasoned consequentially about how to get the existing reinforcers. It was a relatively simple hack, vastly simpler than rebuilding an "inclusive fitness maximizer" from scratch. The protein brains plotted how to acquire calories and sex, without any explicit cognitive representation of "inclusive fitness".

A human engineer would have said, "Whoa, I've just invented a consequentialist! Now I can take all my previous hard-won knowledge about which behaviors improve fitness, and declare it explicitly! I can convert all this complicated reinforcement learning machinery into a simple declarative knowledge statement that 'fatty foods and sex usually improve your inclusive fitness'. Consequential reasoning will automatically take care of the rest. Plus, it won't have the obvious failure mode where it invents condoms!"

But then a human engineer wouldn't have built the retina backward, either.

The blind idiot god is not a unitary purpose, but a many-splintered attention. Foxes evolve to catch rabbits, rabbits evolve to evade foxes; there are as many evolutions as species. But within each species, the blind idiot god is purely obsessed with inclusive genetic fitness. No quality is valued, not even survival, except insofar as it increases reproductive fitness. There's no point in an organism with steel skin if it ends up having 1% less reproductive capacity.

Yet when the blind idiot god created protein computers, its monomaniacal focus on inclusive genetic fitness was not faithfully transmitted. Its optimization criterion did not successfully quine. We, the handiwork of evolution, are as alien to evolution as our Maker is alien to us. One pure utility function splintered into a thousand shards of desire.

Why? Above all, because evolution is stupid in an absolute sense. But also because the first protein computers weren't anywhere near as general as the blind idiot god, and could only utilize short-term desires.

In the final analysis, asking why evolution didn't build humans to maximize inclusive genetic fitness, is like asking why evolution didn't hand humans a ribosome and tell them to design their own biochemistry. Because evolution can't refactor code that fast, that's why. But maybe in a billion years of continued natural selection that's exactly what would happen, if intelligence were foolish enough to allow the idiot god continued reign.

The Mote in God's Eye by Niven and Pournelle depicts an intelligent species that stayed biological a little too long, slowly becoming truly enslaved by evolution, gradually turning into true fitness maximizers obsessed with outreproducing each other. But thankfully that's not what happened. Not here on Earth. At least not yet.

So humans love the taste of sugar and fat, and we love our sons and daughters. We seek social status, and sex. We sing and dance and play. We learn for the love of learning.

A thousand delicious tastes, matched to ancient reinforcers that once correlated with reproductive fitness - now sought whether or not they enhance reproduction. Sex with birth control, chocolate, the music of long-dead Bach on a CD.

And when we finally learn about evolution, we think to ourselves: "Obsess all day about inclusive genetic fitness? Where's the fun in that?"

The blind idiot god's single monomaniacal goal splintered into a thousand shards of desire. And this is well, I think, though I'm a human who says so. Or else what would we do with the future? What would we do with the billion galaxies in the night sky? Fill them with maximally efficient replicators? Should our descendants deliberately obsess about maximizing their inclusive genetic fitness, regarding all else only as a means to that end?

Being a thousand shards of desire isn't always fun, but at least it's not boring. Somewhere along the line, we evolved tastes for novelty, complexity, elegance, and challenge - tastes that judge the blind idiot god's monomaniacal focus, and find it aesthetically unsatisfying.

And yes, we got those very same tastes from the blind idiot's godshatter. So what?


75 comments

Eliezer, you wrote:

"Or else what would we do with the future? What would we do with the billion galaxies in the night sky? Fill them with maximally efficient replicators? Should our descendants deliberately obsess about maximizing their inclusive genetic fitness, regarding all else only as a means to that end?"

Won't our descendants who do have genes or code that causes them to maximize their genetic fitness come to dominate the billions of galaxies? How can there be any other stable long-term equilibrium in a universe in which many lifeforms have the ability to choose their own utility functions?

PhilGoetz (11y): Genetic fitness refers to the reproduction of individuals. The future will not have a firm concept of individuals. What is relevant is control of resources; this is independent of reproduction. Furthermore, what we think of today as individuality will correspond to information in the future. Reproduction will correspond to high mutual information. And high mutual information in your algorithms leads to inefficient use of resources. Therefore, evolution, and competition, will at least in this way go against the future correlate of "genetic fitness".

Wow, too big an inferential distance, Phil. No idea what you are talking about here: "what we think of today as individuality, will correspond to information in the future."

Would you mind giving a few more details? Curiosity striking...

Lambda (9y): I've been lurking for a while, and this is my first post, but: FTFY. Instead of asking for a single detailed story, we should ask for many simple alternative stories, no? Obviously, this doesn't countermand your complaint about inferential distance, which I totally agree with.
[anonymous] (5y): Still waiting for OP to deliver... It's probably just something stupid like he thinks humans will upload on computers and he thinks he knows how future society-analogues will function.
Timo (5y): This /seems/ to contain great insight that I can't comprehend yet. Yes, please, how do I learn to see what you see?
aaq (5y): I'm very wary of this post for being so vague and not linking to an argument, but I'll throw my two cents in. :) I see two ways to interpret this:

1. You could see it as individuals being uploaded to some giant distributed AI - individual human minds coalescing into one big super-intelligence, or being replaced by one; or
2. Having so many individuals that the entire idea of worrying about 1 person, when you have 100 billion people per planet per quadrant or whatever, becomes laughable.

The common thread is that "individuality" is slowly being supplanted by "information" - specifically that you, as an individual, only become so because of your unique inflows of information slowly carving out pathways in your mind, like how water randomly carves canyons over millions of years. In a giant AI, all the varying bits that make up one human from another would get crosslinked, in some immense database that would make Jorge Luis Borges blush; meanwhile, in a civilization of huge, huge populations, the value of those varying bits simply goes down, because it becomes increasingly unlikely that you'll actually be unique enough to matter on an individual level. So, the next bottleneck in the spread of civilization becomes resources. This is probably my first comment on this site - feel free to browbeat me if I didn't get my point across well enough.

For everyone who hasn't read A Fire Upon The Deep (Vinge): Godshatter is the term he uses for a superintelligence ramming data and thought patterns into a human brain.

And "Thou art God" comes from Stranger in a Strange Land.

The human brain also has the complicating factor of memes and what might be called "inclusive memetic fitness." If you hypothesize that human behavior is influenced by two different sets of selfish replicators, we could certainly have an equilibrium in which natural selection doesn't produce behavior that maximizes the fitness of only one of them. (Incidentally, does it seem to anyone else that humans are "designed" to have unwanted pregnancies?)

(Also, "The Mote In God's Eye" isn't necessarily the best example. The Moties aren... (read more)

diegocaleiro (10y): Unwanted pregnancies and 'unwanted pregnancies' - if one cannot tell the difference, maybe it is because it is starting to disappear. I mean, theoretically we should tend more and more towards "oops, I forgot to take my pill today" and "oh, don't worry, just this one time without a condom". About the equilibrium between two sets of replicators: awesome as it looks, it doesn't seem feasible from a game-theoretical point of view. We are not the product of two replicators, we are the product of two KINDS of replicators. Each replicator, gene or meme, is fighting its own fight, and will not necessarily coalesce only with its kind. They are not tribes fighting one another; I suggest this is an atypical occurrence of Mind Projection Fallacy.

Why is Eliezer so obsessed with the "high-falutin'" expression?

"Obsess all day about inclusive genetic fitness? Where's the fun in that?" Might not our descendants evolve to consider it fun?

I agree with James Miller on the unstable equilibrium. I figure I'll be dead by then though.

I don't find it particularly comforting that we are made of many small shards of desire, rather than a single unified desire. We like the fact that the universe is at least locally dominated by creatures who like what we like. This will always be true: the majority will see a world of creatures mostly like themselves.

I don't find it particularly comforting that we are made of many small shards of desire, rather than a single unified desire.

It seems like this is a large part of what makes us eudaemonic agents, in Bostrom's terminology. However, the most admired people are frequently those who display more of a deep commitment than usual to one or a few passions.

"Thou Art Godshatter"! Finally, a name for my Christian/Prog/Electronica combo!

If we could barely arrange to have enough sex to cause 2 pregnancies per lifetime, then we would have a revulsion of condoms, oral sex, etc.

If that were true in the ancestral environment, and we had access to contraception in the AE, yes. I doubt a person who now found themselves in this situation would develop this revulsion.

I've read about Africans who specifically get hungry for meat, who identify the specific feeling of protein deficiency.

[anecdote] Is this surprising? I've always been able to tell whether or not I need proteins/carbohydrates/fat (usuall... (read more)

Will_Newsome (10y): TAWME, but I'm not sure if it is a consciously learned introspective behavior or something that I just picked up or developed without effort. FWIW I've only really noticed and acted on it for the last year or two.
MaxNanasy (9y): What does "TAWME" mean?
RichardKennaway (9y): "This Agrees With My Experience".

Eliezer: poetic and informative. I like it.

@Eliezer, you are slowly changing your point of view and are on a path to rethink old thoughts. Save yourself some time and go read the Principia Cybernetica Web. Only after that will you be able to tread on new ground.

@Nick Tarleton, yes - avoiding a dystopia of non-eudaemonic agents is a challenge.

As a chicken is a way for an egg to create another egg, I would like to 'tell my genes to jump in a lake', as Steven Pinker puts it. But considering how many of my preferences are in sync with my genes, I have the feeling they are very good at getting me to rationalize their preferences. I don't think there's intrinsic meaning in anything, but when I see connections, or patterns, in music or jokes or anything, that I haven't noticed before, I find that meaningful - pleasurable in a way my genes can't understand. But my love for my kids, and the meaning it gives me - clearly the gene gremlins are at work.

You know, you're getting repetitive. What does this post add to all the other related posts? "Evolutions" are stupid and slow. Okay. But I would guess that many people here want to know your thoughts about AI. I do.

@Tiiba, my paper on friendly AI theory should provide an answer to your question.

"I doubt a person who now found themselves in this situation would develop this revulsion."

By about the second generation a lot would. They would mostly be descended from people who hadn't used them. There is a minority that has a revulsion for condoms now. The idea of deliberately giving up practically your only chance to have children would start seeming strange when everybody in the world had parents who hadn't done it. Cultures change faster when that happens.

"@Tiiba, my paper on friendly AI theory should provide an answer to your question."

I don't see any connection between my question and your answer. At least one of us is confused.

I base friendliness (universally) on the mechanism of natural selection and claim in short "that is good what increases fitness". You can find more on my blog at http://jame5.com

You don't understand. I'm asking ELIEZER what he is thinking. His homepage says that he has some fresh ideas about AI that are not yet published, yet he continues to write about evolution, rehashing the same idea every day. That is what I said. I don't even know what question you're answering.

"Being a thousand shards of desire isn't always fun, but at least it's not boring."

I like that. I have a feeling Lord Gautama would have liked it too.

I will venture to say that Eliezer's habit (this isn't the first instance) of teasing out the same subject again and again from slightly different angles is highly illuminating for me, at least. (And, I suspect, for him as well... though that's conjecture).

I'm a bit slower than your average Overcoming Bias lurker, it would seem from the level of discourse here. Sometimes I think I barely grasp what ... (read more)

@Tiiba, trust me - I am quite certain that I do, but this is not the right forum - PM me if you want to continue off this blog.

"@Eliezer, you are slowly changing your point of view and are on a path to rethink old thoughts." I didn't notice any evidence of that. He said that he had greatly changed his view in the past, but that was before he started blogging here. What have you seen since then that makes you think that?

@TGGP: This forum really is not the right place to get into details. It would not be fair towards Eliezer and that I posted something at all is an embarrassing revelation in regards to my intellectual vanity. Mea culpa.

Consider the Laestadians (look them up in Wikipedia if you haven't heard of them). They tend to have lots of children; one TV program some years ago mentioned that families with 10 children are common among them.

Unless a lot of those children abandon (or at least modify) their parents' faith, the future belongs to them and similar groups.

Religion can be a powerful fitness maximizer.

@ Tiiba # 1: Without wishing to second-guess Eliezer, I'd suggest that his prolonged examination of the buggy, ad-hoc character of human intelligence may be intended to preface a discussion AI, its goals and methods. After all, the contrast with human intelligence could be illuminating.

That missing word: "of".

The blind idiot god's single monomaniacal goal splintered into a thousand shards of desire.

This would explain why our formalised moral systems are either hideously complicated, or fail to capture important parts of our morality... We just have far more urges, wants, and needs than we realise.

As many comments have suggested, now that evolution has produced creatures that can consciously seek goals, and also has instilled in some of these creatures, to some extent, the goal of bearing and raising children, all that evolution needs to do is to reinforce this desire, and in time it will manage to produce a conscious fitness maximizer.

Abandoning biology is not a way to avoid this result, since biology is not the problem, but reproduction and its historical consequences. Leaving behind biology could even speed up the process dramatically.

Maybe the a... (read more)

Just as an aside, fitness maximizers usually have to accept a finite population size in a finite biome with a finite carrying capacity. There's the possible goal of expanding into the galaxy and neighboring galaxies, but in the short run we have a finite carrying capacity.

And a fitness maximizer that is too successful has to accept it needs to preserve a lot of diversity in its gene pool or else face problems that would essentially reduce carrying capacity.

A conscious fitness maximizer at some point must realise that it survives by maintaining its numbers in a diverse population, rather than maximizing the frequency of its genes.

@ Unknown: Well, one reason why our point of view is more valid than theirs is that we exist and they don't.

In addition, it is probably worth stressing that inclusive fitness is not, strictly speaking, the goal of anything at all. Goals only make sense relative to intentions, values and so forth - the usual accoutrements of mentality. These are all things that we humans (and perhaps some other creatures) possess, but which evolution, and our genes, do not. No minds, you see. Despite appearances.

This said, there might be something to be said for engineerin... (read more)

@Stefan: I enjoyed your book and was fascinated by your FAI perspective, but your comments here could be read as overly self-promoting, which would be counterproductive. An evil, paranoid maniac might even imagine you write comments to maximize how many links to your blog you can cram onto a page! Maybe limiting the links to yourself might curb such insanity in your audience.

@Recovering irrationalist, good points, thank you - I just wanted to save time and space by linking to relevant stuff on my blog without repeating myself over and over. My apologies for overdoing it. I guess I feel like I'm talking to a wall or being deliberately ignored, due to the lack of feedback. I shall curb my enthusiasm and let things take their course. You know where to find me.

[anecdote] Is this surprising? I've always been able to tell whether or not I need proteins/carbohydrates/fat (usually acting accordingly)....

sorry, guys, this wisdom-of-the-body stuff hasn't held up that well. i've given the link below for a lengthy but thorough account of studies that were done on rats, for the two or fewer people here who might be interested. while there is some evidence for behavioral changes based on mineral deficiency, it's extremely complicated and the changes in the animal's behavior are not that "accurate" (in the sense ... (read more)

This would explain why our formalised moral systems are either hideously complicated, or fail to capture important parts of our morality... We just have far more urges, wants, and needs than we realise.

Congratulations to Stuart Armstrong on nailing my hidden subtext.

(Albeit even the hideously complicated moral systems still don't capture a fraction of our morality.)

@Tiiba: You seem to think I can just blurt out my AI ideas. I've tried that. It doesn't work.

Having watched other AIfolk "explaining" their ideas, I know very well how to convince som... (read more)

Brendon, you can't expect a learning system to quickly get an exact solution to a problem in N simultaneous equations. But when improvements result in a sense of well-being, they might tend to gradually zero in on solutions. So for nutrition you need sufficient energy and your body might have pre-programmed goals for repair and growth, and whatever helps meet those targets could provide that sense of well-being that announces something worked.

Simpler than having thousands of individual goals programmed in.

"Being a thousand shards of desire isn't always fun, but at least it's not boring."

I like that. I have a feeling Lord Gautama would have liked it too.

I always thought the exact opposite: that Lord Gautama had a profound experience that made him relatively indifferent to the thousand shards. Specifically, a full-blown ecstatic or mystical experience is a million times more pleasurable than any other experience the mystic has had or will have, which I always thought would make one less attached to ordinary pleasures and ordinary reinforcers. On... (read more)

J Thomas : "So for nutrition you need sufficient energy and your body might have pre-programmed goals for repair and growth, and whatever helps meet those targets could provide that sense of well-being that announces something worked."

this sort of system might work for thirst or even carbs and protein but would be pretty bad at things like getting you to eat balanced amounts of vitamins and minerals. for instance, your diet could be vitamin b12 poor for months or maybe longer before you would feel the pinch (your body stores the vitamin pretty we... (read more)

Some new info re: evolution you might want to consider before taking the gene view of evolution to its logical conclusions:

http://www.springerlink.com/content/qh67113u60887314/ "Although we agree that evolutionary theory is not undergoing a Kuhnian revolution, the incorporation of new data and ideas about hereditary variation, and about the role of development in generating it, is leading to a version of Darwinism that is very different from the gene-centred one that dominated evolutionary thinking in the second half of the twentieth century."

htt... (read more)

themusicgod (17y): Is not your second link dealt with by http://lesswrong.com/lw/iv/the_futility_of_emergence/ or am I misreading one of the two? It seems to leave the main causal mechanism abstract enough to prove anything.

That still doesn't explain why Eliezer has been using the expression "high-falutin'" so much. Is it from some recently read book, perhaps?

Brendon, I find your reasoning plausible. I don't know how true it is. I don't want to give myself pernicious anemia to test it, so I'll settle for saying it looks plausible.

If you have a vitamin deficiency, and you get a dose of the vitamin that makes you somewhat less deficient, will you feel better within a few hours? If so then it might be reinforced. On the other hand, one single experience of nerve poisoning a few hours after eating a particular new food can be enough to establish a lifelong distaste for that food.

Specifically, a full-blown ecstatic or mystical experience is a million times more pleasurable than any other experience the mystic has had or will have

This seems unlikely - it's far more probable that mystical experiences are highly satisfying rather than so intensely pleasurable.

Richard:

I suppose this counts as threadjacking, but this thread seems about played out, so I'll respond to your response to my off-topic aside.

I'm interested in what you say. I don't think it's necessarily off base. But my little cheeky comment was in reference to the Buddhist concept of anatta, or non-self. That is, Eliezer's insistence that there is no purposeful unifying force behind what we experience as "our" desires reminded me of an analogous teaching of the Buddha. Evolution can be seen as a unifying force, I suppose, since it is the comm... (read more)

Nit: surely you mean "220 BC," not "2200 BC".

I will take issue with your positing that the teachings on the end of suffering were added by later theocrats or rulers who wanted to broaden its appeal for the masses.

I stand corrected. Thank you for your thoughtful reply.

I find myself looking for ways to reconcile the two. Of course, in even admitting that, I'm busting myself! If I have my desired conclusion in mind as I sift through the evidence, I have already forgotten the central teachings of Overcoming Bias!

Hmm. I wonder whethe... (read more)

Humphries and Hollerith, your comments would be too long even if they were on-topic. However you can resubmit the comments to an Open Thread, after which they will be deleted here. Thank you.

If they're too long for this page, I suggest that they're too long for an Open Thread, too. I have copied Humphries' latest and my two comments to my web site and emailed Humphries with a notification of what I did (followed by an offer to delete his words from my site if that is his preference).

Deep Blue has many desires too. It knows that a knight is three times as desirable as a pawn - unless the pawn is well advanced. It knows about the value of the centre, and the importance of quiescence - and so on.

The important point to realise is that these desires all represent imperfections. They are not useful features - to be retained and deliberately implemented in future designs, but rather simple heuristics intended to deal with hardware and software limitations - and that in the future their preservation may well lead to mistakes, errors - and los... (read more)

The Mote in God's Eye by Niven and Pournelle depicts an intelligent species that stayed biological a little too long, slowly becoming truly enslaved by evolution, gradually turning into true fitness maximizers obsessed with outreproducing each other. But thankfully that's not what happened. Not here on Earth. At least not yet.

This is an interesting and important paragraph; and it explains some things about Eliezer's views. It's important enough to justify. But I don't see evidence for the idea that evolution gets more oppressive as time passes. I... (read more)

Will_Newsome (10y, 0 points): The question is indeed interesting, but the presumed answer is a powerful motivator for whom? Even if human evolution will lead to a super-amazing future of greatness, I doubt that future would be as super-amazing as a correctly implemented FAI; avoiding dystopian evolutionary existential catastrophes has never been listed as a main reason for wanting to build a friendly really powerful optimization process by anyone I've talked to. Most don't think humanity will even get that far. But I'm curious as to what your intuitions are regarding the probably counterfactual world where humans continue evolving for a long, long time.
PhilGoetz (10y, 1 point): Eliezer has a bias against evolution, and a bias against randomness, as exhibited in his series ending in Worse than Random [http://lesswrong.com/lw/vp/worse_than_random/], which is factually correct in the details, but misleading in the real world, as demonstrated by repeated times when his acolytes have used it to attack probabilistic search, probabilistic models, etc. My take all along has been that something about evolution has caused it to reliably make the world a more complicated, more interesting, and better place; and evolution, with randomness, is the only process that can be trusted to continue this. Any attempt to control and direct the course of change will just lock in the values of the controller. I see E's story about the moties as being one possible source of his bias against evolution, and hence against randomness.
NancyLebovitz (10y, 2 points): My assumption is that it isn't really possible to take charge of evolution. You might be able to have less undirected biological evolution, but only by having memetically-driven evolution. Things are still going to have random influences.
Jonii (10y, 4 points): Exactly. This should obviously be what we expect. Evolution is blindly optimizing for those that produce more offspring. Eventually, those specifically aiming for this would do it more optimally than those who didn't, meaning that eventually only those whose main goal is to mate would dominate. Evolution marches on. Why this has not happened before is related to the fact that there have not been human-level, scheming animals on this planet earlier. Animals that can't plan years ahead would benefit very little from having an urge towards fitness maximizing. Adaptations-to-be-executed are what need to be optimized, and what matter vastly more on that level.
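The selection argument in this comment can be made concrete with a toy replicator model (all parameters invented for illustration): give explicit fitness-maximizers even a small reproductive-rate edge, and they come to dominate the population after enough generations.

```python
# Toy replicator dynamics: two strategies, "adaptation-executer" and
# "fitness-maximizer", where the maximizer reproduces slightly faster.
# Growth rates and the starting share are invented for illustration.

def maximizer_share(generations: int, r_exec: float = 1.00,
                    r_max: float = 1.05, start_max: float = 0.01) -> float:
    """Fraction of the population that is fitness-maximizing after
    `generations` rounds of differential reproduction."""
    execs, maxs = 1.0 - start_max, start_max
    for _ in range(generations):
        execs *= r_exec          # executers reproduce at the base rate
        maxs *= r_max            # maximizers reproduce slightly faster
        total = execs + maxs
        execs, maxs = execs / total, maxs / total  # renormalize shares
    return maxs
```

Starting from a 1% share, a 5% per-generation edge carries the maximizers past 99% of the population within a couple of hundred generations, which is the comment's point: the outcome depends only on the edge existing, not on its size.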
Nisan (10y, -3 points): ?

I'm mostly in agreement with this, but feel I must point out that from the perspective of social primate evolution the "sex only when it will result in offspring" paradigm is a perversion invented (or at least reinvented) by modern humans. Sex is primarily a bonding mechanism, as evidenced by the fact that sexual desire is mediated as much by social circumstances as by other considerations. Of course, social standing is ultimately directed at improving genetic fitness, but sex has been repurposed by the primate social system so that, essentially,...

I agree with the thrust here, but it does seem that you're conflating two different distinctions.

More specifically: you contrast explicit cognitive representations with implicit genetic representations (1), and it's not always clear when you are talking about the distinction between implicit and explicit representations, and when you are talking about the difference between cognitive and genetic ones.

And it seems to matter: if I ask why my genetic representations aren't recapitulated as cognitive ones, the kind of answer you give here is a fine one, but if...

This is possibly the best creation myth I've ever read. Possibly because unlike other creation myths, this one is actually true.

You've found amazing poetry in this grand cycle of gene warfare. But now I must wonder: how self-contained are all these desires? Will we evolve some of them to extinction? It is very hard, and somewhat disconcerting, to think of what is human today as only a passing phase on an endless continuum. Yet to assume humanity would always remain as it is seems both unrealistic and unsatisfying - we want to see growth and novelty. So I guess I hope we will become more complex, more interesting, rather than be narrowed toward a less fragmented sense of purpose.

It just seems that evolution has failed to build a Friendly (to evolution) AI.

[anonymous] (6y, 0 points):

Why not become a pure reproductive consequentialist?

Reading these posts, I notice a preference for altruism and utilitarianism, and for rejecting some of the intuitions that natural selection gave us. Moreover, almost everyone working in evolutionary psychology goes to great lengths to avoid the naturalistic fallacy: not confusing what is with what ought to be (see Richard Dawkins, "The Selfish Gene", or Steven Pinker, "The Blank Slate").

Still, I am wondering: what is so "good" about altruism? Knowing that our preference for altruism ...