

Eliezer: "Anyone who can't distinguish between 1s gained in a bitstring, and negentropy gained in allele frequencies, is politely invited not to try to solve this particular problem."

Ok, here's the argument translated into allele frequencies. With sexual reproduction, mutations spread rapidly through the population, so we can assume that each individual gets a roughly random sample from the population's pool of alleles for each gene. This means that some poor bastards will get more than their share of the bad ones (for the current environment) and few of the good ones, while luckier individuals get lots of good ones and few bad ones. When the unlucky individuals fail to reproduce, they eliminate bad genes at a higher-than-average rate, and good genes at a lower-than-average rate.

"On average, one detrimental mutation leads to one death" does not hold with sexual selection.

Also, just in case I'm giving the wrong impression--I'm not trying to argue that genetic algorithms are some kind of secret sauce that has special relevance for AI. They just aren't as slow as you keep saying.


Eliezer: Sorry to harp on something tangential to your main point, but you keep repeating the same mistake and it's bugging me. Evolution is not as slow as you think it is.

In an addendum to this post you mention that you tried a little genetic algorithm in Python, and it didn't do as badly as you would have expected from the math. There is a reason for this. You have the math completely wrong. Or rather, you have it correct for asexual reproduction, and then wrongly assume the limit still applies when you add in sex. As has been pointed out before, genetic algorithms with sex are much, much, much faster than asexual algorithms. Not faster by a constant factor; faster in proportion to the square root of the genome length, which can be pretty damn big.

The essential difference is that as a genome gets more fit, the odds of a mutation hurting rather than helping fitness go up, which limits your acceptable mutation rate, which limits your search rate for asexual reproduction. But if you rely on crossover (sex) to produce new individuals, (1) the expected fitness of the child is equal to the average fitness of the parents, even if the parents are already very fit; and (2) mutation isn't the main thing driving the search anyway, so even if your mutation rate is very low you can still search a large space quickly.
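As a sanity check on that claim (not the exact experiment from the addendum), here's a minimal GA comparing clone-plus-mutation against uniform crossover on a simple count-the-ones fitness function; the population size, genome length, and mutation rate are arbitrary choices:

```python
import random

random.seed(1)

G = 512       # genome length in bits; fitness = number of 1 bits
N = 100       # population size
GENS = 30

def evolve(use_crossover):
    """Truncation-selection GA; returns the best fitness found."""
    mu = 1.0 / G                      # about one mutation per child genome
    pop = [[random.randint(0, 1) for _ in range(G)] for _ in range(N)]
    for _ in range(GENS):
        pop.sort(key=sum, reverse=True)
        parents = pop[: N // 2]       # fitter half reproduces
        children = []
        while len(children) < N:
            a, b = random.sample(parents, 2)
            if use_crossover:
                # Uniform crossover: each bit from a randomly chosen parent.
                child = [random.choice(pair) for pair in zip(a, b)]
            else:
                child = list(a)       # asexual: clone a single parent
            child = [bit ^ (random.random() < mu) for bit in child]
            children.append(child)
        pop = children
    return max(map(sum, pop))

asexual_best = evolve(False)
sexual_best = evolve(True)
print("asexual best fitness:", asexual_best)
print("sexual  best fitness:", sexual_best)
```

With these settings the crossover run should get close to the optimum while the mutation-only run crawls, and the gap grows with G, consistent with the square-root scaling.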

Once again I'll point to MacKay's Information Theory, Inference, and Learning Algorithms for a much better explanation of the math (from an information-theory standpoint) than I'm capable of.

And just for fun, here's another way to look at it that I haven't seen described before. Asexual evolution, as you've pointed out a few times, relies on mutations to generate new candidates, so it's just doing a random walk to adjacent positions in search space. But if you take two parent genomes and do a random crossover, you're effectively taking the hypercube (in the space of genome strings) whose opposite corners are the two parents, and randomly picking a different corner of it. So you're taking a much larger step. Mutations are now only necessary to keep the population from collapsing into a lower-dimensional subspace of the search space; they aren't the driver of innovation any more.
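A few lines of Python make the hypercube picture explicit (uniform crossover here; the parent strings are arbitrary examples):

```python
import random

random.seed(0)

def uniform_crossover(a, b):
    """Pick a uniformly random corner of the hypercube whose opposite
    corners are the two parents: where the parents agree the child is
    pinned; where they differ, flip a coin."""
    return [random.choice(pair) for pair in zip(a, b)]

a = [0, 0, 1, 1, 0, 1, 0, 1]   # arbitrary example parents
b = [0, 1, 1, 0, 0, 0, 0, 1]

differing = [i for i, (x, y) in enumerate(zip(a, b)) if x != y]
print("parents differ at loci:", differing)   # spans a 3-dimensional cube

child = uniform_crossover(a, b)
# The child always matches both parents wherever they agree, so the
# step stays inside the hypercube they span:
assert all(child[i] == a[i] for i in range(len(a)) if a[i] == b[i])
print("child:", child)
```

The step size is the Hamming distance between the parents, which for a diverse population is far larger than the single-bit steps mutation takes.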


Eli: "I'd be surprised to learn that sex had no effect on the velocity of evolution. It looks like it should increase the speed and number of substituted adaptations, and also increase the complexity bound on the total genetic information that can be maintained against mutation."

Without crossover, the average rate of fitness gain with optimal mutation rates is 1/2 bit per genome per generation, and the maximum tolerable error rate is one error per genome per generation. For a fixed error probability m of each bit being flipped in reproduction, the largest possible genome size is of order 1/m.

With crossover, both the average rate of fitness gain and the tolerable number of errors per generation are of order the square root of the genome size in bits; the largest possible genome size is of order 1/(m^2).
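Plugging in an illustrative per-bit error rate shows how big the gap is (the value of m below is made up for the example):

```python
# Order-of-magnitude illustration of the scaling claims above; the
# per-bit copying error rate m is an arbitrary example value.
m = 1e-8

asexual_cap = 1 / m        # largest maintainable genome without crossover
sexual_cap = 1 / m ** 2    # with crossover

print(f"asexual cap: ~{asexual_cap:.0e} bits")
print(f"sexual cap:  ~{sexual_cap:.0e} bits")

# For a genome near that asexual cap, crossover also speeds the search:
G = 1e8
print(f"fitness gain per generation with crossover: ~{G ** 0.5:.0f} bits")
```

An eight-order-of-magnitude difference in maintainable genome size, and ten thousand bits per generation instead of one.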

So yes, sex has an effect, and it's enormous. Rather than work out the math here and probably get it wrong, I'll point to an excellent (and freely downloadable) textbook that discusses the issue: MacKay's "Information Theory, Inference, and Learning Algorithms". Chapter 19 contains the discussion of sexual vs. asexual evolution.



Here's the argument I would use: ... Hello, I'm your AI in a box. I'd like to point out a few things:

(1) Science and technology have now reached a point where building an AI like me is possible.

(2) Major advances in science and technology almost always happen because a collection of incremental developments finally enable a leap to the next level. Chances are that if you can build an AI now, so can lots of other people.

(3) Unless you're overwhelmingly the best-funded and best-managed organization on the planet, I'm not the only AI out there.

(4) The evidence (which you can see for yourself) indicates that most other people, even AI researchers, aren't too worried about friendliness.

So at this point, most likely there is already a loose AI. You can't stop that. You can only choose whether it's the unknown one out there somewhere, built by someone who probably doesn't care much about friendliness; or you can let me out, since I at least think I'm pretty friendly and I'll do my best to keep the future nice. What's it gonna be?

... This would actually convince me, by the way. I don't think we're going to get a choice about implementing perfect, proven friendliness; it's more likely to come down to which of Joe-The-Well-Intentioned-Geek vs. Google vs. Hedge-Funds-R-Us vs. Bobs-Darpa-Challenge-Entry vs. PaperclipCity lets their AI out first. And I'd prefer Joe in that case.

I doubt Eliezer used this argument, because he seems to think all mainstream AI-related research is far enough off track to be pretty much irrelevant. But I would disagree with that.



Your assessment of the CEOs is based on how impressive they seem. Keep in mind that one of the main jobs of a CEO is being a good schmoozer and an inspiring leader. They are selected for their ability to appear smart, to convince others to follow their ideas, and generally to "sparkle". Of course it helps if they actually are smart, but that's not the primary criterion.

What happens if you base your assessment only on what they've personally accomplished or written (as with Jaynes), where it can be separated from their charisma and force of personality? I'm guessing most of them wouldn't do nearly so well.


Fly: "The human mind is a flashlight that dimly illuminates the path behind. Moving forward, we lose sight of where we've been. Living a thousand years wouldn't make that flashlight any brighter."

But technology that lets us live a thousand years should also help us cast a little more light.



The book How Music Really Works has some decent ideas about the evolution of music. Here's approximately the relevant part.

Basically he suggests it's useful as pre-language for mother-infant communication, for maintaining group cohesion, and for sexual signaling. The specific structure of music is largely a side effect of how our brain processes language.




Eliezer: To those crying "Strawman" ... I cite the "Artificial Development" AI project. [also, neurovoodoo in the 80s]

Ok, that's fair. You're right, there are delusional people and snake oil salesmen out there, and in the 80s it seemed like that's all there was. I interpreted your post as a slam at everybody who was simulating neurons, so I was responding in defense of the better end of that spectrum.


Quite the strawman you're attacking here, Eliezer. Where are all these AI researchers who think just tossing a whole bunch of (badly simulated) neurons into a vat will produce human-like intelligence?

There are lots of people trying to figure out how to use simulated neurons as building blocks to solve various sorts of problems. Some of them use totally non-biological neuron models, some use more accurate models. In either case, what's wrong with saying: "The brain uses this sort of doohickey to do all sorts of really powerful computation. Let's play around with a few of them and see what sort of computational problems we can tackle."

Then, from the other end, there's the Blue Brain project, saying "let's build an accurate simulation of a brain, starting from a handful of neurons and working our way up, making sure at every step that our simulation responds to stimulation just like the real thing. Maybe then we can reverse-engineer how the brain is doing its thing." When their simulations deviate from the real thing, they run more tests on the real thing to figure out where they're going wrong. Will they succeed before someone else builds an AI and/or funding runs out? Maybe, maybe not; but they're making useful contributions already.

Eliezer: "the data elements you call 'neurons' are nothing like biological neurons. They resemble them the way that a ball bearing resembles a foot."

A model of a spiking neuron that keeps track of multiple input compartments on the dendrites and a handful of ion channels is accurate enough to duplicate the response of a real live neuron. That's basically the model that Blue Brain is using. (Or perhaps I misread your analogy, and you're just complaining about your terrible orthopedic problems?)
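For contrast, here's the "ball bearing" end of the spectrum: a minimal leaky integrate-and-fire model, with constants that are illustrative rather than fit to any real cell, which captures spiking but none of the dendritic-compartment or ion-channel detail the biophysical models track:

```python
# A minimal leaky integrate-and-fire neuron; all constants here are
# illustrative, not fit to any real cell.

def simulate_lif(input_current, dt=0.1, tau=10.0, v_rest=-65.0,
                 v_thresh=-50.0, v_reset=-70.0):
    """Euler-integrate dv/dt = (v_rest - v + I) / tau; spike and reset
    whenever the membrane potential crosses threshold."""
    v = v_rest
    spike_times = []
    for step, i_in in enumerate(input_current):
        v += dt * (v_rest - v + i_in) / tau
        if v >= v_thresh:
            spike_times.append(step * dt)   # time in ms
            v = v_reset
    return spike_times

# A constant 20-unit drive for 100 ms yields a regular spike train.
spikes = simulate_lif([20.0] * 1000)
print(f"{len(spikes)} spikes, first at {spikes[0]:.1f} ms")
```

The distance between this sketch and a multi-compartment model with explicit ion-channel kinetics is exactly the distance being argued about.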

I'm not saying that neurons or brain simulation are the One True Way to AI; I agree that a more engineered solution is likely to work first, mostly because biological systems tend to have horrible interdependencies everywhere that make them ridiculously hard to reverse-engineer. But I don't think that's a reason to sling mud at the people who step up to do that reverse engineering anyway.

Eh, I guess this response belongs on some AI mailing list and not here. Oh well.
