PhilGoetz

PhilGoetz's Comments

Stupidity as a mental illness

It's only offensive if you still think of mental illness as shameful.

Stupidity as a mental illness

Me: We could be more successful at increasing general human intelligence if we looked at low intelligence as something that people didn't have to be ashamed of, and that could be remedied, much as we now try to look at depression and other mental illnesses as illnesses--conditions which can often be treated and which people don't need to be ashamed of.

You: YOU MONSTER! You want to call stupidity "mental illness", and mental illness is a bad and shameful thing!

Group selection update

That's technically true, but it doesn't help much. You're assuming a species that starts at fixation for non-SC. But how does one get to that point of fixation, starting from fixation for SC, which is more advantageous to the individual? That's the problem.

Group selection update

It's not that I no longer endorse it; it's that I replied to a deleted comment instead of to the identical not-deleted comment.

Group selection update
Group selection, as I've heard it explained before, is the idea that genes spread because their effects are for the good of the species. The whole point of evolution is that genes do well because of what they do for the survival of the gene. The effect isn't on the group, or on the individual, the species, or any other unit other than the unit that gets copied and inherited.

Group selection is group selection: selection of groups. That means the phenotype is group behavior, and the effect of selection is spread equally among the members of the group. If the effect is death, this eliminates an entire group at once--and the nearer a selfish gene approaches fixation, the more likely it is to trigger a group extinction. Consider what would happen if you ran Axelrod's experiments with group selection implemented, so that groups went extinct if the total payoff in the group fell below some threshold.

The key point is nonlinearity. If the group fitness function is a nonlinear function of the prevalence of a gene, then it dramatically changes fixation and extinction rates.
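
Here is a minimal sketch of the kind of simulation I mean, in Python, with made-up parameters and with plain always-cooperate / always-defect agents standing in for Axelrod's full strategy set. The extinction threshold is exactly the nonlinearity in the group fitness function: within each group, individual selection favors defectors, but any group whose mean payoff drops below the threshold dies and is recolonized from a surviving group, which can be enough to keep cooperation common in the metapopulation.

```python
import random

# Axelrod's Prisoner's Dilemma payoffs (T=5, R=3, P=1, S=0); row player's score.
PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

N_GROUPS, GROUP_SIZE, GENERATIONS = 30, 20, 200
MUTATION = 0.01   # chance that a child flips strategy
THRESHOLD = 2.5   # a group goes extinct if its mean payoff per pairing drops below this

def scores_in(group):
    """Round-robin PD inside one group; returns each member's total payoff."""
    s = [0.0] * len(group)
    for i in range(len(group)):
        for j in range(len(group)):
            if i != j:
                s[i] += PAYOFF[(group[i], group[j])]
    return s

def reproduce(group, scores):
    """Within-group (individual) selection: fitness-proportional reproduction plus mutation."""
    kids = random.choices(group, weights=[s + 1e-6 for s in scores], k=GROUP_SIZE)
    return [("D" if k == "C" else "C") if random.random() < MUTATION else k for k in kids]

groups = [["C"] * 18 + ["D"] * 2 for _ in range(N_GROUPS)]   # start mostly cooperative

for gen in range(GENERATIONS):
    survivors = []
    for g in groups:
        s = scores_in(g)
        mean_payoff = sum(s) / (GROUP_SIZE * (GROUP_SIZE - 1))   # per member, per opponent
        if mean_payoff >= THRESHOLD:                             # the group-selection step
            survivors.append(reproduce(g, s))
    if not survivors:
        print("every group went extinct")
        break
    # Extinct groups are recolonized by copies of randomly chosen surviving groups.
    groups = survivors + [random.choice(survivors)[:] for _ in range(N_GROUPS - len(survivors))]
else:
    coop = sum(g.count("C") for g in groups) / (N_GROUPS * GROUP_SIZE)
    print(f"cooperator frequency after {GENERATIONS} generations: {coop:.2f}")
```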

Well, maybe. If the plant has a typical set of recessive genes in its genome, self-fertilisation is a disaster. A few generations down the line, the self-fertilising plant will have plenty of genetic problems arising from those recessives, and will probably die out. This means that self-fertilisation is bad - a gene for self-fertilisation will only prosper in those cases where it's not fertilising itself. It will do worse.

No. Self-fertilization doesn't prevent cross-fertilization. The self-fertilizer has just as many offspring from cross-fertilization as the self-sterile plant, but in addition it has clones of itself. Many of these clones may die, but if even one of them survives, it's still a gain.
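
A toy bit of bookkeeping, with invented numbers, to make the point: even if inbreeding depression kills nearly all of the selfed offspring, the gene for self-fertilization still comes out ahead, because the outcrossed offspring are unaffected.

```python
# Invented numbers, just to illustrate the accounting.
outcrossed_offspring = 100      # both the self-fertile and the self-sterile plant get these
selfed_offspring = 50           # extra seed the self-fertilizer sets on itself
selfed_survival = 0.02          # suppose 98% of selfed seed dies of recessive defects

self_sterile_total = outcrossed_offspring
self_fertile_total = outcrossed_offspring + selfed_offspring * selfed_survival

print(self_sterile_total, self_fertile_total)   # 100 vs 101.0 -- still a net gain
```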

Group selection update

You're assuming that the benefits of an adaptation can only be linear in the fraction of group members with that adaptation. If the benefits are nonlinear, then they can't be modeled by individual selection, or by kin selection, or by the Haystack model, or by the Harpending & Rogers model, in all of which the total group benefit is a linear sum of the individual benefits.

For instance, the benefits of the Greek phalanx are tremendous if 100% of the Greek soldiers will hold the line, but negligible if only 99% of them do. We can guess--though I don't know whether it's been verified--that slime mold aggregative reproduction can be maintained against invasion only because a slime mold aggregation in which 100% of the single-celled organisms play "fairly" in deciding which of them get to produce germ cells survives, while an aggregation in which even one cell's genome insists on becoming the germ cell would die off in 2 generations. I think individual selection alone would predict that the population gets taken over by that anti-social behavior.
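
To make the phalanx point concrete, here is a toy comparison (numbers invented) of an additive group-benefit function with an all-or-nothing one. Under the additive model a lone defector always comes out ahead; under the threshold model, defecting forfeits the shared bonus the defector would otherwise have enjoyed.

```python
# Invented parameters: a group of N members, a cost c of cooperating, and a group
# benefit that is either an additive sum or an all-or-nothing "hold the line" bonus.
N, c = 100, 1.0
b = 2.0   # per-member benefit scaled by the fraction cooperating (linear case)
B = 3.0   # per-member benefit that exists only if every member cooperates (threshold case)

def linear_fitness(cooperates, k):
    """k cooperators in the group; the group benefit is a linear function of k."""
    return 1.0 + b * k / N - (c if cooperates else 0.0)

def threshold_fitness(cooperates, k):
    """Phalanx-style nonlinearity: the benefit vanishes unless k == N."""
    return 1.0 + (B if k == N else 0.0) - (c if cooperates else 0.0)

# A cooperator in an all-cooperator group vs. a lone defector in the same group:
print("linear:   ", linear_fitness(True, N), "vs", linear_fitness(False, N - 1))        # 2.0 vs 2.98
print("threshold:", threshold_fitness(True, N), "vs", threshold_fitness(False, N - 1))  # 3.0 vs 1.0
```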

How SIAI could publish in mainstream cognitive science journals

Thanks!

I have great difficulty finding any philosophy published after 1960 other than post-modern philosophy, probably because my starting point is literary theory, which is completely confined to--the words "dominated" and even "controlled" are too weak--Marxian post-modern identity politics, which views literature only as a political barometer and tool.

Pascal's Mugging: Tiny Probabilities of Vast Utilities

I think you're assuming that to give in to the mugging is the wrong answer in a one-shot game for a being that values all humans in existence equally, because it feels wrong to you, a being with a moral compass evolved in iterated multi-generational games.

Consider these possibilities, any one of which would create challenges for your reasoning:

1. Giving in is the right answer in a one-shot game, but the wrong answer in an iterated game. If you give in to the mugging, the outsider will keep mugging you and other rationalists until you're all broke, leaving the universe's future in the hands of "Marxians" and post-modernists.

2. Giving in is the right answer for a rational AI God, but evolved beings (under the Darwinian definition of "evolved") can't value all members of their species equally. They must value kin more than strangers. You would need a theory to explain why any being that evolved due to resource competition wouldn't consider killing a large number of very distantly-related members of its species to be a good thing.

3. You should interpret the conflict between your intuition, and your desire for a rational God, not as showing that you're reasoning badly because you're evolved, but that you're reasoning badly by desiring a rational God bound by a static utility function. This is complicated, so I'm gonna need more than one paragraph:

Intuitively, my argument boils down to applying the logic behind free markets, freedom of speech, and especially evolution, to the question of how to construct God's utility function. This will be vague, but I think you can fill in the blanks.

Free-market economic theory developed only after millennia during which everyone believed that top-down control was the best way of allocating resources. Freedom of speech developed only after millennia during which everyone believed that it was rational for everyone to try to suppress any speech they disagreed with. Political liberalism developed only after millennia during which everybody believed that the best way to reform society was to figure out what the best society would be like, then force that on everyone. Evolution was conceived of--well, originally about 2500 years ago, probably by Democritus, but it became popular only after millennia during which everyone believed that life could be created only by design.

All of these developments came from empiricists. Empiricism is one of the two opposing philosophical traditions of Western thought. It originated, as far as we know, with Democritus (about whom Plato reportedly said that he wished all his works to be burned--which they eventually were). It went through the Skeptics, the Stoics, Lucretius, nominalism, the use of numeric measurements (re-introduced to the West circa 1300), the Renaissance and Enlightenment, and eventually (with the addition of evolution, probability, statistics, and operationalized terms) created modern science.

A key principle of empiricism, on which John Stuart Mill explicitly based his defense of free speech, is that we can never be certain. If you read about the skeptics and stoics today, you'll read that they "believed nothing", but that was because, to their opponents, "believe" meant "know something with 100% certainty".

(The most-famous skeptic, Sextus Empiricus, was called "Empiricus" because he was of the empirical school of medicine, which taught learning from experience. Its opponent was the rational school of medicine, which used logic to interpret the dictums of the ancient authorities.)

The opposing philosophical tradition, founded by Plato, is rationalism. "Rational" does not mean "good thinking". It has a very specific meaning, and it is not a good way of thinking. It means reasoning about the physical world the same way Euclid constructed geometric proofs. No measurements, no irrational numbers, no observation of the world, no operationalized nominalist definitions, no calculus or differential equations, no testing of hypotheses--just armchair a priori logic about universal categories, based on a set of unquestionable axioms, done in your favorite human language. Rationalism is the opposite of science, which is empirical. The pretense that "rational" means "right reasoning" is the greatest lie foisted on humanity by philosophers.

Dualist rationalism is inherently religious, as it relies on some concept of "spirit", such as Plato's Forms, Augustine's God, Hegel's World Spirit, or an almighty programmer converting sense data into LISP symbols, to connect the inexact, ambiguous, changeable things of this world to the precise, unambiguous, unchanging, and usually unquantified terms in its logic.

(Monist rationalists, like Buddha, Parmenides, and post-modernists, believe sense data can't be divided unambiguously into categories, and thus we may not use categories. Modern empiricists categorize sense data using statistics.)

Rationalists support strict, rigid, top-down planning and control. This includes their opposition to free markets, free speech, gradual reform, and optimization and evolution in general. This is because rationalists believe they can prove things about the real world, and hence their conclusions are reliable, and they don't need to mess around with slow, gradual improvements or with testing. (Of course each rationalist believes that every other rationalist was wrong, and should probably be burned at the stake.)

They oppose all randomness and disorder, because it makes strict top-down control difficult, and threatens to introduce change, which can only be bad once you've found the truth.

They have to classify every physical thing in the world into a discrete, structureless, atomic category, for use in their logic. That has led inevitably to theories which require all humans to ultimately have, at reflective equilibrium, the same values--as Plato, Augustine, Marx, and CEV all do.

You have, I think, picked up some of these bad inclinations from rationalism. When you say you want to find the "right" set of values (via CEV) and encode them into an AI God, that's exactly like the rationalists who spent their lives trying to find the "right" way to live, and then suppress all other thoughts and enforce that "right way" on everyone, for all time. Whereas an empiricist would never claim to have found final truth, and would always leave room for new understandings and new developments.

Your objection to randomness is also typically rationalist. Randomness enables you to sample without bias. A rationalist believes he can achieve complete lack of bias; an empiricist believes that neither complete lack of bias nor complete randomness can be achieved, but that for a given amount of effort, you might achieve lower bias by working on your random number generator and using it to sample, than by hacking away at your biases.

So I don't think we should build an FAI God who has a static set of values. We should build, if anything, an AI referee, who tries only to keep conditions in the universe that will enable evolution to keep on producing behaviors, concepts, and creatures of greater and greater complexity. Randomness must not be eliminated, for without randomness we can have no true exploration, and must be ruled forever by the beliefs and biases of the past.

Rescuing the Extropy Magazine archives

I scanned in Extropy 1, 3, 4, 5, 7, 16, and 17, which leaves only #2 missing. How can I send these to you? Contact me at [my user name] at gmail.
