PhilGoetz

PhilGoetz's Comments

Group selection update

That's technically true, but it doesn't help much. You're assuming one starts with fixation of non-SC (non-self-compatibility) in a species. But how does one get to that point of fixation, starting from fixation of SC, which is more advantageous to the individual? That's the problem.

Group selection update

It's not that I no longer endorse it; it's that I replied to a deleted comment instead of to the identical not-deleted comment.

Group selection update
Group selection, as I've heard it explained before, is the idea that genes spread because their effects are for the good of the species. The whole point of evolution is that genes do well because of what they do for the survival of the gene. The effect isn't on the group, the individual, the species, or any other unit other than the one that gets copied and inherited.

Group selection is group selection: selection of groups. That means the phenotype is group behavior, and the effect of selection is spread equally among members of the group. If the effect is death, this eliminates an entire group at once--and the nearer a selfish gene approaches fixation, the more likely it is to trigger a group extinction. Consider what would happen if you ran Axelrod's experiments with group selection implemented, so that groups went extinct if the total payoff in the group fell below some threshold.

The key point is nonlinearity. If the group fitness function is a nonlinear function of the prevalence of a gene, then it dramatically changes fixation and extinction rates.
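Here is a minimal sketch (my own construction, not from the comment) of the kind of run described above: within each group, reproduction is proportional to individual payoff, so defectors always win locally, but a nonlinear rule wipes out any group whose cooperator fraction drops below a threshold. All parameter values and payoff functions are illustrative assumptions.

```python
import random

N_GROUPS, GROUP_SIZE, GENERATIONS = 50, 20, 200
BENEFIT, COST = 2.0, 1.0            # shared benefit of cooperation, private cost to the cooperator
EXTINCTION_THRESHOLD = 0.5          # groups whose cooperator fraction falls below this die out

def next_generation(groups, group_selection):
    new_groups = []
    for group in groups:
        coop_frac = sum(group) / len(group)
        # Individual payoff: everyone shares the benefit, only cooperators pay the cost,
        # so within any single group the defectors always out-reproduce the cooperators.
        weights = [1.0 + BENEFIT * coop_frac - (COST if member else 0.0) for member in group]
        new_groups.append(random.choices(group, weights=weights, k=len(group)))
    if group_selection:
        # Nonlinear group fitness: a group below the threshold goes extinct and its
        # site is recolonized by a copy of a randomly chosen surviving group.
        survivors = [g for g in new_groups if sum(g) / len(g) >= EXTINCTION_THRESHOLD]
        if survivors:
            new_groups = [g if sum(g) / len(g) >= EXTINCTION_THRESHOLD
                          else list(random.choice(survivors)) for g in new_groups]
    return new_groups

def cooperator_fraction(group_selection):
    # Start every group at 90% cooperators (True = cooperator, False = defector).
    groups = [[random.random() < 0.9 for _ in range(GROUP_SIZE)] for _ in range(N_GROUPS)]
    for _ in range(GENERATIONS):
        groups = next_generation(groups, group_selection)
    return sum(sum(g) for g in groups) / (N_GROUPS * GROUP_SIZE)

random.seed(0)
print("individual selection only:", cooperator_fraction(group_selection=False))
print("with group extinction:    ", cooperator_fraction(group_selection=True))
```

Under individual selection alone, the defectors drift toward fixation; with the extinction rule, groups that approach fixation of the selfish gene are removed, which is the nonlinearity doing the work.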

Well, maybe. If the plant has a typical set of recessive genes in its genome, self-fertilisation is a disaster. A few generations down the line, the self-fertilising plant will have plenty of genetic problems arising from those recessive genes, and will probably die out. This means that self-fertilisation is bad - a gene for self-fertilisation will only prosper in those cases where it's not fertilising itself. It will do worse.

No. Self-fertilisation doesn't prevent cross-fertilisation. The self-fertilizer has just as many offspring from cross-fertilization as the self-sterile plant, but it has in addition clones of itself. Many of these clones may die, but if just one of them survives, it's still a gain.


Group selection update

You're assuming that the benefits of an adaptation can only be linear in the fraction of group members with that adaptation. If the benefits are nonlinear, then they can't be modeled by individual selection, or by kin selection, or by the Haystack model, or by the Harpending & Rogers model, in all of which the total group benefit is a linear sum of the individual benefits.

For instance, the benefits of the Greek phalanx are tremendous if 100% of Greek soldiers will hold the line, but negligible if only 99% of them do. We can guess--though I don't know if it's been verified--that slime mold aggregative reproduction can be maintained against invasion only because a slime mold aggregation in which 100% of the single-cell organisms play "fairly" in deciding which of them get to produce germ cells survives, while an aggregation in which just one cell's genome insists on becoming the germ cell dies off within two generations. I think individual selection would predict the population would be taken over by that anti-social behavior.
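To make the phalanx point concrete, here is a tiny illustration (mine, with made-up numbers): a linear-sum model barely notices one defector among a hundred soldiers, while a threshold model loses the entire benefit.

```python
def linear_benefit(coop_frac, per_capita=10.0):
    # Linear-sum models: group benefit proportional to the fraction cooperating.
    return per_capita * coop_frac

def phalanx_benefit(coop_frac, payoff=10.0):
    # Threshold model: the line holds only if everyone holds it.
    return payoff if coop_frac >= 1.0 else 0.0

for model in (linear_benefit, phalanx_benefit):
    print(f"{model.__name__:16s} 100% cooperate: {model(1.0):5.2f}   "
          f"99% cooperate: {model(0.99):5.2f}")
```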

How SIAI could publish in mainstream cognitive science journals

Thanks!

I have great difficulty finding any philosophy published after 1960 other than post-modern philosophy, probably because my starting point is literary theory, which is completely confined to--the words "dominated" and even "controlled" are too weak--Marxian post-modern identity politics, which views literature only as a political barometer and tool.

Pascal's Mugging: Tiny Probabilities of Vast Utilities

I think you're assuming that to give in to the mugging is the wrong answer in a one-shot game for a being that values all humans in existence equally, because it feels wrong to you, a being with a moral compass evolved in iterated multi-generational games.

Consider these possibilities, any one of which would create challenges for your reasoning:

1. Giving in is the right answer in a one-shot game, but the wrong answer in an iterated game. If you give in to the mugging, the outsider will keep mugging you and other rationalists until you're all broke, leaving the universe's future in the hands of "Marxians" and post-modernists.

2. Giving in is the right answer for a rational AI God, but evolved beings (under the Darwinian definition of "evolved") can't value all members of their species equally. They must value kin more than strangers. You would need a theory to explain why any being that evolved due to resource competition wouldn't consider killing a large number of very distantly-related members of its species to be a good thing.

3. You should interpret the conflict between your intuition and your desire for a rational God not as showing that you're reasoning badly because you're evolved, but as showing that you're reasoning badly by desiring a rational God bound by a static utility function. This is complicated, so I'm gonna need more than one paragraph:

Intuitively, my argument boils down to applying the logic behind free markets, freedom of speech, and especially evolution, to the question of how to construct God's utility function. This will be vague, but I think you can fill in the blanks.

Free-market economic theory developed only after millennia during which everyone believed that top-down control was the best way of allocating resources. Freedom of speech developed only after millennia during which everyone believed that it was rational for everyone to try to suppress any speech they disagreed with. Political liberalism developed only after millennia during which everybody believed that the best way to reform society was to figure out what the best society would be like, then force that on everyone. Evolution was conceived of--well, originally about 2500 years ago, probably by Democritus--but it became popular only after millennia during which everyone believed that life could be created only by design.

All of these developments came from empiricists. Empiricism is one of the two opposing philosophical traditions of Western thought. It originated, as far as we know, with Democritus (about whom Plato reportedly said that he wished all his works to be burned--which they eventually were). It went through the Skeptics, the Stoics, Lucretius, nominalism, the use of numeric measurements (re-introduced to the West circa 1300), the Renaissance and Enlightenment, and eventually (with the addition of evolution, probability, statistics, and operationalized terms) created modern science.

A key principle of empiricism, on which John Stuart Mill explicitly based his defense of free speech, is that we can never be certain. If you read about the skeptics and stoics today, you'll read that they "believed nothing", but that was because, to their opponents, "believe" meant "know something with 100% certainty".

(The most-famous skeptic, Sextus Empiricus, was called "Empiricus" because he was of the empirical school of medicine, which taught learning from experience. Its opponent was the rational school of medicine, which used logic to interpret the dictums of the ancient authorities.)

The opposing philosophical tradition, founded by Plato, is rationalism. "Rational" does not mean "good thinking". It has a very specific meaning, and it is not a good way of thinking. It means reasoning about the physical world the same way Euclid constructed geometric proofs: no measurements, no irrational numbers, no observation of the world, no operationalized nominalist definitions, no calculus or differential equations, no testing of hypotheses--just armchair a priori logic about universal categories, based on a set of unquestionable axioms, done in your favorite human language. Rationalism is the opposite of science, which is empirical. The pretense that "rational" means "right reasoning" is the greatest lie foisted on humanity by philosophers.

Dualist rationalism is inherently religious, as it relies on some concept of "spirit", such as Plato's Forms, Augustine's God, Hegel's World Spirit, or an almighty programmer converting sense data into LISP symbols, to connect the inexact, ambiguous, changeable things of this world to the precise, unambiguous, unchanging, and usually unquantified terms in its logic.

(Monist rationalists, like Buddha, Parmenides, and post-modernists, believe sense data can't be divided unambiguously into categories, and thus we may not use categories. Modern empiricists categorize sense data using statistics.)

Rationalists support strict, rigid, top-down planning and control. This includes their opposition to free markets, free speech, gradual reform, and optimization and evolution in general. This is because rationalists believe they can prove things about the real world, and hence their conclusions are reliable, and they don't need to mess around with slow, gradual improvements or with testing. (Of course each rationalist believes that every other rationalist was wrong, and should probably be burned at the stake.)

They oppose all randomness and disorder, because it makes strict top-down control difficult, and threatens to introduce change, which can only be bad once you've found the truth.

They have to classify every physical thing in the world into a discrete, structureless, atomic category, for use in their logic. That has led inevitably to theories which require all humans to ultimately have, at reflective equilibrium, the same values--as Plato, Augustine, Marx, and CEV all do.

You have, I think, picked up some of these bad inclinations from rationalism. When you say you want to find the "right" set of values (via CEV) and encode them into an AI God, that's exactly like the rationalists who spent their lives trying to find the "right" way to live, and then suppress all other thoughts and enforce that "right way" on everyone, for all time. Whereas an empiricist would never claim to have found final truth, and would always leave room for new understandings and new developments.

Your objection to randomness is also typically rationalist. Randomness enables you to sample without bias. A rationalist believes he can achieve complete lack of bias; an empiricist believes that neither complete lack of bias nor complete randomness can be achieved, but that for a given amount of effort, you might achieve lower bias by working on your random number generator and using it to sample, than by hacking away at your biases.

So I don't think we should build an FAI God who has a static set of values. We should build, if anything, an AI referee, who tries only to keep conditions in the universe that will enable evolution to keep on producing behaviors, concepts, and creatures of greater and greater complexity. Randomness must not be eliminated, for without randomness we can have no true exploration, and must be ruled forever by the beliefs and biases of the past.

Rescuing the Extropy Magazine archives

I scanned in Extropy 1, 3, 4, 5, 7, 16, and 17, which leaves only #2 missing. How can I send these to you? Contact me at [my user name] at gmail.

Why Bayesians should two-box in a one-shot

I just now read that one post. It isn't clear how you think it's relevant. I'm guessing you think that it implies that positing free will is invalid.

You don't have to believe in free will to incorporate it into a model of how humans act. We're all nominalists here; we don't believe that the concepts in our theories actually exist somewhere in Form-space.

When someone asks the question, "Should you one-box?", they're using a model which uses the concept of free will. You can't object to that by saying "You don't really have free will." You can object that it is the wrong model to use for this problem, but then you have to spell out why, and what model you want to use instead, and what question you actually want to ask, since it can't be that one.

People in the LW community don't usually do that. I see sloppy statements claiming that humans "should" one-box, based on a presumption that they have no free will. That's making a claim within a paradigm while rejecting the paradigm. It makes no sense.

Consider what Eliezer says about coin flips:

We've previously discussed how probability is in the mind. If you are uncertain about whether a classical coin has landed heads or tails, that is a fact about your state of mind, not a property of the coin. The coin itself is either heads or tails. But people forget this, and think that coin.probability == 0.5, which is the Mind Projection Fallacy: treating properties of the mind as if they were properties of the external world.

The mind projection fallacy is treating the word "probability" not in a nominalist way, but in a philosophically realist way, as if probabilities were things existing in the world. Probabilities are subjective. You don't project them onto the external world. That doesn't make "coin.probability == 0.5" a "false" statement. It correctly specifies the distribution of possibilities given the information available within the mind making the probability assessment. I think that is what Eliezer is trying to say there.
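A small sketch (mine, not Eliezer's) of the "probability is in the mind" point: the coin is already heads or tails, yet observers with different information correctly assign different probabilities to the same flips, and both are calibrated against the actual frequencies.

```python
import random

random.seed(1)
flips = [random.random() < 0.5 for _ in range(100_000)]  # True = heads; outcomes already determined

# Observer A peeked and assigns probability 1.0 or 0.0 to each flip.
# Observer B didn't peek and assigns probability 0.5 to every flip.
prob_a = [1.0 if f else 0.0 for f in flips]
prob_b = [0.5 for _ in flips]

for name, probs in (("peeked", prob_a), ("didn't peek", prob_b)):
    # Both observers' average assigned P(heads) matches the actual frequency of heads,
    # even though their probability assignments differ flip by flip.
    mean_assigned = sum(probs) / len(probs)
    actual_heads = sum(flips) / len(flips)
    print(f"{name:12s} mean assigned P(heads) = {mean_assigned:.3f}, actual frequency = {actual_heads:.3f}")
```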

"Free will" is a useful theoretical construct in a similar way. It may not be a thing in the world, but it is a model for talking about how we make decisions. We can only model our own brains; you can't fully simulate your own brain within your own brain; you can't demand that we use the territory as our map.

37 Ways That Words Can Be Wrong

Yep, nice list. One I didn't see: defining a word in a way that is less useful (that conveys less information) and rejecting a definition that is more useful (that conveys more information). Always choose the definition that conveys more information; eliminate words that convey zero information. It's common for people to define words so that they convey zero information. "Buddha nature" is an example: if everything has the Buddha nature, nothing empirical can be said about what the phrase means, and it conveys no information.

Along similar lines, always define words so that no other word conveys too much mutual information about them. For instance, many people have argued with me that I should use the word "totalitarian" to mean "the fascist nations of the 20th century". Well, we already have a word for that, which is "fascist", so to define "totalitarian" as a synonym makes it a useless word.
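Here is an illustrative sketch (my own, with made-up toy data) of that redundancy point: if "totalitarian" is defined as a synonym of "fascist", the two labels carry maximal mutual information about each other, so the second word adds nothing; a distinct definition overlaps less and earns its keep.

```python
from collections import Counter
from math import log2

def mutual_information(xs, ys):
    # Standard discrete mutual information in bits: sum p(x,y) * log2(p(x,y) / (p(x)p(y))).
    n = len(xs)
    px, py, pxy = Counter(xs), Counter(ys), Counter(zip(xs, ys))
    return sum((c / n) * log2((c / n) / ((px[x] / n) * (py[y] / n)))
               for (x, y), c in pxy.items())

# Ten made-up "regimes" labeled under two competing definitions of "totalitarian".
fascist      = [1, 1, 0, 0, 1, 0, 0, 1, 0, 0]
synonym_def  = list(fascist)                    # "totalitarian" = "fascist": fully redundant
distinct_def = [1, 1, 1, 0, 1, 0, 1, 1, 0, 0]   # a broader definition: overlaps but differs

print("MI(fascist, totalitarian-as-synonym):", round(mutual_information(fascist, synonym_def), 3))
print("MI(fascist, totalitarian-distinct):  ", round(mutual_information(fascist, distinct_def), 3))
```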

The word "fascist" raises the question of when to use extensional vs. intensional definitions. It's conventionally defined extensionally, to mean the Axis powers in World War 2. This is not a useful definition, as we already have a label for that. Worse, people define it extensionally but pretend they've defined it intensionally. They call people today "fascist", conveying connotations in a way that can't be easily disputed, because there is no intensional definition to evaluate the claim.

Sometimes you want to switch back and forth between extensional and intensional definitions. In art history, we have a term for each period or "movement", like "neo-classical" and "Romantic". The exemplars of the category are defined both intensionally and extensionally, as those artworks having certain properties and produced in certain geographic locations during a certain time period. It is appropriate to use the intensional definition alone if describing a contemporary work of art (you can call it "Romantic" if it looks Romantic), but inappropriate to use examples that fit the intension but not the extension as exemplars, or to deduce things about the category from them. This keeps the categories stable.

A little ways back I talked about defining the phrase "Buddha nature". Phrases also have definitions--words are not atoms of meaning. Analyzing a phrase as if our theories of grammar worked, ignoring knowledge about idioms, is an error rationalists sometimes commit.

Pretending words don't have connotations is another error rationalists commit regularly--often in sneaky ways, deliberately using the connotations, while pretending they're being objective. Marxist literary criticism, for instance, loads a lot into the word "bourgeois".

Another category missing here is gostoks and doshes. This is when a word's connotations and tribal affiliation-signalling displace its semantic content entirely, and no one notices it has no meaning. Extremely common in Marxism and in "theory"; "capitalism" and "bourgeois" being the most-common examples. "Bourgeoisie" originally meant people like Rockefeller and the Borges, but as soon as artists began using the word, they used it to mean "people who don't like my scribbles," and now it has no meaning at all, but demonic connotations. "Capitalism" has no meaning that can single out post-feudal societies in the way Marxists pretend it does; any definition of it that I've seen includes things that Marxists don't want it to, like the Soviet Union, absolute monarchies, or even hunter-gatherer tribes. It should be called simply "free markets", which is what they really object to, and which is much more accurate at identifying the economic systems that they oppose; but they don't want to admit that the essence of their ideology is opposition to freedom.

Avoid words with connotations that you haven't justified. Don't say "cheap" if you mean "inexpensive" or "shoddy". Especially avoid words which have a synonym with the opposite connotation: "frugal" and "miserly". Be aware of your etymological payloads: "awesome" and "awful" (full of awe), "incredible" (not credible), "wonderful" (thought-provoking).

Another category is when 2 subcultures have different sets of definitions for the same words, and don't realize it. For instance, in the humanities, "rational" literally means ratio-based reasoning, which rejects the use of real numbers, continuous equations, empirical measurements, or continuous changes over time. This is the basis of the Romantic/Modernist hatred of "science" (by which they mean Aristotelian rationality), and of many post-modern arguments that rationality doesn't work. Many people in the humanities are genuinely unaware that science is different than it was 2400 years ago, and most were 100% ignorant of science until perhaps the mid-20th century. A "classical education" excludes all empiricism.

Another problem is meaning drift. When you use writings from different centuries, you need to be aware of how the meanings of words and phrases have changed over time. For instance, the official academic line nowadays is that alchemy and astrology are legitimate sciences; this is justified in part by using the word "science" as if it meant the same as the Latin "scientia".

A problem in translation is decollapsing definitions. Medieval Latin conflated some important concepts because their neo-Platonist metaphysics said that all good things sort of went together. So for instance they had a single word, "pulchrum", which meant "beautiful", "sexy", "appropriate to its purpose", "good", and "noble". Translators will translate that into English based on the context, but that's not conveying the original mindset. This comes up most frequently when ancient writers made puns, like Plato's puns in the Crito, or "Jesus'" (Greek) puns in the opening chapters of John, which are destroyed in translation, leaving the reader with a false impression of the speaker's intent.

I disagree that saying "X is Y by definition" is usually wrong, but I should probably leave my comment on that post instead of here.
