PhilGoetz

Comments

Where do (did?) stable, cooperative institutions come from?

"Cynicism is a self-fulfilling prophecy; believing that an institution is bad makes the people within it stop trying, and the good people stop going there."

I think this is a key observation. Western academia has grown continually more cynical since the advent of Marxism, which assumes an almost absolute cynicism as a point of dogma: all actions are political actions motivated by class, except those of bourgeois Marxists who for mysterious reasons advocate the interests of the proletariat.

This cynicism became even worse with Foucault, who taught people to see everything as nothing but power relations.  Western academics today are such knee-jerk cynics that they can't conceive of loyalty to any organization other than Marxism or the Social Justice movement as being anything but exploitation of the one being loyal.

Pride is the opposite of cynicism, and is one of the key feelings that makes people take brave, altruistic actions.  Yet today we've made pride a luxury of the oppressed.  Only groups perceived as oppressed are allowed to have pride in group memberships.  If you said you were proud of being American, or of being manly, you'd get deplatformed, and possibly fired.

The defamation of pride in mainstream groups is thus destroying our society's ability to create or maintain mainstream institutions.  In my own cynicism, I think someone deliberately intended this.  This defamation began with Marxism, and is now supported by the social justice movement, both of which are Hegelian revolutionary movements which believe that the first step toward making civilization better is to destroy it, or at least destabilize it enough to stage a coup or revolution.  This is the "clean sweep" spoken of so often by revolutionaries since the French Revolution.

Since their primary goal is to destroy civilization, it makes perfect sense that they begin by convincing people that taking pride in any mainstream identity or group membership is evil, as this will be sufficient to destroy all cooperative social institutions, and hence civilization.

The Solomonoff Prior is Malign

"At its core, this is the main argument why the Solomonoff prior is malign: a lot of the programs will contain agents with preferences, these agents will seek to influence the Solomonoff prior, and they will be able to do so effectively."

First, this is irrelevant to most applications of the Solomonoff prior.  If I'm using it to check the randomness of my random number generator, I'm going to be looking at 64-bit strings, and probably very few intelligent-life-producing universe-simulators output just 64 bits, and it's hard to imagine how an alien in a simulated universe would want to bias my RNG anyway.

The S. prior is a general-purpose prior which we can apply to any problem.  The output string has no meaning except in a particular application and representation, so it seems senseless to try to influence the prior for a string when you don't know how that string will be interpreted.

Can you give an instance of an application of the S. prior in which, if everything you wrote were correct, it would matter?

Second, it isn't clear that this is a bug rather than a feature.  Say I'm developing a program to compress photos.  I'd like to be able to ask "what are the odds of seeing this image, ever, in any universe?"  That would probably compress images of plants and animals better than other priors, because in lots of universes life will arise and evolve, and features like radial symmetry, bilateral symmetry, leaves, legs, etc., will arise in many universes.  This biasing of priors by evolution doesn't seem to me any different from biasing of priors by intelligent agents; evolution is smarter than any agent we know.  And I'd like to get biasing from intelligent agents, too; then my photo-compressor might compress images of wheels and rectilinear buildings better.

Also in the category of "it's a feature, not a bug" is that, if you want your values to be right, and there's a way of learning the values of agents in many possible universes, you ought to try to figure out what their values are, and update towards them.  This argument implies that you can get that for free by using Solomonoff priors.

(If you don't think your values can be "right", but instead you just believe that your values morally oblige you to want other people to have those values, you're not following your values, you're following your theory about your values, and probably read too much LessWrong for your own good.)

Third, what do you mean by "the output" of a program that simulates a universe? How are we even supposed to notice the infinitesimal fraction of that universe's output which the aliens are influencing to subvert us?  Take your example of Life--is the output a raster scan of the 2D bit array left when the universe goes static?  In that case, agents have little control over the terminal state of their universe (and also, in the case of Life, the string will be either almost entirely zeroes, or almost entirely 1s, and those both already have huge Solomonoff priors).  Or is it the concatenation of all of the states it goes through, from start to finish?  In that case, by the time intelligent agents evolve, their universe will have already produced more bits than our universe can ever read.
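
To make the two readings concrete, here's a toy sketch (my own illustration, with invented details, not anything from the post): a tiny Game-of-Life "universe" whose output is either the raster scan of its final grid or the concatenation of every intermediate grid.  Even in this toy, the second reading grows without bound for as long as the universe keeps running.

```python
# A toy sketch of the two readings of "the output" discussed above; the setup is mine,
# purely illustrative. The "universe" is a 3x3 Game of Life grid.

def step(grid):
    """One Game of Life step on a fixed-size grid (cells outside the edge count as dead)."""
    h, w = len(grid), len(grid[0])
    new = [[0] * w for _ in range(h)]
    for r in range(h):
        for c in range(w):
            n = sum(grid[r + dr][c + dc]
                    for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                    if (dr or dc) and 0 <= r + dr < h and 0 <= c + dc < w)
            new[r][c] = 1 if n == 3 or (grid[r][c] == 1 and n == 2) else 0
    return new

def raster(grid):
    """Read a grid out as a flat bit string."""
    return "".join(str(cell) for row in grid for cell in row)

grid = [[0, 1, 0],
        [0, 1, 0],
        [0, 1, 0]]                      # a blinker
history = [grid]
for _ in range(4):
    history.append(step(history[-1]))

final_state_output = raster(history[-1])            # reading 1: only the terminal state
all_states_output = "".join(map(raster, history))   # reading 2: every state, concatenated
print(final_state_output)    # 9 bits
print(all_states_output)     # 45 bits, and growing with every step the universe runs
```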

Are you imagining that bits are never output unless the accidentally-simulated aliens choose to output a bit?  I can't imagine any way that could happen, at least not if the universe is specified with a short instruction string.

This brings us to the fourth problem:  It makes little sense to me to worry about averaging in outputs from even mere planetary-scale simulations if your computer is only the size of a planet, because it won't have enough memory to read in a single output string from most such simulations.

Fifth, you can weight each program's output in proportion to 2^-T, where T is the number of steps it takes the TM to terminate.  You've got to do something like that anyway, because you can't run TMs to completion one after another; you've got to take a large random sample of TMs and iteratively run each one a step at a time.  Problem solved.
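
Here is a minimal sketch of that scheme, using toy stand-ins for Turing machines (the generator-based "programs" and every name below are mine, purely illustrative): run a sample of programs one step per round, and when one halts after T steps, credit its output string with weight 2^-T.

```python
# A minimal sketch with toy stand-ins for TMs: each "program" is a generator that
# yields None while still running and finally yields its output string when it halts.
import random
from collections import defaultdict

def make_program(steps, output):
    """Toy TM: runs for `steps` steps, then halts and emits `output`."""
    def gen():
        for _ in range(steps):
            yield None                  # still running
        yield output                    # halted; this is the machine's output
    return gen()

def dovetail(programs, max_rounds=1000):
    """Run every program one step per round; when a program halts after T steps,
    add 2**-T to the weight of whatever string it output."""
    weights = defaultdict(float)
    steps = defaultdict(int)
    live = dict(enumerate(programs))
    for _ in range(max_rounds):
        for i in list(live):
            steps[i] += 1
            out = next(live[i])
            if out is not None:         # the program halted this round
                weights[out] += 2.0 ** -steps[i]
                del live[i]
        if not live:
            break
    return dict(weights)

# Toy usage: fast programs dominate the weighted mixture, slow ones barely register.
programs = [make_program(random.randint(1, 30), random.choice(["0", "1"])) for _ in range(200)]
print(dovetail(programs))
```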

Maybe I'm misunderstanding something basic, but I feel like we're debating how many angels can dance on the head of a pin.

Perhaps the biggest problem is that you're talking about an entire universe of intelligent agents conspiring to change the "output string" of the TM that they're running in.  This requires them to realize that they're running in a simulation, and that the output string they're trying to influence won't even be looked at until they're all dead and gone.  That doesn't seem to give them much motivation to devote their entire civilization to twiddling bits in their universe's final output in order to shift our priors infinitesimally.  And if it did, the more likely outcome would be an intergalactic war over what string to output.

(I understand your point about them trying to "write themselves into existence, allowing them to effectively 'break into' our universe", but since you've already required their TM specification to be very simple, the most they can do is cause some type of life that might evolve in their universe to break into our universe.  This would be like humans on Earth devoting the next billion years to tricking God into re-creating slime molds after we're dead.  The things about themselves that intelligent life actually cares about and self-identifies with are the things that distinguish them from their neighbors.  Their values will be directed mainly towards opposing the values of other members of their species.  None of those distinguishing traits can be implicit in the TM, and even if they could be, they'd cancel each other out.)

Now, if they were able to encode a message to us in their output string, that might be more satisfying to them.  Like, maybe, "FUCK YOU, GOD!"

Honoring Petrov Day on LessWrong, in 2020

I think we learned that trolls will destroy the world.

Stupidity as a mental illness

It's only offensive if you still think of mental illness as shameful.

Stupidity as a mental illness

Me: We could be more successful at increasing general human intelligence if we looked at low intelligence as something that people didn't have to be ashamed of, and that could be remedied, much as we now try to look at depression and other mental illnesses as illnesses--conditions which can often be treated and which people don't need to be ashamed of.

You: YOU MONSTER! You want to call stupidity "mental illness", and mental illness is a bad and shameful thing!

Group selection update

That's technically true, but it doesn't help a lot. You're assuming one starts with fixation of non-SC in a species. But how does one get to that point of fixation, starting from fixation of SC, which is more advantageous to the individual? That's the problem.

Group selection update

It's not that I no longer endorse it; it's that I replied to a deleted comment instead of to the identical not-deleted comment.

Group selection update

"Group selection, as I've heard it explained before, is the idea that genes spread because their effects are for the good of the species. The whole point of evolution is that genes do well because of what they do for the survival of the gene. The effect isn't on the group, or on the individual, the species, or any other unit other than the unit that gets copied and inherited."

Group selection is group selection: selection of groups. That means the phenotype is group behavior, and the effect of selection is spread equally among members of the group. If the effect is death, this eliminates an entire group at once--and the nearer a selfish gene approaches fixation, the more likely it is to trigger a group extinction. Consider what would happen if you ran Axelrod's experiments with group selection implemented, so that groups went extinct if total payoff in the group fell below some threshold; I sketch such a setup below.

The key point is nonlinearity. If the group fitness function is a nonlinear function of the prevalence of a gene, then it dramatically changes fixation and extinction rates.
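
A rough sketch of that Axelrod-style thought experiment (the model, names, and numbers are all mine, purely illustrative): within each group, individuals reproduce in proportion to their payoff, which favors defectors; but any group whose total payoff falls below a threshold goes extinct and is recolonized from a surviving group. The extinction threshold is exactly the kind of nonlinear, group-level fitness function I mean.

```python
# A rough, invented sketch of Axelrod-style play with group-level extinction added.
import random

GROUPS, SIZE, GENS = 50, 20, 200
B, C = 3.0, 1.0                      # benefit a cooperator confers, cost it pays
THRESHOLD = 1.5 * SIZE               # groups whose total payoff falls below this go extinct

def payoff(is_coop, n_coop):
    """Individual payoff: share of the benefit produced by the *other* members,
    minus the individual's own cost of cooperating."""
    others = n_coop - (1 if is_coop else 0)
    return B * others / (SIZE - 1) - (C if is_coop else 0.0)

groups = [[random.random() < 0.5 for _ in range(SIZE)] for _ in range(GROUPS)]
for _ in range(GENS):
    # Within-group selection: members reproduce in proportion to payoff (favors defectors).
    offspring = []
    for g in groups:
        n = sum(g)
        fitness = [max(payoff(c, n), 0.01) for c in g]
        offspring.append(random.choices(g, weights=fitness, k=SIZE))
    # Between-group selection: low-payoff groups go extinct; survivors recolonize.
    survivors = [g for g in offspring if sum(payoff(c, sum(g)) for c in g) >= THRESHOLD]
    groups = [list(random.choice(survivors or offspring)) for _ in range(GROUPS)]

print("final cooperator fraction:", sum(map(sum, groups)) / (GROUPS * SIZE))
```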

"Well, maybe. If the plant has a typical set of recessive genes in its genome, self-fertilisation is a disaster. A few generations down the line, the self-fertilising plant will have plenty of genetic problems arising from recessive gene problems, and will probably die out. This means that self-fertilisation is bad - a gene for self-fertilisation will only prosper in those cases where it's not fertilising itself. It will do worse."

No. Self-fertilisation doesn't prevent cross-fertilisation. The self-fertiliser has just as many offspring from cross-fertilisation as the self-sterile plant, but it has, in addition, offspring from selfing. Many of those selfed offspring may die, but if just one of them survives, it's still a gain.
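
A back-of-the-envelope version of this, with invented numbers: suppose both plants sire and bear the same 100 outcrossed seeds, and the self-fertiliser additionally sets 20 selfed seeds, of which inbreeding depression kills 70%.

```python
# Invented numbers, just to make the accounting explicit.
outcrossed_seeds = 100      # offspring via cross-fertilisation (same for both plants)
selfed_seeds = 20           # extra seeds the self-fertiliser sets by selfing
selfed_survival = 0.3       # harsh survival for selfed seeds (inbreeding depression)

self_sterile_offspring = outcrossed_seeds
self_fertile_offspring = outcrossed_seeds + selfed_seeds * selfed_survival
print(self_sterile_offspring, self_fertile_offspring)   # 100 vs 106.0: any surviving selfed seed is pure gain
```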


Group selection update

You're assuming that the benefits of an adaptation can only be linear in the fraction of group members with that adaptation. If the benefits are nonlinear, then they can't be modeled by individual selection, or by kin selection, or by the Haystack model, or by the Harpending & Rogers model, in all of which the total group benefit is a linear sum of the individual benefits.

For instance, the benefits of the Greek phalanx are tremendous if 100% of Greek soldiers will hold the line, but negligible if only 99% of them do. We can guess--though I don't know if it's been verified--that slime mold aggregative reproduction can be maintained against invasion only because a slime mold aggregation in which 100% of the single-cell organisms play "fairly" in deciding which of them get to produce germ cells survives, while an aggregation in which even one cell's genome insists on becoming the germ cell dies off within two generations. I think individual selection would predict that the population would be taken over by that anti-social behavior.
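
To put numbers on the phalanx and slime-mold examples (a toy illustration of mine, with invented values): compare a linear group-benefit function with an all-or-nothing one when a single defector appears in a group of 100.

```python
# A toy comparison, with invented numbers, of linear vs. all-or-nothing group benefit.
def linear_benefit(n_cooperators, group_size, b=10.0):
    """Each cooperator adds the same increment: benefit is linear in prevalence."""
    return b * n_cooperators / group_size

def threshold_benefit(n_cooperators, group_size, b=10.0):
    """Phalanx / slime-mold style: the benefit exists only if literally everyone cooperates."""
    return b if n_cooperators == group_size else 0.0

for model in (linear_benefit, threshold_benefit):
    print(model.__name__, "all cooperate:", model(100, 100), "one defector:", model(99, 100))
# linear_benefit    all cooperate: 10.0  one defector: 9.9  (one defector costs the group almost nothing)
# threshold_benefit all cooperate: 10.0  one defector: 0.0  (one defector destroys the entire benefit)
```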
