Book Review: Why Everyone (Else) Is a Hypocrite

He's saying that it's extremely hard to answer those questions about edge detectors. We have little agreement on whether we should be concerned about the experiences of bats or insects, and it's similarly unobvious whether we should worry about the suffering of edge detectors.

Being concerned implies that 1) something has experiences, 2) those experiences can be negative or disliked in a meaningful way, and 3) we morally care about that.

I'd like to ask about the first condition: what is the set of things that might have experience, things whose experiences we might try to understand? Is there a principled or at least reasonable and consistent definition? Is there a reason to privilege edge detectors made from neurons over, say, a simple edge detector program made from code? Could other (complex, input-processing) tissues and organs have experience, or only those made from neurons?

Could the brain be logically divided in N different ways, such that we'd worry about the experience of a certain sub-network using division A, and not worry about a different sub-network using division B, but actually they're composed mostly of the same neurons, we just model them differently?

We talk about edge detectors mostly because they're simple and "stand-alone" enough that we located and modeled them in the brain. There are many more complex and less isolated parts of the brain we haven't isolated and modeled well yet; should that make us more or less concerned that they (or parts of them) have relevant experiences?

Finally, if very high-level parts of my brain ("I") have a good experience, while a theory leads us to think that lots of edge detectors inside my brain are having bad experiences ("I can't decide if that's an edge or not, help!"), how might a moral theory look that would resolve these or trade them off against each other?

Do you think you are a Boltzmann brain? If not, why not?

This is a question similar to "am I a butterfly dreaming that I am a man?". Both statements are incompatible with any other empirical or logical belief, or with making any predictions about future experiences. Therefore, the questions and belief-propositions are in some sense meaningless. (I'm curious whether this is a theorem in some formalized belief structure.)

For example, there's an argument about B-brains that goes: simple fluctuations are vastly more likely than complex ones; therefore almost all B-brains that fluctuate into existence will exist for only a brief moment and will then chaotically dissolve in a kind of time-reverse of their fluctuating into existence.

Should a B-brain expect a chaotic dissolution in its near future? No, because the very concepts of physics and thermodynamics that lead it to make such predictions are themselves the results of random fluctuations. It remembers reading arguments and seeing evidence for Boltzmann's entropy theorem, but those memories are false, the result of random fluctuations.

So a B-brain shouldn't expect anything at all (conditioning on its own subjective probability of being a B-brain). That means a belief in being a B-brain isn't something that can be tied to other beliefs and questioned.

Book Review: Why Everyone (Else) Is a Hypocrite

Let's take the US government as a metaphor. Instead of saying it's composed of the legislative, executive, and judicial modules, Kurzban would describe it as being made up of modules such as a White House press secretary

Both are useful models of different levels of the US government. Is the claim here that there is no useful model of the brain as a few big powerful modules that aggregate sub-modules? Or is it merely that others posit only a few large modules, whereas Kurzban thinks we must model both small and large agents at once?

We don't ask "what is it like to be an edge detector?", because there was no evolutionary pressure to enable us to answer that question. It could be most human experience is as mysterious to our conscious minds as bat experiences.

If "human experience" includes the experience of an edge detector, I have to ask for a definition of "human experience". Is he saying an edge detector is conscious or sentient? What does it mean to talk of the experience of such a relatively small and simple part of the brain? Why should we care what its experience is like, however we define it?

Book Review: Open Borders

Finding the percentage of "immigrants" is misleading, since it's immigrants from Mexico and Central America who are politically controversial, not generic "immigrants" averaged over all sources.

I'm no expert on American immigration issues, but I presume this is because most immigrants come in through the (huge) southern land border, and are much harder for the government to control than those coming in by air or sea.

However, I expect immigrants from any other country outside the Americas would be just as politically controversial if large numbers of them started arriving, and an open borders policy with Europe or Asia or Africa would be just as unacceptable to most Americans.

Are Americans much more accepting of immigrants from outside Central and South America?

Book Review: Open Borders

immigrants are barely different from natives in their political views, and they adopt a lot of the cultural values of their destination country.

The US is famous for being culturally and politically polarized. What does it even mean for immigrants to be "barely different from natives" politically? Do they have the same (polarized) spread of positions? Do they all fit into one of the existing political camps without creating a new one? Do they all fit into the in-group camp for Caplan's target audience?

And again:

[Caplan] finds that immigrants are a tiny bit more left-wing than the general population but that their kids and grandkids regress to the political mainstream.

If the US electorate is polarized left-right, does being a bit more left-wing mean a slightly higher percentage of immigrants than of natives are left-wing, but immigrants are still as polarized as the natives?

Contra Paul Christiano on Sex

bad configurations can be selected against inside the germinal cells themselves or when the new organism is just a clump of a few thousand cells

Many genes and downstream effects are only expressed (and can be selected on) after birthing/hatching, or only in adult organisms. This can include whole organs, e.g. mammal fetuses don't use their lungs in the womb. A fetus could be deaf, blind, weak, slow, stupid - none of this would stop it from being carried to term. An individual could be terrible at hunting, socializing, mating, raising grandchildren - none of that would stop it from being born and raised to adulthood.

There's no biological way to really test the effect of a gene ahead of time. So it's very valuable to get genes that have already been selected for beneficial effects outside of early development.

That's in addition to p.b.'s point about losing information.

Contra Paul Christiano on Sex

When you get an allele from sex, there are two sources of variance. One is genes your (adult) partner has that are different from yours. The other is additional de novo mutations in your partner's gametes.

The former has already undergone strong selection, because it was part of one (and usually many) generations' worth of successfully reproducing organisms. This is much better than getting variance from random mutations, which are more often bad than good, and can be outright fatal.
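This filtering argument can be illustrated with a toy simulation (all numbers below are my own made-up assumptions, not anything from the post): alleles inherited from a living partner are draws from the same mutation distribution as de novo mutations, but conditioned on having passed at least one generation of viability selection, so their average fitness effect comes out better.

```python
import random

random.seed(0)

def mutation_effect():
    # Toy assumption (mine): most new mutations are mildly harmful,
    # a few are helpful.
    return random.gauss(-0.05, 0.1)

# De novo mutations: raw draws from the effect distribution.
de_novo = [mutation_effect() for _ in range(100_000)]

# Partner alleles: the same kind of draws, but filtered through one
# generation of viability selection, where carriers survive with a
# probability that rises with the allele's fitness effect.
partner = [e for e in de_novo
           if random.random() < min(1.0, max(0.0, 0.5 + 5 * e))]

print(f"mean effect of de novo alleles : {sum(de_novo) / len(de_novo):+.4f}")
print(f"mean effect of selected alleles: {sum(partner) / len(partner):+.4f}")
```

The selected pool has a strictly better mean effect, and the gap only widens if the alleles have been through many generations of selection rather than one.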

Selecting through many generations of gametes, like (human) sperm do, isn't good enough; it doesn't filter out bad mutations in genes that aren't expressed in sperm cells.

Lateral gene transfer might be as good as sex, but I don't see how higher mutation rates can compete. I believe that empirically, mutations that weaken one of the anti-mutation DNA preservation mechanisms in gametes are usually deleterious and are not selected.

This Can't Go On

I propose using computational resources as the "reference" good.

I don't understand the implications of this, can you please explain / refer me somewhere? How is the GDP measurement resulting from this choice going to be different from another choice like control of matter/energy? Why do we even need to make a choice, beyond the necessary assumption that there will still be a monetary economy (and therefore a measurable GDP)?

In the hypothetical future society you propose, most value comes from non-material goods.

That seems very likely, but it's not a necessary part of my argument. Most value could keep coming from material goods, if we keep inventing new kinds of goods (i.e. new arrangements of matter) that we value higher than past goods.

However, these non-material goods are produced by some computational process. Therefore, buying computational resources should always be marginally profitable. On the other hand, the total amount of computational resources is bounded by physics. This seems like it should imply a bound on GDP.

There's a physical bound on how much computation can be done in the remaining lifetime of the universe (in our future lightcone). But that computation will necessarily take place over a very, very long span of time.

For as long as we can keep computing, the set of computation outputs (inventions, art, simulated-person-lifetimes, etc) each year can keep being some n% more valuable than the previous year. The computation "just" needs to keep coming up with better things every year instead of e.g. repeating the same simulation over and over again. And this doesn't seem impossible to me.
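As a toy sketch of this argument (the numbers are arbitrary assumptions of mine): even with a fixed compute budget per year, GDP can keep compounding for as long as the value produced per unit of computation keeps improving.

```python
# Toy model: compute per year is fixed (bounded by physics), but the value
# produced per unit of computation grows n% per year because the outputs
# keep getting better instead of repeating.
compute_per_year = 1.0   # arbitrary units, constant over time
growth = 0.02            # n% = 2% annual growth in value per computation

gdp = []
value_per_compute = 1.0
for year in range(500):
    gdp.append(compute_per_year * value_per_compute)
    value_per_compute *= 1 + growth

print(f"year   0 GDP: {gdp[0]:12.2f}")
print(f"year 499 GDP: {gdp[-1]:12.2f}")
```

The physical bound on total computation limits how long this can continue, not the growth rate while it lasts.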

This Can't Go On

I think that most people would prefer facing a 10e-6 probability of death to paying 1000 USD.

The sum of 1000 USD comes from the average wealth of people today. Using (any) constant here encodes the assumption that GDP per capita won't keep growing.

If we instead suppose a purely relative limit, e.g. that a person is willing to pay a 1e-6 part of their personal wealth to avoid a 1e-6 chance of death, then we don't get a bound on total wealth.
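A quick arithmetic check of the relative-limit idea (illustrative numbers only, chosen by me): if the acceptable payment scales with wealth, the implied value of a statistical life equals current wealth at every wealth level, so no fixed constant like 1000 USD caps it.

```python
# Toy check: pay a 1e-6 fraction of your wealth to avoid a 1e-6 chance
# of death. The implied value of a statistical life is then just your
# wealth, whatever it is, so it grows without bound as wealth grows.
implied_values = []
for wealth in (1e5, 1e8, 1e11):
    payment = 1e-6 * wealth          # relative willingness to pay
    value_of_life = payment / 1e-6   # payment divided by the risk avoided
    implied_values.append(value_of_life)
    print(f"wealth {wealth:14,.0f} -> implied value of life {value_of_life:14,.0f}")
```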
