It's not like anything to be a bat

...at least not if you accept a certain line of anthropic argument.

Thomas Nagel famously challenged the philosophical world to come to terms with qualia in his essay "What Is It Like to Be a Bat?". Bats, with sensory systems so completely different from those of humans, must have exotic bat qualia that we could never imagine. Even if we deduced all the physical principles behind echolocation, even if we could specify the movement of every atom in a bat's senses and nervous system that represents its knowledge of where an echolocated insect is, we would still have no idea what it's like to feel a subjective echolocation quale.

Anthropic reasoning is the idea that you can reason by conditioning on your own existence. For example, the Doomsday Argument says that you would be more likely to exist in the present day if the overall number of future humans were medium-sized instead of humongous; therefore, since you exist in the present day, there must be only a medium-sized number of future humans, and the apocalypse must be nigh, for values of nigh equal to "within a few hundred years or so".
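
(A minimal sketch of the arithmetic, in Gott's formulation of the argument: treat your birth rank r among all N humans who will ever live as uniformly distributed, so that

  P(N > 20r) = P(r/N < 1/20) = 0.05

With 95% confidence, then, N < 20r. Plugging in the commonly cited estimate of roughly 10^11 humans born so far caps the total at around 2 x 10^12, and the faster the population grows, the sooner that cap is reached.)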

The Buddhists have a parable to motivate young seekers after enlightenment. They say - there are zillions upon zillions of insects, trillions upon trillions of lesser animals, and only a relative handful of human beings. For a reincarnating soul to be born as a human being, then, is a rare and precious gift, and an opportunity that should be seized with great enthusiasm, as it will be endless eons before it comes around again.

Whatever one thinks of reincarnation, the parable raises an interesting point. Considering the vast number of non-human animals compared to humans, the probability of being a human is vanishingly low. Therefore, chances are that if I could be an animal, I would be. This makes a strong anthropic argument that it is impossible for me to be an animal.

The phrase "for me to be an animal" may sound nonsensical, but "why am I me, rather than an animal?" is not obviously sillier than "why am I me, rather than a person from the far future?". If the doomsday argument is sufficient to prove that some catastrophe is preventing me from being one of a trillion spacefaring citizens of the colonized galaxy, this argument hints that something is preventing me from being one of a trillion bats or birds or insects.

And that something could be that animals lack subjective experience. This would explain quite nicely why I'm not an animal: because you can't be an animal, any more than you can be a toaster. So Thomas Nagel can stop worrying about what it's like to be a bat, and the rest of us can eat veal and foie gras guilt-free.

But before we break out the dolphin sausages - this is a pretty weird conclusion. It suggests a qualitative and discontinuous difference between the nervous systems of other beings and our own, not just in what capacities they have but in the way they cause experience. It should make dualists a little happier and materialists a little more confused (though it's far from a knockout proof of either).

The strongest objection I can think of is that what matters is not that we are beings with experiences, but that we know we are beings with experiences and can self-identify as conscious - a distinction that applies only to humans, and maybe to some species like apes and dolphins that are rare enough not to throw off the numbers. But why can't we use the reference class of conscious beings if we want to? One might as well consider it significant only that we are beings who make anthropic arguments, and conclude not that Doomsday is coming but that anthropic reasoning will fall out of favor in a few decades.

But I still don't fully accept this argument, and I'd be pretty happy if someone could find a more substantial flaw in it.

189 comments

Considering the vast number of non-human animals compared to humans, the probability of being a human is vanishingly low. Therefore, chances are that if I could be an animal, I would be. This makes a strong anthropic argument that it is impossible for me to be an animal.

The anthropic principle creeps in again here, and methinks you missed it. The ability to make this argument is contingent upon being an entity capable of a certain level of formal introspection. Since you have enough introspection to make the argument, you can't be an animal. In your next million lives, so to speak, you won't be able to make this argument, though someone else out there will.

I'm sorry, but I'm a bit shocked at how people on this site can seriously entertain ideas like "why am I me?" or "why do I live in the present?" except as early April Fools' jokes. I am of course necessarily me, because I call whoever I am "me". And I necessarily live in the present, because I call the time I live in "the present". The question "Why am I not somebody else?" is nonsensical because for almost anybody I am somebody else. I think the confusion stems from treating your own consciousness as something special and, at the same time, as nothing special.

The question "Why am I not somebody else?" is nonsensical because for almost anybody I am somebody else.

More precisely: "I" refers to some numerically unique entity x. Thus "I is someone else" means x ≠ x, which is an outright contradiction, and we shouldn't waste our time asking why contradictions aren't the case.

It only sounds nonsensical because of the words in which it's asked. The question raised by anthropic reasoning isn't "why do I live in a time I call the present" (to which, as you say, the answer is linguistic - of course we'd call our time the present) but rather "why do I live in the year 2010?" or, most precisely of all, "Given that I have special access to the subjective experience of one being, why would that be the experience of a being born in the late 20th century, as opposed to some other time?"

That may still sound tautological - after all, if it wasn't the 20th century, it'd be somewhen else and we'd be asking the same question - but in fact it isn't. Consider these two questions:

  • Why am I made out of carbon, as opposed to helium?
  • Why do I live in the 20th century, as opposed to the 30th?

The correct answer to the first is not to say, "Well, if you were made out of helium, you could just ask why you were made out of helium, so it's a dumb question"; it's to point out the special chemical properties of carbon. Anthropic reasoning suggests that we can try to do the same for the second, pointing out certain special properties of the 20th century.

The big difference is that the first question can be easily rephrased to "why are people made out of carbon and not of helium", but the second can't. But that difference isn't enough to make the second tautological or meaningless.

I think maybe some of this was meant for the comment above me.

That said, I think the "I" really is the source of some if not all of these confusions. And:

The big difference is that the first question can be easily rephrased to "why are people made out of carbon and not of helium", but the second can't. But that difference isn't enough to make the second tautological or meaningless.

I think the difference is exactly enough to make the second one tautological or meaningless. What you have to do is identify some characteristics of "I" and then ask: Why do entities of this type exist in the 20th century, as opposed to the 30th? If you have identified features that distinguish 20th century people from 30th century people you will have asked something interesting and meaningful.

How would you characterise and answer this question:

  • Why do I like to make paperclips, as opposed to other shapes into which I could form matter?

If 'you' lived in the 30th century you'd have different memories, at the very least, and thus 'you' would be a different person. That is to say, you wouldn't exist.

On the other hand, if the brain is reasonably substrate-independent, you could be exactly the same person if you were made out of helium.

A world different enough from this that you were made out of helium would probably leave you with different memories.

I'm a bit shocked at how people on this site can seriously entertain ideas like "why am I me?" or "why do I live in the present?"

Out of all of the questions we can ask, "why am I me?" is one of the most interesting, especially if done with the goal of being able to concisely explain it to other people. Your post is confusing to me, because I think "why am I me?" is not a nonsense question but "Why am I not somebody else" is a nonsense question.

Does anyone here think that "why am I me?" is actually a really easy question? What's the answer, then, or how do I dissolve the question? I do not claim to understand the mystery of subjective experience. The point where I stop understanding is something mysterious connected to the Born probabilities.

If "Why am I me?" is nonsense it does not follow that all discussions of subjective experience or even anthropic reasoning are nonsense.

Sure. I edited my post to try to make my thoughts on Tordmor's post more clear.

If you were any other animal on Earth, you wouldn't be considering what it would be like to be something else. The Doomsday argument and arguments like it are usually formulated along the lines of "Of all the persons that could reason like me, only this small percentage were ever wrong". When animals are prevented, due to their neurological limitations, from reasoning as necessitated by the argument, they're not part of this consideration.

This doesn't mean that they're not sentient, it just means that by thinking about anthropic problems you're part of much narrower set of beings than just sentient ones.

Why not limit the set of people who could reason like me to "people who are using anthropic reasoning" and just assume people will stop using anthropic reasoning in the next hundred years? Is this a reductio ad absurdum, or do you think it's a valid conclusion?

Perhaps the fact that we are so confused by anthropic reasoning is a priori evidence that we are very early anthropic reasoners, and thus the Doomsday argument is false. Further, not every human is an anthropic reasoner. If the growth rate of anthropic reasoners is less than the growth rate of humans, we should then extend our estimate of the lifespan of the human race that contains anthropic reasoners (and of course this says nothing about the lifespan of humanity without anthropic reasoners).

A handful of powerful anthropic reasoners could enforce a ban on anthropic reasoning: burning books, prohibiting its teaching and silencing those who came to be anthropic reasoners on their own. If within two generations we could stabilize the anthropic reasoner population at around 35 (say 10 enforcing, 25 to account for enforcement failure) with life spans averaging 100 years that would put us in the final 95% (I think, anyone have an educated estimate of how many anthropic reasoners there have been up to this point in time?) until a permanent solution was reached or humanity began spreading and we would need at least one enforcer for every colony-- but given optimistic longevity scenarios we could still keep the anthropic reasoner population to a minimum. The permanent solution is probably obvious: A singleton could enforce the ban by itself and make itself the last or at least close to last anthropic reasoner in the galaxy.

The above strikes me as obviously insane so there has to be a mistake somewhere, right?

Maybe somebody will just come up with an elegant explanation of the underlying probability theory some time in the next few years, it'll go viral among the sorts of people who would otherwise have attempted anthropic reasoning, and the whole thing will go the way of geocentrism, but with fewer religiously-motivated defenders.

If within two generations we could stabilize the anthropic reasoner population at around 35 (say 10 enforcing, 25 to account for enforcement failure) with life spans averaging 100 years that would put us in the final 95% ...

That sounds like something Evidential Decision Theory would do, but not Timeless or Updateless Decision Theories. Unless you think that reaching a certain number of anthropic reasoners would cause human extinction.

Hmmm. Yes, that's right, as far as I understand those theories at least. I guess my point is that something seems very wrong with an argument that makes predictions but offers nothing in the way of causal regularities whose variables could in principle be manipulated to alter the result. It isn't even like seeing a barometer indicate low pressure and then predicting a storm (while not understanding the variables that lead to the correlation between barometers indicating low pressure and storms coming): there isn't any causal knowledge involved in the Doomsday argument, afaict. Note that this isn't the case with all anthropic reasoning; it is peculiar to this argument. The only way we know of predicting the future is by knowing earlier conditions and the rules governing those conditions over time: the Doomsday argument is thus an entirely new way of making predictions. This suggests to me something has to be wrong with it.

Maybe the self-indication assumption is the way out, I can't tell if I would have the same problem with it.

You know that you are using anthropic reasoning, so you can limit yourself to the group of people using anthropic reasoning. You likewise know that your name is Yvain... so you can limit yourself to the group of people named Yvain?

"Why not limit the set of people who could reason like me to "people who are using anthropic reasoning" and just assume people will stop using anthropic reasoning in the next hundred years?"

That's known as the Doomsday argument, as far as I can tell.

My point, to put it a bit simply, is that anthropic reasoning is only applicable to beings that are capable of anthropic reasoning. If you know that there are a billion agents, of which one thousand are capable of anthropic reasoning, and you know that of the anthropic reasoners 950 are on island A and 50 are on island B, and all the non-anthropic reasoners are on island B, then you know, based on anthropic reasoning, that you're on island A with 95% certainty. The rest of the agents simply don't matter. You can't conclude anything about them beyond that they're most likely not capable of anthropic reasoning.
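
A minimal sketch of that conditioning in Python, with the numbers assumed above:

    total_agents = 10**9     # all agents; almost all are non-reasoners
    reasoners = 1000         # agents capable of anthropic reasoning
    reasoners_on_a = 950     # of those, the number on island A

    # Conditioning on "I am an anthropic reasoner" throws out every
    # non-reasoner, so total_agents never enters the calculation.
    p_island_a = reasoners_on_a / reasoners
    print(p_island_a)        # 0.95 -> 95% sure you're on island A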

What happens if we replace "capable of anthropic reasoning" with "have considered the anthropic doomsday argument"? As far as I can tell, it becomes a tautology.

I'm not sure, but it seems that your tautological way of putting it is simply more accurate, at the cost that using it requires more precise a priori knowledge.

I argued before -- in the discussion of the Self-Indication Assumption -- that this is exactly the right anthropic reference class, namely people who make the sorts of considerations that I am engaging in. However, that doesn't show that people will just stop using anthropic reasoning. It shows that this is one possibility. On the other hand, it is still possible that people will stop using such reasoning because there will be no more people.

That's an interesting observation.

There's a problem in assuming that consciousness is a 0/1 property; that you're either conscious, or not.

There's another problem in assuming that YOU are a 0/1 property; that there is exactly one atomic "your consciousness".

Reflect on the discussion in the early chapters of Daniel Dennett's "Consciousness Explained", about how consciousness is not really a unitary thing, but the result of the interaction of many different processes.

An ant has fewer of these processes than you do. Instead of asking "What are the odds that 'I' ended up as me?", ask, "For one of these processes, what are the odds that it would end up in me, rather than in an ant?"

According to Wikipedia's entry on biomass, ants have 10-100 times the biomass of humans today.

According to Wikipedia's list of animals by neuron count, ants have 10,000 neurons.

According to that page, and this one, humans have 10^11 neurons.

Information is proportional not to the number of neurons, but to the number of patterns that can be stored in those neurons, which is likely somewhere between N and N^2. I'm gonna call it NlogN.

I weigh as much as 167,000 ants. Each of them has ~ 10,000 log(10,000) bits of info. I have ~ 10^11 log(10^11) bits of info. I contain as much information as 165 times my body-mass worth of ants.

So if we ignore how much longer ants have lived than humans, the odds are better that a random unit of consciousness today would turn up in a human, than in an ant.
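
A quick back-of-the-envelope check of that arithmetic, assuming the info ~ N log N model above (the log base cancels out of the ratio):

    from math import log10

    ant_neurons = 1e4            # the figure for one ant used above
    human_neurons = 1e11         # the figure for one human
    ants_per_human_mass = 167000

    def info(n):
        # the assumed "info ~ N log N" model
        return n * log10(n)

    ratio = info(human_neurons) / info(ant_neurons)   # ~2.75e7
    print(round(ratio / ants_per_human_mass))         # -> 165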

(Also note that we can only take into account ants in the past, if reincarnation is false. If reincarnation is true, then you can't ask about the chances of you appearing in a different time. :) )

If you're gonna then say, "But let's not just compare ourselves to ants; let's ask about turning up in a human vs. turning up in any other species", then you have the dice-labelling problem argued below: You're claiming humans are the 1 on the die.

Information is proportional not to the number of neurons, but to the number of patterns that can be stored in those neurons,

No, it's proportional to the log of the number of patterns that can be (semi-stably) stored. E.g. n bits can store 2^n patterns.

which is likely somewhere between N and N^2. I'm gonna call it NlogN.

I'd like to see a lot more justification for this. If each connection were binary (it's not), and connections were possible between all N neurons (they're not), then we would have N^2 bits.

No, it's proportional to the log of the number of patterns that can be (semi-stably) stored. E.g. n bits can store 2^n patterns.

Oops! Correct. That's what I was thinking, which is why I said info ~ N log N for N neurons. N neurons => max N^2 connections, 1 bit per connection, max N^2 bits in the simplest model.

The math trying to estimate the number of patterns that can be stored in different neural networks is horrendous. I've seen "proofs" for Hopfield network capacity ranging from, I think, N/logN to NlogN.

Anyway, it's more-than-proportional to N, if for no other reason than that the number of connections per neuron is related to the number of neurons. A human neuron has about 10,000 connections to other neurons. Ant neurons don't.

Humans are more analogous to an ant colony than to an individual ant, so that's where you should make the comparison: to a number of ant colonies with ant mass equal to your mass. Within each colony, you should treat each ant as a neuron in a large network, meaning you multiply the ant information not by the number of ants Na, but by Na log Na.

Assume 1000 ants/colony. You weigh as much as 167 colonies. Letting N be the number of neurons in an ant (and measuring in Hartleys to make the math easier), each colony has

(N log N) (Na log Na)
= (1e4 log 1e4) (1e3 log 1e3) = 1.2e8 H

Multiplying by the number of colonies (since they don't act like a mega-colony) gives

1.2e8 H * 167
= 2e10 H

This compares with the value for humans:

1e11 log 1e11
= 1.1e12 H

So that means you have ~55 times as much information per unit body weight, not that far from your estimate of 165.

I don't know what implications this calculation has for the topic, even assuming it's correct, but there you go.
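
The same arithmetic in Python, as a sketch (colony size and neuron counts are the assumptions above; log10 because the figures are in Hartleys):

    from math import log10

    N = 1e4          # neurons per ant
    Na = 1e3         # ants per colony (assumed)
    colonies = 167   # colonies weighing as much as one human

    colony_info = (N * log10(N)) * (Na * log10(Na))   # 1.2e8 H
    all_colonies = colony_info * colonies             # ~2e10 H
    human_info = 1e11 * log10(1e11)                   # ~1.1e12 H

    print(human_info / all_colonies)                  # ~55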

Within each colony, you should treat each ant as a neuron in a large network

Good point!

This is a very intriguing line of thought. I'm not sure it makes sense, but it seems worth pondering further.

I weigh as much as 167,000 ants. Each of them has ~ 10,000 log(10,000) bits of info. I have ~ 10^11 log(10^11) bits of info. I contain as much information as 165 ants.

I'm not following your math here, and I'm especially not following the part where if a person contains as much information as 165 ants and there are 1 quadrillion ants and ~ 10 billion people, a given unit of information is more likely to end up in a human than in an ant. And since we do believe reincarnation is false, it's much worse than that, since ants have been around longer than humans.

Also, I have a philosophical objection to basing it on units of consciousness. If we're to weight the chances of being a certain animal by the number of bits of information they have, doesn't that imply we're working from a theory where "I" am a single bit of information? I'd much sooner say that I am all the information in my head equally, or an algorithm that processes that information, or at least not just a single bit of it.

Oops; that was supposed to say, "I contain as much information as 165 times my body-mass in ants".

I'm kinda disappointed that your objection was that the math didn't work, and not that I'm smarter than 165 ants. (I admit they are winning the battle over the kitchen counter. But that's gotta be, like, 2000 ants. Don't sell me short.)

If you want to say that you're all the information in your head equally, then you can't ask questions like "What are the odds I would have been an ant?"

The key point I will remember from reading this post is that the anthropic Doomsday argument can safely be put away in a box labelled 'muddled thinking about consciousness' alongside 'how can you get blue from not-blue?', 'if a tree falls in a forest with nobody there does it make a sound?' and 'why do quantum events collapse when someone observes them?'.

There are situations in which anthropic reasoning can be used but it is a mistake to think that this is because of the ability of a bunch of atoms to perform the class of processing we happen to describe as consciousness.

The probability of a randomly picked currently-living person having a Finnish nationality is less than 0.001. I observe myself being a Finn. What, if anything, should I deduce based on this piece of evidence?

The results of any line of anthropic reasoning are critically sensitive to which set of observers one chooses to use as the reference class, and it's not at all clear how to select a class that maximizes the accuracy of the results. It seems, then, that the usefulness of anthropic reasoning is limited.

That kind of anthropic reasoning is only useful in the context of comparing hypotheses, Bayesian style. Conditional probabilities matter only if they are different given different models.

For most possible models of physics, e.g. X and Y, P(Finn|X) = P(Finn|Y). Thus, that particular piece of info is not very useful for distinguishing models of physics.

OTOH, P(21st century|X) may be >> P(21st century|Y). So anthropic reasoning is useful in that case.

As for the reference class, "people asking these kinds of questions" is probably the best choice. Thus I wouldn't put any stock in the idea that animals aren't conscious.

Just think: in a universe that contains a countable infinity of conscious observers (but only finitely many up to any given moment of time), people's heads would explode as they tried to cope with the not-even-well-defined probability of being born on or before their birth date.

"why am I me, rather than an animal?" is not obviously sillier than "why am I me, rather than a person from the far future?".

Well, quite. Both are absurd.

Can't I use the same reasoning to prove that non-Americans aren't conscious?

The anthropic principle only provides between 4 and 5 bits of evidence for this theory, not nearly enough to support the complexity of the same brain structures being conscious in Americans but not in non-Americans.

All right, then. I got 33 bits that says everyone except me is unconscious!
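
A rough check of both bit counts, assuming circa-2010 populations of about 6.8 billion people worldwide and 310 million Americans (figures not given in the thread):

    from math import log2

    world_pop = 6.8e9    # assumed world population, ~2010
    us_pop = 3.1e8       # assumed US population, ~2010

    print(log2(world_pop / us_pop))   # ~4.5 bits: "this observer is American"
    print(log2(world_pop))            # ~32.7 bits: "this observer is me"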

This is actually a very good point. If the quantum mind hypothesis is false, then either subjective experience doesn't exist at all (which anyone who's reading this post ought to take as an empirically false statement) or solipsism is true and only a single subjective experience exists. 33 bits of info are just not nearly enough to explain how subjective experience is instantiated in billions of complex human brains each slightly different from all others, as opposed to a single brain.

If the quantum mind hypothesis is false, then either subjective experience doesn't exist at all (which anyone who's reading this post ought to take as an empirically false statement) or solipsism is true and only a single subjective experience exists.

Why's that?