...at least not if you accept a certain line of anthropic argument.

Thomas Nagel famously challenged the philosophical world to come to terms with qualia in his essay "What is it Like to Be a Bat?". Bats, with sensory systems so completely different from those of humans, must have exotic bat qualia that we could never imagine. Even if we deduce all the physical principles behind echolocation, even if we could specify the movement of every atom in a bat's senses and nervous system that represents its knowledge of where an echolocated insect is, we still have no idea what it's like to feel a subjective echolocation quale.

Anthropic reasoning is the idea that you can reason conditioning on your own existence. For example, the Doomsday Argument says that you would be more likely to exist in the present day if the overall number of future humans were medium-sized instead of humongous; therefore, since you exist in the present day, there must be only a medium-sized number of future humans, and the apocalypse must be nigh, for values of nigh equal to "within a few hundred years or so".
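The update behind the Doomsday Argument can be sketched numerically. Here is a toy two-hypothesis version; the totals and the birth-rank assumption below are illustrative numbers, not serious demographic estimates:

```python
from fractions import Fraction

# Two hypotheses about the total number of humans who will ever live,
# with equal priors. (Illustrative figures only.)
N_MEDIUM = 200 * 10**9   # "medium-sized": 200 billion humans ever
N_HUGE = 200 * 10**12    # "humongous": 200 trillion humans ever
# Roughly 100 billion humans have been born so far, so my birth rank
# is possible under both hypotheses.

# If I am a uniformly random sample from all humans who will ever
# live, the likelihood of my particular birth rank under a total of
# N is 1/N -- small totals make my rank more likely.
prior = Fraction(1, 2)
p_medium = prior * Fraction(1, N_MEDIUM)
p_huge = prior * Fraction(1, N_HUGE)

posterior_medium = p_medium / (p_medium + p_huge)
print(posterior_medium)  # 1000/1001
```

The "medium" hypothesis ends up with roughly 99.9% of the posterior, which is the whole force of the argument; everything controversial is hidden in the "uniformly random sample" step.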

The Buddhists have a parable to motivate young seekers after enlightenment. They say - there are zillions upon zillions of insects, trillions upon trillions of lesser animals, and only a relative handful of human beings. For a reincarnating soul to be born as a human being, then, is a rare and precious gift, and an opportunity that should be seized with great enthusiasm, as it will be endless eons before it comes around again.

Whatever one thinks of reincarnation, the parable raises an interesting point. Considering the vast number of non-human animals compared to humans, the probability of being a human is vanishingly low. Therefore, chances are that if I could be an animal, I would be. This makes a strong anthropic argument that it is impossible for me to be an animal.

The phrase "for me to be an animal" may sound nonsensical, but "why am I me, rather than an animal?" is not obviously sillier than "why am I me, rather than a person from the far future?". If the doomsday argument is sufficient to prove that some catastrophe is preventing me from being one of a trillion spacefaring citizens of the colonized galaxy, this argument hints that something is preventing me from being one of a trillion bats or birds or insects.

And that something could be that animals lack subjective experience. This would explain quite nicely why I'm not an animal: because you can't be an animal, any more than you can be a toaster. So Thomas Nagel can stop worrying about what it's like to be a bat, and the rest of us can eat veal and foie gras guilt-free.

But before we break out the dolphin sausages - this is a pretty weird conclusion. It suggests there's a qualitative and discontinuous difference between the nervous system of other beings and our own, not just in what capacities they have but in the way they cause experience. It should make dualists a little bit happier and materialists a little bit more confused (though it's far from knockout proof of either).

The most significant objection I can think of is that what matters is not that we are beings with experiences, but that we know we are beings with experiences and can self-identify as conscious - a distinction that applies only to humans, and maybe to a few species like apes and dolphins that are rare enough not to throw off the numbers. But why can't we use the reference class of conscious beings if we want to? One might as well consider it significant only that we are beings who make anthropic arguments, and conclude not that there will be a Doomsday, but that anthropic reasoning will fall out of favor in a few decades.

But I still don't fully accept this argument, and I'd be pretty happy if someone could find a more substantial flaw in it.


Considering the vast number of non-human animals compared to humans, the probability of being a human is vanishingly low. Therefore, chances are that if I could be an animal, I would be. This makes a strong anthropic argument that it is impossible for me to be an animal.

The anthropic principle creeps in again here, and methinks you missed it. The ability to make this argument is contingent upon being an entity capable of a certain level of formal introspection. Since you have enough introspection to make the argument, you can't be an animal. In your next million lives, so to speak, you won't be able to make this argument, though someone else out there will.


If you were any other animal on Earth, you wouldn't be considering what it would be like to be something else. The Doomsday argument and arguments like it are usually formulated as "Of all the persons who could reason like me, only this small percentage were ever wrong". When animals are prevented, by their neurological limitations, from reasoning as necessitated by the argument, they're not part of this consideration.

This doesn't mean that they're not sentient, it just means that by thinking about anthropic problems you're part of much narrower set of beings than just sentient ones.

Why not limit the set of people who could reason like me to "people who are using anthropic reasoning" and just assume people will stop using anthropic reasoning in the next hundred years? Is this a reductio ad absurdum, or do you think it's a valid conclusion?

Perhaps the fact that we are so confused by anthropic reasoning is a priori evidence that we are very early anthropic reasoners, and thus the Doomsday argument is false. Further, not every human is an anthropic reasoner. If the growth rate of anthropic reasoners is less than the growth rate of humans, we should then extend the estimated lifespan of the human race with anthropic reasoners (and of course this says nothing about the lifespan of humanity without anthropic reasoners). A handful of powerful anthropic reasoners could enforce a ban on anthropic reasoning: burning books, prohibiting its teaching, and silencing those who came to be anthropic reasoners on their own. If within two generations we could stabilize the anthropic reasoner population at around 35 (say 10 enforcing, 25 to account for enforcement failure), with life spans averaging 100 years, that would put us in the final 95% (I think; anyone have an educated estimate of how many anthropic reasoners there have been up to this point?) until a permanent solution was reached or humanity began spreading, at which point we would need at least one enforcer for every colony - but given optimistic longevity scenarios we could still keep the anthropic reasoner population to a minimum. The permanent solution is probably obvious: a singleton could enforce the ban by itself and make itself the last, or at least close to last, anthropic reasoner in the galaxy. The above strikes me as obviously insane, so there has to be a mistake somewhere, right?
That sounds like something Evidential Decision Theory would do, but not Timeless or Updateless Decision Theories. Unless you think that reaching a certain number of anthropic reasoners would cause human extinction.
Hmmm. Yes, that's right, as far as I understand those theories at least. I guess my point is that something seems very wrong with an argument that makes predictions but offers nothing in the way of causal regularities whose variables could in principle be manipulated to alter the result. It isn't even like seeing a barometer indicate low pressure and then predicting a storm (while not understanding the variables that led to the correlation between barometers indicating low pressure and storms coming): there isn't even any causal knowledge involved in the Doomsday argument, afaict. Note that this isn't the case with all anthropic reasoning; it is peculiar to this argument. The only way we know of predicting the future is by knowing earlier conditions and the rules governing those conditions over time: the Doomsday argument is thus an entirely new way of making predictions. This suggests to me something has to be wrong with it. Maybe the self-indication assumption is the way out; I can't tell if I would have the same problem with it.
Maybe somebody will just come up with an elegant explanation of the underlying probability theory some time in the next few years, it'll go viral among the sorts of people who would otherwise have attempted anthropic reasoning, and the whole thing will go the way of geocentrism, but with fewer religiously-motivated defenders.
You know that you are using anthropic reasoning, so you can limit yourself to the group of people using anthropic reasoning. You likewise know that your name is Yvain... so you can limit yourself to the group of people named Yvain?
That's known as the Doomsday argument, as far as I can tell. My point, put a bit simplistically, is that anthropic reasoning is only applicable to beings that are capable of anthropic reasoning. If you know that there are a billion agents, of which one thousand are capable of anthropic reasoning, and you know that 950 of the anthropic reasoners are on island A and 50 on island B, and all the non-anthropic reasoners are on island B, then you know, based on anthropic reasoning, that you're on island A with 95% certainty. The rest of the agents simply don't matter. You can't conclude anything about them beyond that they're most likely not capable of anthropic reasoning.
What happens if we replace "capable of anthropic reasoning" to "have considered the anthropic doomsday argument"? As far as I can tell, it becomes a tautology.
I'm not sure, but it seems that your tautology-way of putting it is simply more accurate, at the cost that using it requires more accurate a priori knowledge.
I argued before -- in the discussion of the Self-Indication Assumption -- that this is exactly the right anthropic reference class, namely people who make the sorts of considerations that I am engaging in. However, that doesn't show that people will just stop using anthropic reasoning. It shows that this is one possibility. On the other hand, it is still possible that people will stop using such reasoning because there will be no more people.

I'm sorry, but I'm a bit shocked at how people on this site can seriously entertain ideas like "why am I me?" or "why do I live in the present?" except as early April Fools' jokes. I am of course necessarily me, because I call whoever I am "me". And I necessarily live in the present, because I call the time I live in "the present". The question "Why am I not somebody else?" is nonsensical because, for almost anybody, I am somebody else. I think the confusion stems from treating your own consciousness as, at the same time, something special and something not.

Out of all of the questions we can ask, "why am I me?" is one of the most interesting, especially if done with the goal of being able to concisely explain it to other people. Your post is confusing to me, because I think "why am I me?" is not a nonsense question but "Why am I not somebody else" is a nonsense question. Does anyone here think that "why am I me?" is actually a really easy question? What's the answer then, or how do I dissolve the question? I do not claim to understand the mystery of subjective experience. Where I stop understanding is something mysterious connected to the Born probabilities.
If "Why am I me?" is nonsense it does not follow that all discussions of subjective experience or even anthropic reasoning are nonsense.
Sure. I edited my post to try to make my thoughts on Tordmor's post more clear.
More precisely: "I" refers to some numerically unique entity x. Thus "I am someone else" means x ≠ x, which is an outright contradiction, and we shouldn't waste our time asking why contradictions aren't the case.

It only sounds nonsensical because of the words in which it's asked. The question raised by anthropic reasoning isn't "why do I live in a time I call the present" (to which, as you say, the answer is linguistic - of course we'd call our time the present) but rather "why do I live in the year 2010?" or, most precisely of all, "Given that I have special access to the subjective experience of one being, why would that be the experience of a being born in the late 20th century, as opposed to some other time?"

That may still sound tautological - after all, if it wasn't the 20th century, it'd be somewhen else and we'd be asking the same question - but in fact it isn't. Consider these two questions:

  • Why am I made out of carbon, as opposed to helium?
  • Why do I live in the 20th century, as opposed to the 30th?

The correct answer to the first is not to say "Well, if you were made out of helium, you could just ask why you were made out of helium, so it's a dumb question"; it's to point out the special chemical properties of carbon. Anthropic reasoning suggests that we can try doing the same thing to point out certain special properties of the 20th century.

The big difference is that the first question can be easily rephrased to "why are people made out of carbon and not of helium", but the second can't. But that difference isn't enough to make the second tautological or meaningless.


I think maybe some of this was meant for the comment above me.

That said I think the "I" really is the source of some if not all of these confusions and:

The big difference is that the first question can be easily rephrased to "why are people made out of carbon and not of helium", but the second can't. But that difference isn't enough to make the second tautological or meaningless.

I think the difference is exactly enough to make the second one tautological or meaningless. What you have to do is identify some characteristics of "I" and then ask: Why do entities of this type exist in the 20th century, as opposed to the 30th? If you have identified features that distinguish 20th century people from 30th century people you will have asked something interesting and meaningful.

How would you characterise and answer this question?

  • Why do I like to make paperclips, as opposed to other shapes into which I could form matter?
If 'you' lived in the 30th century you'd have different memories, at the very least, and thus 'you' would be a different person. That is to say, you wouldn't exist. On the other hand, if the brain is reasonably substrate-independent, you could be exactly the same person if you were made out of helium.
A world different enough from this that you were made out of helium would probably leave you with different memories.

The key point I will remember from reading this post is that the anthropic Doomsday argument can safely be put away in a box labelled 'muddled thinking about consciousness' alongside 'how can you get blue from not-blue?', 'if a tree falls in a forest with nobody there does it make a sound?' and 'why do quantum events collapse when someone observes them?'.

There are situations in which anthropic reasoning can be used but it is a mistake to think that this is because of the ability of a bunch of atoms to perform the class of processing we happen to describe as consciousness.

What do you mean by "how can you get blue from not-blue"?

The probability of a randomly picked currently-living person having a Finnish nationality is less than 0.001. I observe myself being a Finn. What, if anything, should I deduce based on this piece of evidence?

The results of any line of anthropic reasoning are critically sensitive to which set of observers one chooses to use as the reference class, and it's not at all clear how to select a class that maximizes the accuracy of the results. It seems, then, that the usefulness of anthropic reasoning is limited.

That kind of anthropic reasoning is only useful in the context of comparing hypotheses, Bayesian style. Conditional probabilities matter only if they are different given different models. For most possible models of physics, e.g. X and Y, P(Finn|X) = P(Finn|Y). Thus, that particular piece of info is not very useful for distinguishing models for physics. OTOH, P(21st century|X) may be >> P(21st century|Y). So anthropic reasoning is useful in that case. As for the reference class, "people asking these kinds of questions" is probably the best choice. Thus I wouldn't put any stock in the idea that animals aren't conscious.
Just think: In a universe that contains a countable infinity of conscious observers (but finite up to any given moment of time), people's heads would explode as they tried to cope with the not-even-well-defined probability of being born on or before their birth date.

That's an interesting observation.

There's a problem in assuming that consciousness is a 0/1 property; that you're either conscious, or not.

There's another problem in assuming that YOU are a 0/1 property; that there is exactly one atomic "your consciousness".

Reflect on the discussion in the early chapters of Daniel Dennet's "Consciousness Explained", about how consciousness is not really a unitary thing, but the result of the interaction of many different processes.

An ant has fewer of these processes than you do. Instead of asking "What are the odds that 'I' ended up as me?", ask, "For one of these processes, what are the odds that it would end up in me, rather than in an ant?"

According to Wikipedia's entry on biomass, ants have 10-100 times the biomass of humans today.

According to Wikipedia's list of animals by neuron count, ants have 10,000 neurons.

According to that page, and this one, humans have 10^11 neurons.

Information is proportional not to the number of neurons, but to the number of patterns that can be stored in those neurons, which is likely somewhere between N and N^2. I'm gonna call it NlogN.

I weigh as much as 167,000 ants. Each...

No, it's proportional to the log of the number of patterns that can be (semi-stably) stored. E.g., n bits can store 2^n patterns. I'd like to see a lot more justification for this. If each connection were binary (it's not), and connections were possible between all N neurons (they're not), then we would have N^2 bits.
Oops! Correct. That's what I was thinking, which is why I said info NlogN for N neurons. N neurons => max N^2 connections, 1 bit per connection, max N^2 bits, simplest model. The math trying to estimate the number of patterns that can be stored in different neural networks is horrendous. I've seen "proofs" for Hopfield network capacity ranging from, I think, N/logN to NlogN. Anyway, it's more-than-proportional to N, if for no other reason than that the number of connections per neuron is related to the number of neurons. A human neuron has about 10,000 connections to other neurons. Ant neurons don't.
Humans are more analogous to an ant colony than to an individual ant, so that's where you should make the comparison: to a number of ant colonies with ant mass equal to your mass. Within each colony, you should treat each ant as a neuron in a large network, meaning you multiply the ant information not by the number of ants Na, but by Na log Na. Assume 1000 ants/colony. You weigh as much as 167 colonies. Letting N be the number of neurons in an ant (and measuring in Hartleys to make the math easier), each colony has (N log N)(Na log Na) = (1e4 log 1e4)(1e3 log 1e3) = 1.2e8 H. Multiplying by the number of colonies (since they don't act like a mega-colony) gives 1.2e8 H * 167 = 2e10 H. This compares with the value for humans: 1e11 log 1e11 = 1.1e12 H. So that means you have ~55 times as much information per unit body weight, not that far from your estimate of 165. I don't know what implications this calculation has for the topic, even assuming it's correct, but there you go.
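The arithmetic above can be rechecked in a few lines, taking the "N log N" pattern-capacity proxy at face value (all figures are the rough assumptions from this thread, not real neuroscience):

```python
import math

def capacity_hartleys(n):
    # The thread's "N log N" pattern-capacity proxy, in Hartleys
    # (log base 10), for a network of n units.
    return n * math.log10(n)

# Rough assumed figures: 1e4 neurons/ant, 1e3 ants/colony,
# 167 colonies massing as much as one human, 1e11 human neurons.
per_colony = capacity_hartleys(1e4) * capacity_hartleys(1e3)
all_colonies = per_colony * 167
human = capacity_hartleys(1e11)

print(per_colony)            # 1.2e8 H per colony
print(human / all_colonies)  # ~55x the information at equal body mass
```

This reproduces the 1.2e8 H per colony and the ~55x ratio quoted above, so the calculation is at least internally consistent.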
Good point!
This is a very intriguing line of thought. I'm not sure it makes sense, but it seem worth pondering further.
Scott Alexander
I'm not following your math here, and I'm especially not following the part where, if a person contains as much information as 165 ants and there are 1 quadrillion ants and ~10 billion people, a given unit of information is more likely to end up in a human than in an ant. And since we do believe reincarnation is false, it's much worse than that, since ants have been around longer than humans. Also, I have a philosophical objection to basing it on units of consciousness. If we're to weight the chances of being a certain animal by the number of bits of information they have, doesn't that imply we're working from a theory where "I" am a single bit of information? I'd much sooner say that I am all the information in my head equally, or an algorithm that processes that information, or at least not just a single bit of it.
Oops; that was supposed to say, "I contain as much information as 165 times my body-mass in ants". I'm kinda disappointed that your objection was that the math didn't work, and not that I'm smarter than 165 ants. (I admit they are winning the battle over the kitchen counter. But that's gotta be, like, 2000 ants. Don't sell me short.) If you want to say that you're all the information in your head equally, then you can't ask questions like "What are the odds I would have been an ant?"

Can't I use the same reasoning to prove that non-Americans aren't conscious?

The anthropic principle only provides between 4 and 5 bits of evidence for this theory, not nearly enough to support the complexity of the same brain structures being conscious in Americans but not in non-Americans.

All right, then. I got 33 bits that says everyone except me is unconscious!
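For what it's worth, the bit counts in this exchange can be recomputed directly; the population figures below are rough circa-2010 assumptions:

```python
import math

world_pop = 6.8e9  # rough world population at the time of writing
us_pop = 3.1e8     # rough US population

# Bits of "anthropic surprise" at finding yourself in a subgroup:
# log2 of how much the subgroup narrows down the reference class.
bits_american = math.log2(world_pop / us_pop)
bits_only_me = math.log2(world_pop)

print(round(bits_american, 1))  # ~4.5 bits
print(round(bits_only_me, 1))   # ~32.7 bits
```

So "between 4 and 5 bits" for Americans-only consciousness and "33 bits" for solipsism both check out under these assumptions.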

This is actually a very good point. If the quantum mind hypothesis is false, then either subjective experience doesn't exist at all (which anyone who's reading this post ought to take as an empirically false statement) or solipsism is true and only a single subjective experience exists. 33 bits of info are just not nearly enough to explain how subjective experience is instantiated in billions of complex human brains each slightly different from all others, as opposed to a single brain.
Why's that?
Because "I am my brain" is actually an extremely complex hypothesis; you need to relate all of your inner subjective experience to brain states, action potentials, firing patterns and what not. Since all brains are actually slightly different from one another (at least from a purely physical point of view), the hypothesis that other brains also have subjective experience is untenable due to its sheer complexity.
That's like saying that "there is a prime number greater than 3^^^3" is an extremely complex and therefore untenable hypothesis, because such a number needs to be coprime to all of the natural numbers below it. Every possible way to realize the hypothesis "I am my brain" is extremely unlikely, but there are extremely many ways to realize it. A disjunction of lots of unlikely things need not be unlikely.
No, there aren't. The physical state of your brain is known, and (assuming physicalism/epiphenomenalism/property dualism is true) the physical state must explain everything you might claim about your subjective experience. Either you're a p-zombie and do not actually have subjective experience, or this explanation must be evaluated for simplicity on Occam's razor/Solomonoff induction grounds.
You've managed to confuse me. I suspect, though, that this analogy is relevant: What is the probability that the text between the quotation marks in this paragraph is "Lorem ipsum dolor sit amet, consectetur adipiscing elit. Fusce id velit urna, ac sollicitudin libero. Phasellus ac rutrum nisl. In volutpat scelerisque justo, non congue diam vestibulum sit amet. Donec."? The prior probability of this being true is minuscule, looking something like 10^-60; therefore, you might as well rule it out now. On the other hand, I suspect that we don't actually disagree at all. After all, you seem to be arguing for a position I agree with; I'm simply not sure whether you're arguing correctly or not.
Prior to what, exactly? I do have a prior for "a randomly generated ASCII string of that particular length being the same as the string given". I wouldn't be able to know to use that as the prior unless I had already been given some information. Then there is all the knowledge of human languages and cultural idiosyncrasies I happen to have. Which of those am I allowed to consider? It's hard to tell since, well, you've already given me the answer. It's a bit post for any 'prior' except meta-uncertainty. I would need a specific counterfactual state of knowledge to be able to give a reasonable prior. (All of which I believe supports your point.)
This seems to be a case of extraordinary claims are extraordinary evidence. It's like saying, "well yes, the fact that I have a brain is pretty extraordinary, but so what? I clearly have one". It doesn't distinguish between a Boltzmann brain and a brain arising normally via natural selection. So is your consciousness a Boltzmann consciousness?
I don't see any justification for the connecting "since".
You believe that a single mapping between physical brains and subjective experiences can apply to all humans? What does this mapping look like? How many bits are needed to fully specify it?
Is it reasonable to expect me to necessarily be able to answer that question if materialism is true? (What does the mapping from entangled quantum states to experiences look like?)
It's about as reasonable as demanding an account of how the brain can maintain mesoscopic quantum superpositions long enough to influence neural processes. We don't know, but it's going to be far simpler than a mapping from classical brains. (For all we know, it could be trivial; perhaps each quale is a GENSYM which maps directly to a basis of the quantum system.)
What does that have to do with the quantum mind hypothesis?
That allows you to replace "I am my brain" with "I am a complex quantum state which is instantiated by my brain; and my inner experience maps directly to this quantum state." Other brains have evolved to maintain quantum states in the same way, hence they also have subjective experience.
That doesn't make a difference wrt your argument.
Scott Alexander
Not unless you have a strong reason to privilege the state of being an American as especially interesting. Otherwise, you're in the position Jordan mentioned of just knowing you're in one unexceptional condition out of many. One thing you could say based on your being an American is that you have weak evidence that America is likely to be one of the more populous countries, and strong evidence that there's no country thousands or billions of times more populous than America. Both conclusions are correct. And further, if a Luxembourgian posts a reply here saying "My Luxembourgian citizenship disproves the anthropic principle", that doesn't count, because you're not him and he's self-selected by posting here o_O
So we seem to have concluded that my Irish citizenship disproves the anthropic principle, and I can know this, but you cannot know it :-)
Scott Alexander
As a matter of fact, I live in Ireland (although I'm a US citizen). That coincidence probably disproves some sort of important principle right there. I think you've mentioned before that you live in Dublin; I live in Cork, so sadly we're a little too far to meet up for a chat one night.
It probably does :-) Yeah, a little too far, but let me know if you're going to be in Dublin at any stage, and I'll do likewise if I'm going to be in Cork.

"why am I me, rather than an animal?" is not obviously sillier than "why am I me, rather than a person from the far future?".

Well, quite. Both are absurd.

It still makes more sense to me than quantum mechanics. However, I think that's primarily my own failing to learn the latter.

I'm becoming more skeptical of anthropics every day.

I think that anthropics is a useless distraction, but until I've worked out why it's a useless distraction it still gets in the way of everything.
People don't understand the difference between extreme improbability and actual impossibility. "I observe that I exist, therefore some mysterious 'great filter' will soon wipe out all humanity" is as innumerate a mistake as winning $10^6 with your first-ever lottery ticket and then immediately spending the entire sum on more tickets, because (based on that initial evidence) it's got the biggest, fastest ROI. We have only what we observe, and what we observe is a green world in a mostly empty universe. Perhaps, in gaining that much, we were very, very lucky; perhaps not. Either way, does it change anything? We have only what we observe. Make the most of it.

At one time I wondered, why am I not a particle? The anthropic "explanation" is that particles can't be conscious. But that doesn't remove the prior improbability of my existence in this form. Empirically I know I'm conscious, so being a particle (under the usual assumptions) has a posterior probability of zero. But if I think of myself as a random sample from the set of all entities - and why shouldn't I? - then my apriori probability of having been conscious is vanishingly small. (Unless I change my notion of reality rather radically.)

Let's look at examples where we know the 'right' answer:

Someone flips a coin. If it's heads they copy you a thousand times and put 1 of you in a green room and 999 of you in a red room. If it's tails they do the opposite.

You wake up in a green room and conclude that the coin was likely tails.

Now assume that in addition to copying you 1000 times, 999 of you were randomly selected to have the part of your brain that remembers to apply anthropic reasoning erased. You wake up in a green room and remember to apply the anthropic principle, but, knowing that you ...

I think your intuitions lead you astray at exactly this point. Suppose that the 1000 of you are randomly 'tagged' with distinct id numbers from the set {1,...,1000}, and that a clone learns its id number upon waking. Suppose you wake in a green room and see id number 707. If all the clones remember to apply anthropic reasoning (assuming for argument's sake that my current line of reasoning is 'anthropic') then you can easily work out that the probability of the observed event "number 707 is an anthropic reasoner in a green room" is 1/1000 if coin was heads or 999/1000 if coin was tails. However, if 998 clones have their 'anthropic reasoning' capacity removed then both probabilities are 1/1000, and you should conclude that heads and tails are equally likely.
Are you sure? In the earlier model where memory erasure is random, remembering AR will be an independent event from the room placements and won't tell you anything extra about that.
(Note: I got the numbers slightly wrong - the 1001s should have been 1000s etc.) Yes: If the coin was heads then the probability of event "clone #707 is in a green room" is 1/1000. And since, in this case, the clone in the green room is sure to be an anthropic reasoner, the probability of "clone #707 is an anthropic reasoner in a green room" is still 1/1000. On the other hand, if the coin was tails then the probability of "clone #707 is in a green room" is 999/1000. However, clone #707 also knows that "clone #707 is an AR", and P(#707 is AR | coin was tails and #707 is in a green room) is only 1/999. Therefore, P(#707 is an AR in a green room | coin was tails) is (999/1000) * (1/999) = 1/1000.
But you know that you are AR in the exact same way that you know that you are in a green room. If you're taking P(BeingInGreenRoom|CoinIsHead)=1/1000, then you must equally take P(AR)=P(AR|CoinIsHead)=P(AR|BeingInGreenRoom)=1/1000. Why shouldn't it be 1/1000? The lucky clone who gets to retain AR is picked at random among the entire thousand, not just the ones in the more common type of room.
Doh! Looks like I was reasoning about something I made up myself rather than Jordan's comment.
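Since the random erasure is independent of the room assignments, this exchange can be settled by brute-force enumeration. A sketch scaled down to 10 clones with 2 retained anthropic reasoners, so the state space stays small (the 1000-clone version gives 999/1000 by the same logic):

```python
from fractions import Fraction
from itertools import combinations

N = 10        # clones (scaled down from 1000 for tractability)
AR_KEEP = 2   # clones who retain anthropic reasoning after erasure

def p_evidence(n_green):
    # Enumerate every equally likely (green-room assignment, AR subset)
    # pair and count those where clone 0 is in a green room AND still
    # an anthropic reasoner.
    hits = total = 0
    for greens in combinations(range(N), n_green):
        for ars in combinations(range(N), AR_KEEP):
            total += 1
            if 0 in greens and 0 in ars:
                hits += 1
    return Fraction(hits, total)

p_heads = p_evidence(1)  # heads: 1 of 10 rooms is green
p_tails = p_evidence(9)  # tails: 9 of 10 rooms are green

posterior_tails = p_tails / (p_heads + p_tails)
print(posterior_tails)  # 9/10
```

Conditioning on "I am an anthropic reasoner in a green room" gives the same 9/10 update as conditioning on the room alone, consistent with the point that random erasure is independent of room placement.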
I like this example because it has nice tidy prior probabilities. That's very much lacking in the Doomsday Argument - how do you distribute a prior over a value that has no obvious upper bound? For any finite number of people that will ever live, is there much greater than zero prior probability of that being the number? Even if I can identify something truly special about the reference class "among the first 100 billion people" as opposed to any other mathematically definable group - and thus push down the posterior probabilities of very large numbers of people eventually living - it doesn't seem to push down very far.

Following bogus, I could imagine endorsing a weaker form of the argument: not that it's like nothing to be a bat, but that it's like less to be a bat than to be a human.

In fact, if you've ever wondered why you happen to be the person you are, and not someone else, it may be that the reflectivity you are displaying by asking this question puts you in a more-strongly-anthropically-weighted reference class.

Scott Alexander
Given 10 billion bats, that bats have been around for 50 million years, bat generations of let's say 5 years, and a population that has been stable over evolutionary history, we get a very rough estimate on the order of 10B * (50M/5) = 100 quadrillion historical bats. A lot of anthropic calculations assume there have been 100 billion historical humans, so the probability of being a human is about one-millionth the probability of being a bat. I don't see a whole lot of difference between not having subjective experiences and having one one-millionth the subjective experience of a human. Once we expand this to all animals instead of just bats, the animals come out even worse.
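Spelling out the back-of-envelope arithmetic (every input is the comment's own rough assumption, not measured data):

```python
# All figures are the rough assumptions from the comment above.
bats_alive = 10e9             # ~10 billion bats at any given time
bat_history_years = 50e6      # bats have existed ~50 million years
bat_generation_years = 5      # assumed generation length
historical_humans = 100e9     # common figure in anthropic calculations

historical_bats = bats_alive * (bat_history_years / bat_generation_years)
# historical_bats = 1e17, i.e. 100 quadrillion

ratio = historical_humans / historical_bats
# ratio = 1e-6: being human is about a millionth as likely as being a bat,
# if observers are drawn uniformly from the combined pool.
```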
I'm not sure it follows that a bat has one one-millionth the subjective experience of a human. The problem is that you can't necessarily add a bunch of bat-experiences together to get something equivalent to a human experience; in fact, it seems to me that this sort of additivity only holds when the experiences are coherently connected to each other. (If someone hooked up a million bat-brains into a giant network, then it might make sense to ask "Why am I a human, rather than a million bats"?) So it may be, for instance, that each bat has 10% the subjective experience of a human, but that that extra 90% makes it millions of times more probable that the experiencer will be pondering this question.
Is there a difference between having no subjective experience and having one-millionth the subjective experience of a Tra'bilfin, which are advanced aliens with artificially augmented brains capable of a million times the processing of a current human?
You don't have any issues quantifying over fractions of subjective experience? I haven't begun to have a clear idea what that even means.

and the rest of us can eat veal and foie gras guilt-free.

I don't think this works.

Obama can use the same argument: if he could have been any person, it would be vanishingly unlikely that he'd be the president of the most powerful nation on earth. Thus, clearly, the rest of us (he would conclude) have no conscious experience, and he had better go ahead and be an egoist and run the country in whatever way gives him the most personal gain.

I don't want Obama to do this, so I think I had better not do it either.

Same argument as here: I don't think 33 bits is enough to overcome the complexity penalty of the prior. This is kind of scary, though: if I imagine the emperor of a multi-galactic civilization, eventually the population is large enough. It seems unlikely, though, even discounting speed-of-light issues, that a civilization of that size would be united under one single most powerful person.
The argument still shouldn't work though. Every one of those bits of evidence that you're the only guy around is counterbalanced by a doubling of the negative consequences if you're wrong. So yes, maybe Obama should assume he's probably the only guy on earth, but his actions matter so massively much more in the tiny branch where he's really the most powerful man in a world of billions of thinking living people, that he should still be working to optimize for it.
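That counterbalancing can be made concrete with a toy expected-value calculation (the specific numbers are my own illustration, not the commenter's): 33 bits of "I'm the special one" evidence corresponds to odds of about one in 2^33 ≈ 8.6 billion, the same order as the number of people whose welfare is at stake in the branch where everyone else really is conscious, so the two factors roughly cancel:

```python
import math

population = 8_000_000_000        # people in the "everyone is conscious" branch
bits = math.log2(population)      # ~33 bits, as in the parent comments

# Toy model: each bit of "I'm the special one" evidence halves the
# probability that the other people are conscious...
p_others_conscious = 2 ** -bits   # = 1 / population
# ...but the moral stakes in that branch scale with its population:
stakes = population
expected_weight = p_others_conscious * stakes   # the two factors cancel to ~1
```

So even if the anthropic update is taken at face value, the expected importance of the many-conscious-people branch stays comparable to the solipsist branch, which is the comment's point.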

Considering the vast number of non-human animals compared to humans, the probability of being a human is vanishingly low. Therefore, chances are that if I could be an animal, I would be. This makes a strong anthropic argument that it is impossible for me to be an animal.

Only in the sense that it's impossible for you to be a rock, or a tree, or an alien, or another person, because you clearly aren't any of those things. All this tells you is that you should be nearly 100% certain that you are you, and that's no great insight.

The anthropic principle seems to imply that our subjective experiences take place in amazingly common ancestor simulations that don't simulate animals in sufficient detail to give them subjective experience. That I find myself experiencing being a human rather than being a bat, even though bats are in principle capable of subjective experience, is because there are vastly more detailed simulations of humans than of bats.

You mean, you believe the anthropic principle is justified only if you assume that most people exist in simulations?

The fact that you are human is evidence that only humans are conscious, but it's far from proof. If you have no a priori reason to believe that only humans are conscious, that means it's just as likely that it's only humans as only bats. If the a priori probability of all animals being conscious is only the same as the probability that it's just a given species (I'd say it's much, much larger), and it's impossible for it to just be two species etc., then a posteriori, there would still be a 50:50 chance that all animals are conscious.

Of course, there is an…
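The 50:50 conclusion above follows mechanically from Bayes' rule under the comment's setup. The sketch below makes the hidden assumption explicit: the likelihood of finding yourself a conscious human is taken to be 1 under both surviving hypotheses, i.e. it ignores exactly the anthropic weighting of observer counts that the original post argues over (the species list is illustrative):

```python
# Illustrative species list; "all" = all animals conscious,
# "only_X" = only species X is conscious.
species = ["human", "bat", "bird", "insect"]
hypotheses = ["all"] + [f"only_{s}" for s in species]

# Equal prior on "all animals conscious" and each single-species
# hypothesis, per the comment's stipulation.
prior = {h: 1 / len(hypotheses) for h in hypotheses}

# Evidence: the observer is a conscious human.  This is impossible under
# "only_bat" etc.  The comment implicitly assigns likelihood 1 under both
# "all" and "only_human" (ignoring anthropic weighting of observer counts).
likelihood = {h: 1.0 if h in ("all", "only_human") else 0.0 for h in hypotheses}

unnormalized = {h: prior[h] * likelihood[h] for h in hypotheses}
z = sum(unnormalized.values())
posterior = {h: w / z for h, w in unnormalized.items()}
# posterior["all"] and posterior["only_human"] both come out to 0.5
```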

(Psst: almost all animals are sentient (have senses), you might be thinking of sapient (conscious, having thoughts)).
I thought sentient was having qualia and sapient was intelligent thought. I just checked a few dictionaries (Wikipedia, Dict.org etc.). It looks like my usage is the more common one.
Qualia is a confused concept and doesn't really exist as such, so that may not be the best way to phrase it.
"Qualia" is effectively a name for all those properties which constitute your experience of the world, but which do not exist in the current ontology of natural science (thus we have the spectacle of people on this site needing to talk about "how it feels" to be a brain or a computer program, an additional property instinctively tacked on to the physical description precisely to make up for this lack). This is a problem that has been building in scientific culture for centuries, ever since a distinction between primary and secondary properties was introduced. Mathematical physics raised the description and analysis of the "primary" properties - space, quantity, causality - to a high art, while the "secondary" properties - all of sensation, to begin with, apart from the bare geometric form of things - were put to one side. And there have always been a few people so enraptured by the power of physics and related disciplines that they were prepared to simply deny the existence of the ontological remainder (just as there have been "irrationalists" who were really engaged in affirming the reality of what was being denied). We are now at the stage of figuring out rather detailed correlations between parts and states of the brain, described in material terms, and aspects of conscious experience, as experienced and reported "subjectively" or "in the first person". But a correlation is not yet an identity (and the verifiable correlations are still mostly of the form "X has something to do with Y"). Mostly people are being property dualists without realizing it: they believe their experiences are real, they believe those experiences are identical with brain states, but out of sheer habit they haven't noticed that the two sides of the identity are actually quite different ontologically. Dennett belongs to that minority of materialists, more logically consistent but also more in denial of reality, who really are trying to deny the existence of the secondary properties, now k
Suppose on Tuesday I perceive object O as red. For labeling convenience, I'm going to start referring to my subjective experience of that perception as [red]. In other words, on Tuesday I experience O as [red]. If I've understood you, you claim the [red] is due in part to color qualia in some way associated with O, which are distinct from the set of things happening inside my skull. So, OK, assuming that, some questions. I assume we agree that if I suddenly become color-blind, I might suddenly stop experiencing [red]. Do you assert that in that case the [red]-causing qualia continue to exist, I just stop experiencing them? (I would say something analogous about photons and perception, for example, if I suddenly lose my eyes.) Or do you assert that they stop existing? Or something else? Either way: is that assertion something someone has confirmed in some way, or is it a purely theoretical prediction? I assume we agree that if I suddenly manifest synesthesia -- say, due to a stroke -- I might also start experiencing a honking car horn as [red]. I assume you would therefore say that there must be [red]-causing qualia present, since my brain is unable to construct [red] on its own. Do you assert that the [red]-causing qualia were always present, and I've only just become able to perceive them? Or that they became present when I had the stroke, but not previously? Or something else? Again: is that assertion something confirmed or theoretical?
No. I think that in reality, [red] is in the head. But our current physical ontology contains no such entity. That is why I say that if you accept our current physical ontology, you're either an eliminativist or a dualist.
I'm not in the least bit interested in the labels. But yes, if we're agreed that [red] is constructed by my brain, rather than being a property of my environment, then I don't understand what grounds you have for believing that [red] isn't explicable by entities in our current physical ontology.
Just imagine if you were having a discussion with someone who said that the world is made of numbers. And you picked up a rock and said, so, this rock is made of numbers? And they said, sure. And you said, that's absurd. How could a rock be equal to 1+1, for example? They're completely different kinds of things. And they went off on a riff about how science has shown that all is number, and whenever you tried to point out the non-numerical aspects of reality, they'd just subsume that back into the all-is-number reductionism, and they'd stubbornly insist that, even if the rock was not equal to 1+1, it might be equal to some other numbers, and besides, what other sort of things could there be, besides numbers? For me, the idea that [red] is identical to some arrangement of particles in space is just like saying that 1+1 is a rock. The gulf between the nature of the allegedly identical entities is so great that the problem with the assertion ought to be obvious. In a sprinkling of point objects throughout space, where is the color? It's really that simple. It's just not there. It's not intrinsically there, anyway. You might propose that redness is a property of certain special configurations, but when you say that, you've embarked upon a form of dualism, property dualism. It's a dualism because on the one side, you have properties which are intrinsic to a geometrically defined situation, like distances and angles and shapes; and on the other side, you have properties which are logically independent of the geometric facts and have to be posited separately. For example, the existence of color experiences, or indeed any kind of experiences, in a brain. In other words, the onus is on you to explain just what you think the connection is between arrangements of particles in space (e.g. a brain), and experiences of color. I have my own answer, but I want to hear yours first.
You won't find my answer interesting, but since you asked: I think experiences of color are among the states that particles in space can get into, just as the impulse to blink is a state particles in space can get into, just as a predisposition to generate meaningful English but not German sentences is a state that particles in space can get into, just as an appreciation for 17th-century Romanian literature is a state that particles in space can get into, just as a contagious head cold is a state that particles in space can get into. (Which is not to say that all of those are the same kinds of states.) We can certainly populate our ontologies with additional entities related to those various things if we wish... color qualia and motor-impulse qualia and English qualia and German qualia and 17th-century Romanian literary qualia and contagious head cold qualia and so forth. I have no problem with that in and of itself, if positing these entities is useful for something. But before I choose to do so, I want to understand what use those entities have to offer me. Populating my ontology with useless entities is silly. I understand that this hesitation seems to you absurd, because you believe it ought to seem obvious to me that arrangements of matter simply aren't the kind of thing that can be an experience of color, just like it should seem obvious that numbers aren't the kind of thing that can be a rock, just as it seems obvious to Searle that formal rules aren't the kind of thing that can be an understanding of Chinese, just as it seemed obvious to generations of thinkers that arrangements of matter aren't the kind of thing that can be an infectious living cell. These things aren't, in fact, obvious to me. If you have reasons for believing any of them other than their obviousness, I might find those reasons compelling, but repeated assertions of their obviousness are not.
An arrangement of particles in space can embody a blink reflex with no problems, because blinking is motion, and so it just means they're changing position in space. Generating meaningful sentences - here we begin to run into problems, though not so severe as the problem with color. If the sentences are understood to be physical objects, such as sequences of sound waves or sequences of letter-shapes, then they can fit into physical ontology. We might even be able to specify a formal grammar of allowed sentences, and a combinatorial process which only produces physical sentences from that grammar. But meaning per se, like color, is not a physical property as ordinarily understood. (I know I'll get into extra trouble here, because some people are with me on the color qualia being a problem, but believe that causal theories of reference can reduce meaning to a conjunction of known physical properties. However, so far as I can see, intrinsic meaning is a property only of certain constituents of mental states - the meaning of sentences and all other intersubjective signs is not intrinsic and derives from a shared interpretive code - and the correct ontology of meaning is going to be bound up with the correct ontology of consciousness in general.) Anyway, you say it's not obvious to you that "arrangements of matter simply aren't the kind of thing that can be an experience of color". Okay. Let's suppose there is an arrangement of matter in space which is an experience of color. Maybe it's a trillion particles in a certain arrangement executing a certain type of motion. Now, we can think about progressively simpler arrangements and motions of particles - subtracting one particle at a time from the scenario, if necessary... progressively simpler until we get all the way back to empty space. Somewhere in that conceptual progression, we stop having an experience of color there. Can you give me the faintest, slightest hint of where the magic transition occurs - where we go…
Your remove-an-atom argument also disproves the existence of many other things, such as heaps of sand. Let's try to communicate through intuition pumps: Suppose I built a machine that could perceive the world, and make inferences about the world, and talk. Then of course (or with some significant probability), the things it directly perceives about the world would seem fundamentally, inextricably different from the things it infers about the world. It would insist that the colors of pixels could not consist solely of electrical impulses - they had to be, in addition, the colors of pixels. Stolen from Dennett: You are not aware of your qualia, only of relationships between your qualia. I could swap [red] and [green] in your conscious experience, and I could swap them in your memories of conscious experience, and you wouldn't be able to tell the difference - your behavior would be the same either way. Two meditations on an optical illusion: I heard, possibly on lesswrong, that in illusions like this one: http://www.2dorks.com/gallery/2007/1011-illusions/12-kanizsatriangle.jpg your edge-detecting neurons fire at both the real and the fake edges. 1. Doesn't that image look exactly like what neurons detecting edges between neurons detecting white and neurons detecting white should look like? 2. Doesn't the conflict between a physical universe and conscious experience feel sort of like the conflict between uniform whiteness and edgeness?
My latest comment might clarify a few things. Meanwhile, no-one's telling me that a heap of sand has an "inside". It's a fuzzy concept and the fuzziness doesn't cause any problems because it's just a loose categorization. But the individual consciousness actually exists and is actually distinct from things that aren't it, so in a physical ontology it has to correspond to a hard-edged concept. Consider Cyc. Isn't one of the problems of Cyc that it can't distinguish itself from the world? It can distinguish the Cyc-symbol from other symbols, but only in the same way that it distinguishes any symbol from any other symbol. Any attempt to make it treat the Cyc-symbol really differently requires that the Cyc-symbol gets special treatment on the algorithmic level. In other words, so long as we talk simply about computation, there is nothing at all to inherently make an AI insist that its "experience" can't be made of physical entities. It's just a matter of ontological presuppositions. As I've attempted to clarify in the new comment, my problem is not with subsuming consciousness into physics per se, it is specifically with subsuming consciousness into a particular physical ontology, because that ontology does not contain something as basic as perceived color, either fundamentally or combinatorially. To consider that judgement credible, you must believe that there is an epistemic faculty whereby you can tell that color is actually there. Which leads me to your next remark-- --and so obviously I'm going to object to the assumption that I'm not aware of my qualia. If you performed the swap as described, I wouldn't know that it had occurred, but I'd still know that [red] and [green] are there and are real; and I would be able to tell the difference between an ontology in which they exist, and an ontology in which they don't. A neuron is a glob of trillions of atoms doing inconceivably many things at once. You're focusing on a few of the simple differential sub-perceptions which make…
Degree-of-existence seems likely to be well-defined and useful, and may play a part in, for example, quantum mechanics. However, my new response to your argument is that, if you're not denying current physics, but just ontologically reorganizing it, then you're vulnerable to the same objection. You can declare something to be Ontologically Fundamental, but it will still mathematically be a heap of sand, and you can still physically remove a grain. We're all in the same boat. 1. Do you think Cyc could not be programmed to treat itself differently from others without use of a quantum computer? If not, how can you make inferences about quantum entanglement from facts about our programming? 2. Does Cyc have sensors or something? If it does/did, it seems like it would algorithmically treat raw sensory data as separate from symbols and world-models. Is there anything to inherently prevent it from insisting that? Should we accept our ontological presuppositions at face value? No you wouldn't. People can't tell the difference between ontologies any more than math changes if you print its theorems in a different color. People can tell the difference between different mathematical laws of physics, or different arrangements of stuff within those laws. What you notice is that you have a specific class of gensyms that can't have relations of reduction for other symbols, or something else computational. Facts about ontology are totally orthogonal to facts about things that influence what words you type. My consciousness is a computation based mainly or entirely on regularities the size of a single neuron or bigger, much like the browser I'm typing in is based on regularities the size of a transistor. I wouldn't expect to notice if my images were, really, fundamentally, completely different. I wouldn't expect to notice if something physical happened - the number of ions was cut by a factor of a million and made the opposite charge - but the functions from impulses to impulses…
(part 1 of reply) This is why I said (to TheOtherDave, and not in these exact words) that the mental can be identified with the physical only if there is a one-to-one mapping between exact physical states and exact conscious states. If it is merely a many-to-one mapping - many exact physical states correspond to the same conscious state - then that's property dualism. When you say, later on, that your consciousness "is a computation based mainly or entirely on regularities the size of a single neuron or bigger", that implies dualism or eliminativism, depending on whether you accept that qualia exist. Believe what I quoted, and that qualia exist, and you're a dualist; believe what I quoted, and deny that qualia exist (which amounts to saying that consciousness and the whole world of appearance doesn't really exist, even as appearance), and you're an eliminativist. This is because a many-to-one mapping isn't an identity. "Degrees of existence", by the way, only makes sense insofar as it really means "degrees of something else". Existence, like truth, is absolute. My guess that quantum entanglement matters for conscious cognition is an inference from facts about our phenomenology, not facts about our programming. Because I prefer the monistic alternative to the dualistic one, and because the program Cyc is definitely "based on regularities the size of a transistor", I would normally say that Cyc does not and cannot have thoughts, perceptions, beliefs, or other mental properties at all. All those things require consciousness, consciousness is only a property of a physical ontological unity, the computer running Cyc is a causal aggregate of many physical ontological unities, ergo it only has these mentalistic properties because of the imputations of its users, just as the words in a book only have their meanings by convention. When you introduced your original thought-experiment-- --maybe I should have gone right away to the question of whether these "perceptions"…
Since there's a many-to-one mapping between physical states and temperatures, am I a temperature dualist? Would it be any less dualist to define a one-to-one mapping between physical states of glasses of water and really long strings? (You can assume that I insist that temperature and really long strings are real.) [this point has low relevance] It seems like we can cash out the statement "It appears to X that Y" as a fact about an agent X that builds models of the world which have the property Y. It appears to the brain I am talking to that qualia exist. It appears to the brain that is me that qualia exist. Yet this is not any evidence for the existence of qualia. Degrees of existence come from what is almost certainly a harder philosophical problem about which I am very confused. Facts about your phenomenology are facts about your programming! If you can type them into a computer, they must have a physical cause tracing back through your fingers, up a nerve, and through your brain. There is no rule in science that says that large-scale quantum entanglement makes this behavior more or less likely, so there is no evidence for large-scale quantum entanglement. My point is that the evidence for consciousness, that various humans such as myself and you believe that they are conscious, can be cashed out as a statement about computation, and computation and consciousness are orthogonal, so we have no evidence for consciousness. A: "The universe is made out of nothing but love" B: "What are the properties of ontologically fundamental love?" A: "[The equations that define the standard model of quantum mechanics]" B: "I have no evidence to falsify that theory." A: "Or balloons. It could be balloons." B: "What are the properties of ontologically fundamental balloons?" A: "[the standard model of quantum theory expressed using different equations]" B: "There is no evidence that can discriminate between those theories." I'm a reductive materialist for statements…
The ontological status of temperature can be investigated by examining a simple ontology where it can be defined exactly, like an ideal gas in a box where the "atoms" interact only through perfectly elastic collisions. In such a situation, the momentum of an individual atom is an exact property with causal relevance. We can construct all sorts of exact composite properties by algebraically combining the momenta, e.g. "the square of the momentum of atom A minus the square root of the momentum of atom B", which I'll call property Z. But probably we don't want to say that property Z exists, in the way that the momentum-property does. The facts about property Z are really just arithmetic facts, facts about the numbers which happen to be the momenta of atoms A and B, and the other numbers they give rise to when combined. Property Z isn't playing a causal role in the physics, but the momentum property does. Now, what about temperature? It has an exact definition: the average kinetic energy of an atom. But is it like property Z, or like the property of momentum? I think one has to say it's like property Z - it is a quantitative construct without causal power. It is true that if we know the temperature, we can often make predictions about the gas. But this predictive power appears to arise from logical relations between constructed meta-properties, and not because "temperature" is a physical cause. It's conceptually much closer than property Z to the level of real causes, but when you say that the temperature caused something, it's ultimately always a shorthand for what really happened. When we apply all this to coarse-grained computational states, and their identification with mental states, I actually find myself making, not the argument that I intended (about many-to-one mappings), but another one, an argument against the validity of such an identification, even if it is conceived dualistically. It's the familiar observation that the mental states become epiphenomena…
The central point about the temperature example is that facts about which properties really exist and which are just combinations of others are mostly, if not entirely, epiphenomenal. For instance, we can store the momenta of the particles, or their masses and velocities. There are many invertible functions we could apply to phase space, some of which would keep the calculations simple and some of which would not, but it's very unclear, and for most purposes irrelevant, which is the real one. So when you say that X is/isn't ontologically fundamental, you aren't doing so on the basis of evidence. Causality, if we have the true model of physics, is defined by counterfactuals. If we hold everything else constant and change X to Y, what happens? So if we have a definition of "everything else constant" wrt mental states, we're done. We certainly can construct one wrt temperature (linearly scale the velocities). What are the other conditions? […] is a fact about complex arrangements of quarks. Your ability to communicate your phenomenology traces backwards through a clear causal path through a series of facts, each of which is totally orthogonal to facts about what is ontologically fundamental. Since your phenomenology, you claim, is a fact about what is ontologically fundamental, it stretches my sense of plausibility that your phenomenology and your ability to communicate your phenomenology are causally unrelated. Non-causal ontological structure is suspicious. but it's not connected! Quantum entanglement is totally disconnected from how we are able to think about and talk about it! Either quantum entanglement is disconnected from consciousness, or consciousness is disconnected from thinking and talking about consciousness. In your scenario, you are proposing a 1-to-1 mapping between the properties of ontologically fundamental experiences and standard quantum mechanics.
(part 2) I'll quote myself: "The appeal to quantum entanglement is meant to make possible an explanation of the ontology of mind revealed by phenomenology, it's not meant to explain how we are subsequently able to think about it and talk about it, though of course it all has to be connected." Earlier in this comment, I gave a very vague sketch of a quantum Cartesian theater which interacts with neighboring quantum systems in the brain, at the apex of the causal chains making up the sensorimotor pathways. The fact that we can talk about all this can be explained in that way. The root of this disagreement is your statement that "Facts about your phenomenology are facts about your programming". Perhaps you're used to identifying phenomenology with talk about appearances, but it refers originally to the appearances themselves. My phenomenology is what I experience, not just what I say about it. It's not even just what I think about it; it's clear that the thought "I am seeing [red]" arises in response to a [red] that exists before and apart from the thought. This doesn't mean ontological structure that has no causal relations; it means ontological structure that isn't made of causality. A causal sequence is a structure that is made of causality. But if the individual elements of the sequence have internal structure, it's going to be ontologically non-causal. A data structure might serve as an example of a non-causal structure. So would a spatially extended arrangement of particles. It's a spatial structure, not a causal structure. Could you revisit this point in the light of what I've now said? What sort of disconnection are you talking about? Let's revisit what this branch of the conversation was about. I was arguing that it's possible to make judgements about the truth of a proposed ontology, just on the basis of a description. I had in mind the judgement that there's no [red] in a world of colorless particles in space; reaching that conclusion should not be a problem. But, since…
So divide the particle velocities by temperature or whatever. How do you tell what's redundant complexity and what's ontologically fundamental? Position or momentum model of quantum mechanics, for instance? What bothers me about your viewpoint is that you are solving the problem that, in your view, some things are epiphenomenal by making an epiphenomenal declaration - the statement that they are not epiphenomenal, but rather, fundamental. Is there anything about your or anyone else's actions that provides evidence for this hypothesis? "Genuine" causal relations is much weaker than "ontologically fundamental" relations. Do only pure qualia really exist? Do beliefs, desires, etc. also exist? You can map a set of three quantum states onto a set of {[red], [green], [blue]}. No, it means ontological structure - not structures of things, but the structure of things' ontology - that doesn't say anything about the things themselves, just about their ontology. A logical/probabilistic one. There is no evidence for a correlation between the statements "These beings have large-scale quantum entanglement" and "These beings think and talk about consciousness". You would have to be saying that to be exactly the same as your character. You're contrasting two views here. One thinks the world is made up of nothing but STUFF, which follows the laws of quantum mechanics. The other thinks the world is made up of nothing but STUFF and EXPERIENCES. If you show them a quantum state, and tell the first guy "the stuff is in this arrangement" and the second guy "the stuff is in this arrangement, and the experiences are in that arrangement", they agree exactly on what happens, except that the second guy thinks that some of the things that happen are not stuff, but experiences. That doesn't seem at all suspicious to you? You are correct. "balloons" refers to balloons, not to quarks. I guess what's going on is that the guy is saying that's what he believes balloons are. But thinking about the meaning of…
It's almost a month since we started this discussion, and it's a bit of a struggle to remember what's important and what's incidental. So first, a back-to-basics statement from me. Colors do exist, appearances do exist; that's nonnegotiable. That they do not exist in an ontology of "nothing but particles in space" is also, fundamentally, nonnegotiable. I will engage in debates as to whether this is so, but only because people are so amazingly reluctant to see it, and the implication that their favorite materialistic theories of mind actually involve property dualism, in which color (for example) is tied to a particular structure or behavior of particles in the brain, but can't be identified with it. We aren't like the ancient atomists who only had an informal concept of the world as atoms in a void, we have mathematical theories of physics, so a logical further question is whether these mathematical theories can be interpreted so that some of the entities they posit can be identified with color, with "experiences", and so on. Here I'd say there are two further important facts. First, an experience is a whole and has to be tackled as a whole. Patches of color are just a part of a multi-sensory whole, which in turn is just the sensory aspect of an experience which also has a conceptual element, temporal flow, a cognitive frame locating current events in a larger context, and so on. Any fundamental theory of reality which purports to include consciousness has to include this whole, it can't just talk about atomized sensory qualia. Second, any theory which says that the elementary degrees of freedom in a conscious state correspond to averaged collective physical degrees of freedom will have to involve property dualism. That's because it's a many-to-one mapping (from physical states to conscious states), and a many-to-one mapping can't be an identity. All that is the starting point for my line of thought, which is an attempt to avoid property dualism. I want to have
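The "many-to-one mapping can't be an identity" point above can be made concrete with a tiny sketch. This is my own illustration, not part of the original comment, and the particle velocities are invented numbers: averaging is a many-to-one map, so distinct microstates collapse into the same macrostate, and no inverse exists that could recover the microstate from the average.

```python
# A minimal sketch (my illustration): a "temperature-like" macrostate is an
# average over an exact microstate, so the map is many-to-one.
def macrostate(velocities):
    """Map an exact microstate (list of particle velocities) to its average."""
    return sum(velocities) / len(velocities)

micro_a = [1.0, 3.0]   # one hypothetical microstate
micro_b = [2.0, 2.0]   # a different microstate...

assert micro_a != micro_b
assert macrostate(micro_a) == macrostate(micro_b)  # ...same macrostate
# A function sending distinct inputs to one output has no inverse, so the
# macrostate cannot be *identified* with any one microstate, only correlated
# with a whole class of them.
```

This is exactly why the comment argues that theories identifying experiences with averaged collective degrees of freedom end up as property dualism: identity would require the mapping to be one-to-one.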
I will lay down the main thing convincing me that you're incorrect. Consider the three statements:

1. "there's a large-scale quantum entanglement in the brain"
2. "consciousness is real"
3. "Mitchell Porter says that consciousness is real."

Your inference requires that 1 and 2 are correlated. It is non-negotiable that 2 and 3 are correlated. There is no special feature of the situation that would let 1 be correlated with 2 yet uncorrelated with 3. However, 1 and 3 are both clearly-defined physical statements, and there is no physical mechanism for their correlation. We conclude that 1 and 3 are uncorrelated, and therefore that 1 and 2 are uncorrelated.
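The correlation argument above can be checked with a toy calculation. This sketch is my own illustration, not part of the thread: treat the three statements as binary random variables, let statement 3 simply restate statement 2 (perfect correlation), and observe that any covariance between 1 and 2 must then equal the covariance between 1 and 3 exactly.

```python
from itertools import product

def cov(joint, a, b):
    """Covariance of two binary variables under a joint distribution.
    `joint` maps (v1, v2, v3) outcome tuples to probabilities."""
    e_a = sum(p * v[a] for v, p in joint.items())
    e_b = sum(p * v[b] for v, p in joint.items())
    e_ab = sum(p * v[a] * v[b] for v, p in joint.items())
    return e_ab - e_a * e_b

# Build a joint distribution where variable 3 is a copy of variable 2,
# and variable 1 is correlated with variable 2 (invented toy numbers).
joint = {}
for v1, v2 in product([0, 1], repeat=2):
    p = 0.4 if v1 == v2 else 0.1   # correlation between 1 and 2
    joint[(v1, v2, v2)] = p         # 3 restates 2

# cov(1,3) equals cov(1,2), so if 1 and 3 are uncorrelated, so are 1 and 2.
assert cov(joint, 0, 1) == cov(joint, 0, 2)
```

Under these assumptions, ruling out a 1-3 correlation (on physical grounds) rules out the 1-2 correlation the original inference needs.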
(part 1) Temperature is an average. All individual information about the particles is lost, so you can't invert the mapping from exact microphysical state to thermodynamic state. Most of the invertible functions you mention would reduce to one of a handful of non-redundant functions, obfuscated by redundant complexity. Your model of physics has to have some microscopic or elementary non-counterfactual notion of causation for you to use it to calculate these complex macroscopic counterfactuals. Of course in the real world we have quantum mechanics, not the classical ideal gas we were discussing, and your notion of elementary causality in quantum mechanics will depend on your interpretation. But I do insist there's a difference between an elementary, fundamental, microscopic causal relation and a complicated, fuzzy, macroscopic one. A fundamental causal connection, like the dependence of the infinitesimal time evolution of one basic field on the states of other basic fields, is the real thing. As with "existence", it can be hard to say what "causation" is. But whatever it is, and whether or not we can say something informative about its ontological character, if you're using a physical ontology, such fundamental causal relations are the place in your ontology where causality enters the picture and where it is directly instantiated. Then we have composite causalities - dependencies among macroscopic circumstances, which follow logically from the fundamental causal model, and whose physical realization consists of a long chain of elementary causal connections. Elementary and composite causality do have something in common: in both cases, an initial condition A leads to a final condition B. But there is a difference, and we need some way to talk about it - the difference between the elementary situation, where A leads directly to B, and the composite situation, where A "causes" B because A leads directly to A' which leads directly to A'' ... and eventually this chain arrives at B.
(part 2 of reply) See next section. We are talking at cross-purposes here. I am talking about an ontology which is presented explicitly to my conscious understanding. You seem to be talking about ontologies at the level of code - whatever that corresponds to, in a human being. If someone tells me that the universe is made of nothing but love, and I observe that hate exists and that this falsifies their theory, then I've made a judgement about an ontology both at a logical and an empirical level. That's what I was talking about, when I said that if you swapped red and green, I couldn't detect the swap, but I'd still know empirically that color is real, and I'd still be able to make logical judgements about whether an ontology (like current physical ontology) contains such an entity. Your sentence about gensyms is interesting as a proposition about the computational side of consciousness, but... ... if gensyms only exist on that scale, and if changes like those which you describe make no difference to experience, then you ought to be a dualist, because clearly the experience is not identical to the physical in this scenario. It is instead correlated with certain physical properties at the neuronal scale. They are, but I was actually talking about the difference between colorness/edgeness and neuronness.
A few thoughts in response:

* I agree with you that if my experience of red can't be constructed of matter, then my understanding of a sentence also can't be. And I agree with you that we don't have a reliable account of how to construct such things out of matter, and without such an account we can't rule out the possibility that, as you suggest, such an account is simply not possible. I agree with you that this objection to physicalism has been around for a long time.
* I agree with you that insofar as we understand vitalism to be an account of how particular arrangements of matter move around, it is a different sort of thing from the kind of "sentientism" you are talking about. That said, I think that's a misrepresentation of historical vitalism; I think when the vitalists talked about elan vital being the difference between living and unliving matter, they were also attributing sentience (though not sapience) to elan vital, as well as simple animation.
* I don't equate the experience of red with the tendency to output the word "red" when queried, both in the sense that it's easy for me to imagine being unable to generate that output while continuing to experience red, and in the sense that it's easy for me to imagine a system that outputs the word "red" when queried without having an experience of red. Lexicalization is neither necessary nor sufficient for experience.
* I don't equate the experience of red with categorization... it is easy to imagine categorization without experience. It's harder to imagine experience without categorization, though. Categorization might be necessary, but it certainly isn't sufficient, for experience.
* Like you, I can't come up with a physical account of sentience. I have little faith in the power of my imagination, though. Put another way: it isn't easy for me to see what one can and can't make out of particles. But I agree with you that any such account would be surprising, and that there is a phenomenon there to explain.
Perhaps we are closer to mutual understanding than might have been imagined, then. A crucial point: I wouldn't talk about the mind as something "nonphysical". That's why I said that the problem is with our current physical ontology. The problem is not that we have a model of the world in which events outside our heads are causally connected to events inside our heads via a chain of intermediate events. The problem is that when we try to interpret physics ontologically (and not just operationally), the available frameworks are too sparse and pallid (those are metaphors of course) to produce anything like actual moment-to-moment experience. The dance of particles can produce something isomorphic to sensation and thought, but not identical. Therefore, what we might think of as a dance of particles actually needs to be thought of in some other way. So I'm actually very close in spirit to the reductionist who wants to think of their experience in terms of neurons firing and so forth, except I say it's got to be the other way around. Taken literally, that would mean that we need to learn to think of what we now call neurons firing, as being fundamentally - this - moment-to-moment experience, as is happening to you right now. Except, the physical nature of whole neurons I don't believe plausibly allows such an ontological reinterpretation. If consciousness really is based on mesoscopic-level informational states in neurons, then I'd favor property dualism rather than the reverse monism I just advocated. But I'm going for the existence of a Cartesian theater somewhere in the brain whose physical implementation is based on exact quantum states rather than collective coarse-grained classical ones, quantum states which in our current understanding would look more algebraic than geometric. And the succession of abstract algebraic state transitions in that Cartesian theater is the deracinated mathematical description of what, in reality, is the flow of conscious experience. If
So, getting back to my original question about what your alternate ontology has to offer... If I'm understanding you (which is far from clear), while you are mostly concerned with being ontologically correct rather than operationally useful, you do make a falsifiable neurobiological prediction having something to do with quantum entanglement (though I didn't follow the details). Cool. I approve of falsifiable predictions; they are a useful thing that a way of thinking about the world can offer. Anything else?
I think you ought to be more interested in what this shows about the severity of the problem of consciousness. See my remarks to William Sawin, about color and about many-to-one mappings, and how they lead to a choice between this peculiar quantum monism (which is indeed difficult to understand at first encounter), and property dualism. While I like my own ideas (about quantum monads and so forth), the difficulties associated with the usual approaches to consciousness matter in their own right.
(nods) I understand that you do; I have from the beginning of this exchange been trying to move forward from that bald assertion into a clarification of why I ought to be... that is, what benefits there are to be gained from channeling my interest as you recommend. Put another way: let us suppose you're right that there are aspects of consciousness (e.g., subjective experience/qualia) that cannot be adequately explained by mainstream ontology. Suppose further that tomorrow we encounter an entity (an isolated group of geniuses working productively on the problem, or an alien civilization with a different ontological tradition, or spirit beings from another dimension, or Omega, or whatever) that has worked out an ontology that does adequately explain it, using quantum monads or something else, to roughly the same level of refinement and practical implementation that we have worked out our own. What kinds of things would you expect that entity to be capable of that we are incapable of due to the (posited) inability of our ontology to adequately account for subjective experience? Or, to ask the question a different way: suppose we encounter an entity that claims to have worked out such an ontology, but won't show it to us. What properties ought we look for in that entity that provide evidence that their claim is legitimate? The reason I ask is that you seem to concede that behavior can be entirely accounted for without reference to the missing ontological elements. (I may have misunderstood that, in which case I would appreciate clarification.) So I should not expect them to have a superior understanding of behavior that would manifest in various detectable ways. Nor should I expect them to have a superior understanding of physics. I'm not really sure what I should expect them to have a superior understanding of, though, or what capabilities I should expect such an understanding to entail. Surely there ought to be something, if this branch of knowledge is, as you
I don't consider this inability to merely be posited. It's a matter of understanding what you can and can't do with the ontological ingredients provided. You have particles, you have non-positional properties of individual particles, you have the motions of particles, you have changes in the non-positional properties. You have causal relations. You have sets of these entities; you have causal chains built from them; you have higher-order quantitative and logical facts deriving from the elementary facts about configuration and causal relationships. That's basically all you have to work with. An ontology of fields, dynamical geometry, probabilities adds a few twists to this picture, but nothing that changes it fundamentally. So I'm saying there is nothing in this ontology, either fundamental or composite (in a broad sense of composite), which can be identified with - not just correlated with, but identified with - consciousness and its elements. And color offers the clearest and bluntest proof of this. We can keep going over this fact from different angles, but eventually it comes down to seeing that one thing is indeed different from another. 1 is not 0; redness is not any specific thing that can be found in the ontology of particles. It reduces to pairwise comparative judgments in which ontologically dissimilar basic entities are perceived to indeed be ontologically dissimilar. What are we trying to explain, ultimately? What even gives us something to be explained? It's conscious experience again; the appearance of a world. Our physical theories describe the behavior of a world which is structurally similar to the world of appearance, but which does not have all its properties. We are happy to say that the world of appearance is just causally connected, in a regularity-preserving way, to an external world, and that these problem properties only exist in the "world of appearance". That might permit us to regard the "external world" as explained by our physics. But then we…
I understand that you aren't "merely" positing the inability of a set of particles, positions and energy-states to be an experience. I am. I also understand that you consider this a foolish insistence on my part on rejecting the obvious facts of experience. As I've said several times now, repeatedly belaboring that point isn't going to progress this discussion further.
I find this argument irresistibly compelling, and would appreciate a post or a private message letting me know what your answer is. I don't have one; it's all I can do here to notice that I am confused.
I think you need to be taken outside and shot... ... ...j/k. It's just that over recent years I've spent quite a long time arguing with people educated principally in philosophy, who hate Dennett and think his version of materialism is absurd (or at least that it's manifestly wrong), and think it's absolutely essential to go around saying things like 'all we know about are correlations between body and mind'. It's sort-of interesting/refreshing for me to arrive here, with a bunch of people who are (I assume) educated principally in computer science (with perhaps a few mathies and physicists), who are almost unanimously Dennett fans, think that functionalism is just blindingly obvious, that 'zombies' are blindingly obviously impossible, that it's blindingly obvious that the 'Systems Reply' is correct, that anything we build capable of passing the (full) Turing Test would have to be conscious etc. The ones who don't 'get it' - that at the core of Dennett's view there's the difficult-to-swallow idea that there isn't a 'fact of the matter' as to whether a being is conscious and if so what it's conscious of - can at least fall back on a Greg Egan-style view of consciousness which is identical insofar as it agrees that the issues above are 'blindingly obvious'. (That's the other thing: the people here have actually read Greg Egan - woohoo.) I can see you have more in common with the philosopher-types than the locals. And actually, in your interpretation of Dennett I think there's a mistake - one I've seen elsewhere: You think that in abolishing the 'Cartesian theater' he is ipso facto abolishing phenomenal awareness, but this simply doesn't follow. What he's abolishing is the idea that all of the 'bits' of a person's awareness are present 'together' in a single sharply-defined 'moment', such that there are well-defined answers to questions like "am I seeing a moving dot or a static one?" which would resolve the "Orwellian/Stalinesque" dilemma. Even after the Cartesian theater is abolished…
I'd just come back as a zombie. That sums it up well. Next up, let's consider other startling possibilities, such as: there isn't a fact of the matter as to whether you're reading this sentence, there isn't a fact of the matter as to whether this planet exists, there isn't a fact of the matter as to whether there is a fact of the matter as to whether a being is conscious...
Yeah but come on... you always-a-fact-of-the-matter-ists have some startling things to think about too, like The Exact Moment When You First Became Conscious, and the Infinitely Precise Line one can draw across the phylogenetic tree demarcating species whose members are (or may be) conscious and those which never are. (Afterthought: Or are you some kind of panpsychist? Then your startling possibilities include the minds of rocks...)
See, it's not so hard! You just have to take the idea seriously, and stick with it. You might even have a talent for this. And here I was thinking that my labor here was in vain.
I believe Eliezer doesn't agree with that last one, and has talked about building an AI who isn't conscious. Also, consider the following hypothetical: I get really drunk and/or take Ambien and black out at 2 am. I have no conscious experience or memory of the time between 2 am and 3 am, but during that time you have a (loud and drunken) conversation with me. Or maybe in my drunken state I sit at my computer and manage to instant message without being conscious of it, and the person at the other end is convinced I'm human and not a computer program. Counterexample?
Well, I think we can all agree that it's possible for a non-conscious person (or program or whatever) to be mistaken for a conscious being. However, there are several objections I can make to this scenario being considered a counterexample: (1) How do you know you're not conscious? Just because you don't remember it the next day doesn't mean you don't have any awareness at the time. (2) In the Turing test the judge is supposed to be 'on the look-out' for which of its two subjects seems less able to respond adequately to their questions. And one of the subjects is presumed to be a healthy, sober human. So unless you think the judge would be unable to distinguish a drunken, unconscious conversation from a normal, sober one, you would presumably fail the Turing test.
Suppose I write a computer program (such as Second Life or World of Warcraft) that simulates the properties of an imaginary reality. Have I now created new "subjective secondary properties"? After all, in the real world, objects do not have owners and copyability, nor levels of mana or hit points. Is this "duality", then? What about a book that describes an imaginary world? Is it duality because there are only words on the page, and these have no physical correlate to the things described? The reasoning that you're using is an application of the mind projection fallacy. Human brains have built-in pattern recognition for seeing things as "minds", and having volition -- and this notion is itself an example of an imaginary property projected onto reality. The projection doesn't make the projected quality exist in outside reality; it merely exists in the computational model physically represented in the mind that makes the projection. tl;dr version: imaginary attributions in a model do not create duality, or else computer programs have qualia equal to those of humans. Since no mysterious duality is required to create computer programs, we need not hypothesize that such is required to create human subjective experience.
(My emphases.) You seem to be contradicting yourself there. The mind only exists in the mind?
The intuitive notion of "mind" exists only in the physical manifestation of the mind. Or to put it (perhaps) more clearly: the only reason we think dualism exists is because our (non-dual) brains tell us so. Like beauty, it's in the eye of the beholder. Our judgment of whether something is intelligent or sentient is based on an opaque weighing of various sensory criteria, that tell us whether something is likely to have intentions of its own. We start out as children thinking that almost everything has this intentional quality, and gradually learn the things that don't. It's as if brains have a built-in (at or near birth) "mind detector" circuit that triggers for some things, and not others, and which can be trained to cease seeing certain things as minds. What it doesn't do, is ever fire for something whose motions and innards are fully understood as mechanical - so it doesn't matter how sophisticated AI ever gets, there will still be people who will insist it's neither conscious nor intelligent, simply because their built-in "mind detector" doesn't fire when they look at it. And that's what people are doing when they claim special status for consciousness and qualia: elevating their genetically-biased intuition into the realm of physical law, not unlike people who insist there must be a soul that lives after death... because their "mind detector" refuses to cease firing when someone dies. In short, this intuitive notion of mind gets in the way of developing actual artificial intelligence, and it leads to enormous wastes of time in discussions of dualism. Without the mind detector -- or if the operation of our mind detectors were fully transparent to the rest of our mental processes -- nobody would waste much time on the idea that there's anything non-physical. We'd only get as far as realizing that if there were non-physical things, we'd have no way to know about them. However, since we do have an opaque mind-detector, that's capable of firing for the wind
I would have said, "A bit like philosophers of free will who say that they feel like they could have done something else, and therefore determinism must be false". (:
I upvoted you back to 0 because your comment was thoughtful and well-written, even though I disagree. Yes, I'm in Dennett's camp. Aside from what other commenters have said, think about it like this: I have a novel here. It's made of the letters A-Z as well as punctuation, arranged in a complicated pattern. But, somehow, the novel also talks about a plot and characters and a setting and so forth, even though all there is to the novel is letters and punctuation. The plot and characters don't have some magical separate state of existence: they exist because they're built out of the letters. Same with conscious experience. Right now I'm eating goat cheese and crackers. This experience arises out of the neurons in my brain, and it's intimately tied up with them and the patterns they make. You can't separate it from my past experience and associations and memories (which is Dennett's point about qualia). Of course the experience exists: it's just built out of and associated with a complex pattern of neuron firings in my brain. The experience is not the same as the series of neurons: that would be a category error, just like a character in a book is not the same as the series of letters that make up his description. No property dualism needed. Of course it's difficult to explain this association, because we don't know enough about brain chemistry.
Me too. Me too. I think it's a good illustration, but I can give you 'the standard reply' from the anti-materialist: As a physical object, the novel is just a hunk of matter with funny shaped ink blotches on it. The 'plot' and 'characters' you speak of have a mental character to them: they don't exist outside of some mind apprehending the novel, a mind which actively 'constructs' these things rather than passively 'finding them' somewhere in the matter of the book. So book --> plot is not after all an analogy that helps us understand how a mind can reduce to a pattern of physical matter, because "plot" already presupposes the mind, so any "reduction" would presuppose that the mind is itself reducible. Yeah, I know this is all wrong - but I've learned to make myself "flip" between a materialist and anti-materialist view.
Hmm. Maybe a better analogy is three stones in a field making a triangle. The triangle exists and is formed by the stones, but this doesn't require dualism, just an understanding that relationships and structures exist and are built out of smaller parts. (I know, that's not exact either.)
Or three quarks making a proton.
Earlier you wrote The ontological ingredients, and the ways of combining them, which physics gives you are quite limited. You can make shapes (like your triangle), you can count objects, you can consider their motions and other changes of state, you can average quantitative properties, you can consider causal dependence and counterfactual situations. There might be a few other things you can do. But if you are going to have a mind-brain identity theory, and not property dualism, then something built solely using methods like the ones I just listed has to be the experience. It can't just be "associated with" the experience - that would be dualism. Color is usually mentioned at this point, because it is pretty obvious that no amount of piling up particles, averaging their properties, and engaging in causal and counterfactual analysis, is going to give you redness where there was none, in the simple way that putting three stones in a field really does give you a triangle. If someone proposes that the experience of a certain shade of red is some complicated but purely physical predicate, object, or condition, then from the perspective of orthodox physical ontology, they are proposing a form of strong emergence. (Weak emergence is like the triangle.) And strong emergence is property dualism - it introduces new ontological ingredients. But although color is the standard counterargument - because of its vividness - any sensation, any thought, anything involving a self, anything like the "experience of an object", is just as much unlike anything that can be made from physics in a weakly emergent way. I challenge you to find a single aspect of your experience which you can unproblematically identify with (and not just associate with) some imagined neurochemical correlate. In every case, you will be taking some subjectively manifest reality, and then saying to yourself, "that is really just neurons doing something"; and in every case, physics alone gives you absolutely no
Three rocks in a field aren't a triangle until there's a brain with a concept of 'triangle' that identifies them as such. Photons of a particular wavelength aren't red until there's a brain with a concept of 'red' that identifies them as such. A creature isn't conscious until there's a brain with a concept of 'consciousness' that identifies it as such. Third one's tricky because of the self-reference, but that doesn't make it an exception to the general rule. Concepts are predictive models, a model can't make predictions unless it's running on a computer, brains are the one kind of computer that can be mass produced by unskilled labor. Qualia, to the extent that they can be coherently defined at all, are a matter of software. Software can be translated between hardware platforms, but cannot exist in any useful form in the absence of hardware. And, for the record, the math necessary to fully define a rock is a hell of a lot more complicated than "1+1." Don't dismiss it until you've properly studied it.
It's not just tricky, it's self-contradictory. The mind exists only in the mind, you say? If you really want to try reducing all of this to physics, I'd recommend that you first deliberately try to dispense with terms which have a technological or user-semantic connotation, because no such thing exists in physical ontology. "Computer" and "software" are being used as metaphors here, and a "model" is an intentional concept. Computer science has the concept of a "state machine", which is a little better from a physical standpoint, because it doesn't attach any semantics to the "states". OK, fine, you can do such a translation, and you get e.g. qualia are equivalence classes of state machines. At least your claim has now truly been expressed in terms that do not implicitly exceed physical ontology. But it's still a wrong claim, because it says nothing about the properties that really define qualia, like the "redness" that we've been talking about in another thread. I don't study rocks, but I study physics every day. I know the mathematics is complicated. What I'm saying is that physics is not mathematics.
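The "equivalence classes of state machines" translation can be made concrete with a toy sketch. This is my own illustration, not from the thread, and the state names, inputs, and outputs are all invented: two machines whose internal state labels differ but whose input/output behavior is identical fall into the same equivalence class, which is exactly why a purely state-machine criterion says nothing about what the states themselves are.

```python
# A toy sketch (my illustration): two Moore-style machines with different
# internal state labels but identical observable behavior.
def run(machine, start, inputs):
    """Drive a machine: `machine` maps (state, input) -> (next_state, output)."""
    state, outputs = start, []
    for i in inputs:
        state, out = machine[(state, i)]
        outputs.append(out)
    return outputs

# Machine A uses states 'S0'/'S1'; machine B relabels them 'Q0'/'Q1'.
A = {('S0', 'ping'): ('S1', 'saw red'), ('S1', 'ping'): ('S0', 'saw green')}
B = {('Q0', 'ping'): ('Q1', 'saw red'), ('Q1', 'ping'): ('Q0', 'saw green')}

inputs = ['ping'] * 4
assert run(A, 'S0', inputs) == run(B, 'Q0', inputs)  # behaviorally identical
# The equivalence class abstracts away the state labels entirely - which is
# the comment's complaint: nothing in it pins down intrinsic properties.
```

The sketch cuts both ways: it shows the functionalist criterion is well-defined in physical/computational terms, and it shows why the commenter objects that such a criterion is silent about the intrinsic character of the states.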
So we can set up state machines that behave like people talking about qualia the way you do, and which do so because they have the same internal causal structure as people. Yet that causal structure doesn't have anything to do with the referent of 'redness'. It looks like your obvious premise that redness isn't reducible implies epiphenomenalism. Which is absurd, obviously. Edit: Wow, you (nearly) bite the bullet in this comment! You say: I claim that mental states can be regarded as causes, that they are indeed a shorthand for immensely complicated physical details (and significantly less but still quite a lot complicated computational details), and claim further that they cause a lot of things. For instance, they're a cause of this comment. I claim that the word 'cause' can apply to more than relationships between fundamental particles: for instance, an increase in the central bank interest rate causes a fall in inflation. So, which do you disagree with: that interest rates are causal influences on inflation, or that interest rates and inflation are shorthand for complicated physical details?
No, it just means that redness plays a causal role in us, which would be played by something else in a simulation of us. There's nothing paradoxical about the idea of an unconscious simulation of consciousness. It might be an ominous or a disconcerting idea, but there's no contradiction. See what I just said to William Sawin about fundamental versus derived causality. These are derived causal relations; really, they are regularities which follow indirectly from large numbers of genuine causal relations. My eccentricity lies in proposing a model where mental states can be fundamental causes and not just derived causes, because the conscious mind is a single fundamental entity - a complex one, that in current language we might call an entangled quantum system in an algebraically very distinctive state, but still a single entity, in a way that a pile of unentangled atoms would not be. Being a single entity means that it can enter directly into whatever fundamental causal relations are responsible for physical dynamics. Being that entity, from the inside, means having the sensations, thoughts, and desires that you do have; described mathematically, that will mean that you are an entity in a particular complicated, formally specified state; and physically, the immediate interactions of that entity would be with neighboring parts of the brain. These interactions cause the qualia, and they convey the "will". That may sound strange, but even if you believe in a mind that is material but non-fundamental, it still has to work like that or else it is causally irrelevant. So when you judge the idea, remember to check whether you're rejecting it for weirdness that your own beliefs already implicitly carry.
So you're taking the existing causal graph, drawing a box around all the interactions that happen inside a brain, and saying that everything inside the box counts as one thing. That's not simplification, it's just bad accountancy.
Where else would it be? I'm saying that a brain is an environment where ideas can do interesting things (like reproducing themselves, mutating, splitting and recombining) comparable to the interesting things that started happening a very long time ago between amino acids and phospholipid membranes and assorted other organic chemicals which eventually resulted in the formation of brains. Any Turing-complete computer is also a sort of environment for ideas.

An idea outside an environment capable of supporting it does not do interesting things. It might be dormant, like a virus or bacterial spore, and colonize any less-hostile environment to which it's introduced. It might not. As yet, the only reliable way to distinguish between a dormant idea and a different arrangement of the same parts which does not constitute a dormant idea is to find an environment in which it will do interesting things.

For example, if you find a piece of baked clay with some scratch-marks in it, and want to know if they're cuneiform or just random scratches, you could show it to an archaeologist. The archaeologist looks at the tablet and compares it to prior knowledge about cuneiform - that is to say, transfers information about shape and coloration into her brain via the optic nerve and, once inside, drops them into the informational equivalent of a dish of agar. If anything interesting pops up, it's an idea. If not, either it's just noise, or it's an idea that the archaeologist can't figure out. There's no way to definitively prove the absence of potential ideas in a given information-bearing substrate.

If these disembodied qualia-properties don't help you make any actionable predictions beyond what physicalism could do, and their presence is unfalsifiable, I can't see any point to this debate. Is it a social-signaling contest of some sort?
Let's go back to your original statement: OK, so according to you, we have concepts existing before and independently of consciousness, and we also have that consciousness is not a property that is objectively present (or else there'd be no need to appeal to the conceptual judgement of a brain, as a necessary cause of consciousness's existence). Both of these have to be true if you are to avoid circularity.

The second one already falsifies your account of consciousness. The difference between being conscious and not being conscious is not a matter of convention. It's an internal fact about you which is not affected by whether I am around to express opinions. It sounds like you want the consciousness of a brain to depend on the conceptual judgements of that same brain, which is at least less abjectly dependent on the epistemology of outsiders. But it's still false. If you are conscious, you are conscious regardless of whatever opinions or concepts you have. Your conceptual capacities limit your possible conscious experience, in the sense that you can't consciously identify something as an X if you don't have the concept X, but whether or not you're conscious doesn't depend on how you are using (or misusing) your conceptual faculties at any time.

Just to clarify, by consciousness I mean awareness in all forms, not just self-awareness. What I said still applies to self-awareness as well as to awareness in general, but I thought I would make explicit that I'm not just talking about the sense of being a self. Even raw, self-oblivious sensory experience is a form of consciousness.

Maybe my very latest comments will clear things up a little. The immediate problem with physicalism is that reality contains qualia and physicalism doesn't. In a reformed physicalism that does contain qualia, they would have causal power.
Ah, so we're arguing over definitions.

Let's say you take an organism capable of receiving and interpreting information in the form of light, such as e.g. a ferret with working eyes and a visual cortex. Duplicate it with arbitrary precision, keep one of the copies in a totally lightless box for a few minutes and shine a dazzling but nondamaging spotlight on the other for the same period of time. Then open the box, shut off the spotlight, and show them both a picture. The ferret from the box would see blindingly intense light, gradually fading in to the picture, which would seem bright and vivid. The ferret from the spotlight would see near-total darkness, gradually fading in to the picture, which would seem dull and blurry. Same picture, very different subjective experience, but it's all the result of physiological (mostly neurological) processes that can be adequately explained by physicalism.

Does the theory of qualia make independently-verifiable predictions that physicalism cannot? Or, if the predictions are the same, is it somehow simpler to describe mathematically? In the absence of either of those conditions, I am forced to consider the theory of qualia needlessly complex.
What exactly do you take the purpose of an ontology to be? If you have a scientific theory whose predictions hit the limit of accuracy for predicted experience why do you need anything in your ontology beyond the bound variables of the theory?
An ontology is a theory about what's there. The attributes of experience itself, like color, meaning, and even time, have been swept under a carpet variously labeled "mind", "consciousness", or "appearance", while the interior decorators from Hard Science Inc. (formerly trading as the Natural Philosophy Company) did their work. We have lots of streamlined futuristic fittings now, some of them very elegant. But they didn't get rid of the big lump under the carpet. The most they can do is hide it from view.
We don't have access to "what is there". What we have are sensory experiences. Lots of them! Something is generating those experiences and we would like to know what we will experience in the future. So we guess at the interior structure of the experience generator and build models that predict for us what our future experiences will be. When our experiences differ from those expected we revise the model (i.e. our ontology). This includes modeling the thing that we are, which improves our predictions of our own experiences and our experiences of what other humans say are their experiences.

One thing humans report is the experience of seeing color. So we need to explain that. One thing humans report is the experience of self-awareness, so we have to explain that, etc.

You seem to want to reify the sensory experiences themselves just because they look different in our model than in our experience. But the model isn't supposed to look like our experience; it is supposed to predict it. You're making a category error. Presumably you know this and think the problem is the categories. But you need to motivate your rejection of the categories. All I want are predictions and I've been getting them, so why should I reject this model?

But lots of scientists study these things! Last semester I learned all about auditory and visual perception. There is a lot we don't know, which is why they're still working on it.
So we know that whatever is there must include those sensory experiences. They themselves are part of reality. Most models of reality are partial models that implicitly presuppose some untheorized notion of experience in the model-user. Medicine and engineering aren't especially focused on the fact that doctors and engineers encounter the world, like everyone else, through the medium of conscious experience. But there are two types of explanatory enterprise where conscious experience does become explicitly relevant. One is any theory of everything. The other is any science which does take experience as its subject matter. In the latter case, scientists will explicitly theorize about the nature of experience and its relationship to other things. In the former case, a theory of everything must take a stand on everything, including consciousness, even if only to say "it's made of atoms, like everything else". So some part of these models is supposed to look like experience. However, as I have been saying elsewhere, nothing in physical ontology looks like an experience; and the sciences of consciousness so far just construct correlations between "physics" (i.e. matter) and experience. But they must eventually address the question of what an experience is.
Nice essay! I'm not yet won over by the suggestion in your final paragraph, but it's intriguing.
Phil writes, "Nice essay!" Is there something in Mitchell's essay (comment) that Mitchell has not already said on this site 30 times or did you just like the way he phrased it this time?

Considering the vast number of non-human animals compared to humans, the probability of being a human is vanishingly low. Therefore, chances are that if I could be an animal, I would be.

I do not really think you need an anthropic argument to prove that "you" couldn't be an animal; it is more a matter of definition, i.e. by definition you are not an animal. For example, there is no anthropic reason that "I" couldn't have been raised in Alabama, but what would it even mean to say that I could have been raised in Alabama? That somebody ... (read more)

Considering the vast number of non-human animals compared to humans, the probability of being a human is vanishingly low. Therefore, chances are that if I could be an animal, I would be. This makes a strong anthropic argument that it is impossible for me to be an animal.

You assume that you have equal probability of being any conscious being. The internal subjective experience of humans stands out in its complexity; perhaps more complex subjective experiences have higher weight for some reason.


Anthropic reasoning is what leads people to believe in miracles. Rare events have a high probability of occurring if the number of observations is large enough. But whoever that rare event happens to will feel like it couldn't have just happened by chance, because the odds of it happening to them were so small.

If you wait until the event occurs, and then start treating it as a random event from a single trial, forming your hypothesis after seeing the data, you'll make inferential errors.

Imagine that there are balls in an urn, labeled with numbers 1, 2,.... (read more)

What you have labeled anthropic reasoning is actually straight-up Bayesian reasoning. Wikipedia has an article on the problem, but only discusses the Bayesian approach briefly and with no depth. Jaynes also talks about it early in PT:LOS. In any event, to see the logic of the math, just write down the likelihood function and any reasonable prior.
I suggest reading Radford Neal.
Yes, I've read that paper, and disagree with much of it. Perhaps I'll take the time to explain my reasoning sometime soon.

The phrase "for me to be an animal" may sound nonsensical, but "why am I me, rather than an animal?" is not obviously sillier than "why am I me, rather than a person from the far future?".

Agreed - they are both equally silly. The only answer I can think of is 'How do you know you are not?" If you had, in fact, been turned into an animal, and an animal into you, what differences would you expect to see in the world?

When I looked down, I'd see fur or something instead of my manly abs.

Bats, with sensory systems so completely different from those of humans, must have exotic bat qualia that we could never imagine. (...) ...we still have no idea what it's like to feel a subjective echolocation quale.

(Excuse me for being off topic)

Reductionism is true; if we really know everything about a bat brain, bat quale would be included in the package. Imagine a posthuman that is able to model a bat's brain and sensory modalities on a neural level, in its own mind. There is no way it'd find anything missing about the bat; there is no way it'd comp... (read more)

"Very bad" compared to what? We are brilliant at modelling minds relative to our ability for abstract reasoning, mathematics and, say, repeating a list of 8 items we were just told in reverse order.
Trying to imagine neurons and simulating firings by doing mental arithmetic still seems to be far-fetched, which is the kind of modeling I meant.

Considering the vast number of non-elephant animals in the world, the probability of being an elephant is extremely low.

[Edited, because it was wrong.]

The doomsday argument is,

O(X) = I, a random human, observe some condition already satisfied for X humans

pt(X) = P(there will be X humans total over the course of time)

pt(2X | O(X/2)) / pt(2X) < pt(X | O(X/2)) / pt(X)

This is true if your observation O(X) is, "X people lived before I was born", or, "There are X other people alive in my lifetime".

But if your observation O(X) is "I am the Xth human", then you get

pt(2X | O(X/2)) / pt(2X) = pt(X | O(X/2)) / pt(X)

and the Doomsday argument fails.

So which definition of O(X) is the right observation to use?
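The update with the first kind of observation ("X people lived before I was born") can be made concrete with a toy Bayesian calculation. A minimal sketch, assuming a uniform birth rank under each hypothesis; the 200-billion and 200-trillion totals are illustrative assumptions, not figures from the argument above:

```python
# Toy doomsday-style update. Two hypotheses about the total number of
# humans who will ever live, with a uniform prior between them. Under
# self-sampling, my birth rank is uniform on 1..N given total N.

def posterior_doom(rank, n_small=200e9, n_large=200e12, prior_small=0.5):
    """Posterior probability of the 'small total' (doom-soon) hypothesis,
    given the observation that my birth rank is `rank`."""
    # Likelihood of observing birth rank `rank` if the total is N:
    # 1/N provided rank <= N, impossible otherwise.
    like_small = (1.0 / n_small) if rank <= n_small else 0.0
    like_large = (1.0 / n_large) if rank <= n_large else 0.0
    num = like_small * prior_small
    return num / (num + like_large * (1.0 - prior_small))

# A rank consistent with both hypotheses favors the smaller total
# by the ratio of the likelihoods (here 1000:1).
p = posterior_doom(100e9)  # ~0.999
```

Under these assumed numbers the small-total hypothesis absorbs almost all of the posterior, which is the doomsday argument in miniature; the dispute in this comment is precisely over which observation the likelihood should be computed for.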

The anthropic principle applies only in the absence of additional information. For example, if sentient life exists elsewhere in the universe, your odds of being a human are vanishingly small. This would suggest sentient life does not exist elsewhere in the universe. However, given that there appears to be nothing so special about Earth that it wouldn't reoccur many times among trillions and trillions of stars, we can still conclude that sentient life does likely exist elsewhere in the universe.

Similarly, in this context, the fact that animals have brains that are r... (read more)

That's not how the anthropic principle works. The anthropic principle lets you compute the posterior probability of some value V of the world, given an observable W. The observable can be the number of humans who have lived so far, and the value V can be the number of humans who will ever live. The probability of a V where 100W < V is smaller than the probability of a V only a few times larger than W. It's unclear if you get to count transhumans and AIs in V, which is the same problem Yvain is raising here about whether to include bats and ants in the distribution.

You can't conclude that there aren't other planets with life because you ended up here, because the probability of different values of V doesn't depend on the observable W. There's no obvious reason why P(there are 9999 other planets with life | I'm on this planet here with life) / P(there are 9999 other planets with life) would be different than P(there are 0 other planets with life | I'm on this planet with life) / P(there are 0 other planets with life). (I divided by the priors to show that the anthropic principle takes effect only in the conditional probability; having a different prior probability is not an anthropic effect.)

Disclaimer: I'm a little drunk. I'm troubled now that this formulation doesn't seem to work, because it relies on saying "P(fraction of all humans who have lived so far is < X)". It doesn't work if you replace the "<" with an "=". But the observable has an "=".

BTW, outside transhumanist circles, the anthropic principle is usually used to justify having a universe fine-tuned for life, not to figure out where you stand in time, or whether life will go extinct.
This argument could have been made by any intelligent being, at any point in history, and up to 1500AD or so we have strong evidence that it was wrong every time. If this is the main use of the anthropic argument, then I think we have to conclude that the anthropic argument is wrong and useless. I would be interested in hearing examples of applications of the anthropic argument which are not vulnerable to the "depending on your reference class you get results that are either completely bogus or, in the best case, unverifiable" counterargument. (I don't mean to pick on you specifically; lots of commentors seem to have made the above claim, and yours was simply the most well-explained.)
First, "the anthropic argument" usually refers to the argument that the universe has physical constants and other initial conditions favorable to life, because if it didn't, we wouldn't be here arguing about it. Second, what you say is true, but someone making the argument already knows this. The anthropic argument says that "people before 1500AD" is clearly not a random sample, but "you, the person now conscious" is a random sample drawn from all of history, although a sample of very small size. You can dismiss anthropic reasoning along those lines for having too small a sample size, without dismissing the anthropic argument.
Thank you for saying this. I agree. Since at least the time I made this comment, I have tentatively concluded that anthropic reasoning is useless (i.e. necessarily uninformative), and am looking for a counterexample.
Best time to do anthropic reasoning. Save the sane reasoning for when you're sober! ;)
True, assuming sentient life is common enough. Not true. This is like saying that if you roll a million sided die and get 362,853 then the die must have been fixed because the chance of getting 362,853 is 1-in-a-million!
Were that appropriate, the same mechanism would also defeat the reasoning in this post. While I agree with your ultimate conclusion, using solely the anthropic principle and no additional information, I believe you are compelled to conclude extraterrestrial life does not exist.
I disagree. There is a natural category (sentience, reflectivity, etc.) that picks out humans over other Earthly animals and leads to a more-than-max-entropy prior for humans being more anthropically special*; this is not the case for either 362,853 or Earth. * If you accept anthropic reasoning at all, that is. I'm sort of playing devil's advocate in this comment; this post mostly just pushes me further towards biting the bullet of UDT/collapsing epistemology to decision theory.
With an acknowledgement that on topics of this difficulty I don't expect to be right a supermajority of the time, I have to disagree both on what "I am human" tells me about other beings and on what extra information tells me. Given no additional information, noticing that I am a human increases the probability that there is sentient life elsewhere in the universe (it at least shows that sentient life is possible). It is a mistake to draw any conclusions from p(a randomly chosen sentient being is a human | there are sentient beings elsewhere in the universe). If both you and aliens exist then you and aliens exist. Knowing that you happen to be you instead of an alien isn't particularly significant.

As for extra information... well, the fact that we can't see any evidence of interstellar civilisations eating stars or otherwise messing up the place does provide weak-to-moderate evidence that intelligent life is hard to come by, depending on how likely it is for intelligent life to progress that far. In that case anthropic reasoning would help explain how we could come to exist given that life was improbable. We would be an unimaginably improbable freak and all other similarly improbable freaks would be off in other Everett branches.
Assume three possible worlds, for simplicity:

A: 1 billion humans. No ETs.

B: 1 billion humans, 1 million ETs.

C: 1 billion humans, 1 billion billion billion ETs.

If I am using the anthropic principle and the observation that I am human, these together provide very strong evidence that we are in either world A or world B, with a slightly stronger nudge towards world A. Where we end up after this observation depends on our priors.

I agree fully that making additional inferences, such as noting that our own existence raises the probability of other sentient beings, or that the sheer size of the universe lowers the odds of being alone, affects the end probability. The inference I described may be unduly restricted, but that is my exact point. The original post made an anthropic inference in isolation - it simply used the fact that there are more animals than humans, and the author is a human, to infer that animals do not have experiences. The form of the argument would not have changed significantly were it used to argue that rocks lack experience. Thus, while the argument is legitimate, it is easily overwhelmed by additional evidence, such as the fact that humans and animals have somewhat similar brains. That was my point: the anthropic principle is easily swamped by additional evidence (as in the ET issue) and so is being overextended here.
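The A/B/C comparison above can be run through Bayes' rule directly. A quick sketch, assuming a uniform prior over the three worlds and treating "I am human" as a uniform draw from each world's sentient beings (both assumptions are mine, for illustration):

```python
# Worlds from the comment above, as (humans, ETs) population counts.
WORLDS = {
    "A": (1e9, 0.0),    # no ETs
    "B": (1e9, 1e6),    # a million ETs
    "C": (1e9, 1e27),   # a billion billion billion ETs
}

def posterior_given_human(worlds):
    """Posterior over worlds after observing 'I am human', uniform prior.
    Likelihood of being human in world w = humans / (humans + ETs)."""
    likes = {w: h / (h + e) for w, (h, e) in worlds.items()}
    z = sum(likes.values())
    return {w: like / z for w, like in likes.items()}

post = posterior_given_human(WORLDS)
# A and B split almost all the posterior, with A slightly ahead;
# C is suppressed by a factor of roughly 10^18.
```

This reproduces the point being made: the observation crushes world C but barely separates A from B, so the final answer between those two is dominated by whatever prior or additional evidence distinguishes them.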

You're saying, "I rolled a die. The die came up 1. Therefore, this die probably has a small number of sides."

But "human" is just "what we are". Humans are not "species number 1". So your logic is really like saying, "I rolled a die. The die landed with some symbol on top. Therefore, it probably has a small number of sides."

If the die is small enough for you to hold in one hand, and the symbol covers only one side yet is large enough to easily read with typical human visual acuity, then based on the laws of geometry it would be safe to assume that the die has fewer than about 100 sides, yeah.
I think this part of the analogy equates to our ability to observe the rest of the universe over billion-year time frames and its apparent lack of alien life forms. The Doomsday argument is part observational, after all.
If the various species of ET are such that no particular species makes up the bulk of sentient life, then there's no reason to be surprised at belonging to one species rather than another. You had to be some species, and human is just as likely as klingon or wookie.
And here is where we are in simple disagreement. I say that knowing that I am human tells me very little about the configuration of matter in a different galaxy. Things that it does not tell me include, but are not limited to, "is the matter arranged in the form of a childlike humanoid, maybe green or grey. Probably with a big head and that can do complex thinking?" I claim (and, again, it is a complex topic so I wouldn't bet on myself at odds of more than one gives you, say, 20) that this argument isn't weak evidence that is easily overwhelmed. It is not evidence at all.

You can't be a toaster, because toasters don't have any awareness at all. As a philosophical ponderer, you likewise can't be an animal lower than H. Sap. If you were, you wouldn't be able to reflect on it.

Re: "If the doomsday argument is sufficient to prove that some catastrophe is preventing me from being one of a trillion spacefaring citizens of the colonized galaxy, this argument hints that something is preventing me from being one of a trillion bats or birds or insects."

The doomsday argument? It seems like a dubious premise.

The following is copypasted from some stream-of-consciousness-style writing from my own experimental wave/blog/journal, so it may be kinda messy. If this gets upvoted, I might take the time to clean it up some more. The first part of this is entirely skippable.

(skippable part starts here)

I just read this LW post. I think the whole argument is silly. But I still haven't figured out how to explain the reasons clearly enough to post a comment about it. I'll try to write about it here.

Some people have posted objections to it in the comments, but so far no... (read more)

Peter Thiel uses similar arguments about investing for the future: if it all goes bust, then your investments don't matter either way, but if it turns out okay, then you win big. No down side vs. huge up side: invest.

Another reason I wouldn't put any stock in the idea that animals aren't conscious is that the complexity cost of a model in which we are conscious and they (other animals with complex brains) are not is many bits of information. 20 bits gives a prior probability factor of 10^-6 (2^-20). I'd say that would outweigh the larger number of animals, even if you were to include the animals in the reference class.

The complexity cost of a model in which any brain is conscious is enormous. Keep in mind that a model with consciousness has to 'output' qualia, concepts, thoughts... which (as far as we can tell) correspond to complex brain patterns which are physically unique to each single brain. That is, unless the physical implementation of subjective experience is much simpler than we think it is.

If there is an infinite number of conscious minds, how do the anthropic probability arguments work out?

In a big universe, there are infinitely many beings like us.

An (insufficiently well designed) AI might use this kind of reasoning to conclude that it's not like anything to be a human. (I mentioned this as an AI risk at the bottom of this SL4 post.)

Re: "Therefore, chances are that if I could be an animal, I would be. This makes a strong anthropic argument that it is impossible for me to be an animal."

Your priors say that you are a human. It is evidence that is hard to ignore, no matter how unlikely it may seem. Concrete evidence that you are part of a minority trumps the idea that being part of a minority is statistically unlikely.

Since this is true regardless of whether or not it "feels like something" to be a bat, the mere evidence of your existence as a human doesn't allow you to draw conclusions about Nagel's bat speculations.

This argument would have to apply to people who were born completely blind, or completely deaf. Just imagine that all humans are echolocation-deaf/blind.

If you randomly selected from the set of all sentient beings throughout time and space, the odds are vanishingly low that you would get the Little Prince as well.

Suppose that he ponders his situation, and concludes that if there were places in the universe where many, many humans can coexist, then it would be unlikely that he would find himself living alone on an asteroid.

If we accept for the sake of an argument that he exists, then someone must be the Little Prince, and be doomed to make incorrect inferences about the representativeness of their situatio... (read more)

The logical conclusion of that version of the anthropic principle is that the universe contains infinitely many copies of us.

Yvain, you are an animal.

Can we dismiss all anthropic reasoning by saying that probability is meaningless for singular events? That is, the only way to obtain probability is from statistics, and I cannot run repeated experiments of when, where, and as what I exist.

That's entirely contrary to the Bayesian program that this site broadly endorses: throwing out the subjective probability baby with the anthropic bath water, as it were.
What, really? Wait, what!? Uh.

1. Could you please answer my question directly in the form of "yes/no, because"?
2. Do you mean by subjective probability the fact(?) that probability is about the map and not the territory?
3. If yes, what does it have to do with anthropics?
4. If yes, what! Contrary?? I learned about it here!
5. If no, I'm completely confused.

Also, dear reader, vote parent up or down to tell me whether he's correct about you.
No, probability is not "meaningless for singular events". We can meaningfully discuss, in Bayesian terms, the probability of drawing a red ball from a jar, even if that jar will be destroyed after the single draw. The probabilities are assessments about our state of knowledge. Therefore no, we cannot dismiss all anthropic reasoning for the reasons you suggested. If you got "probability is meaningless for singular events" from what you learned here, either you are confused, or I am. (Possibly both.)
No, because it isn't meaningless. No, you can get it from mathematics. Even basic arithmetic. Infinite series of events, on the other hand, those are hard to come by. I dismiss many examples of (bad) anthropic reasoning because they assume that the probability of their subjective experience is what you get if you draw a random head out of a jar of all things that meet some criteria of self-awareness. Kind of. Read Probability is subjectively objective. The frequentist dogma was the 'contrary' part, not the 'maps/territory' stuff. Probability doesn't come from statistics and definitely applies to single events.
Statistics is, of course, one source of knowledge we can usefully apply in calculating probabilities.
It seems to me that the disagreement here is because you're looking at different parts of the problem. It might well be said that you can't have a well-calibrated prior for an event that never happened before, if that entails that you actually don't know anything about it (and that might be what you're thinking of). On the other hand, you should be able to assign a probability for any event, even if the number mostly represents your ignorance.

Instead of showing that non-human animals are unconscious, anthropic reasoning may show that such animals are conscious if we are not ourselves soon doomed to extinction. Expanding the class of observers to include such animals makes it less surprising that we find ourselves living at this comparatively early stage of human evolution, since "we" refers to conscious rather than to merely human beings.

This argument assumes that most non-human animals will soon go extinct. But this assumption makes sense under many of the possible scenarios involving human survival.