The Presumptuous Philosopher's Presumptuous Friend


1

/me is confused by this picture

0

"Well played, clerks... well played." slow clap ~Leonardo Leonardo

0

Whose face is the smug one?

1

http://images.google.com/images?q=smug
Second result.

0[anonymous]

Or, in case it ever stops being the second result (which, actually, it has): http://rndm.files.wordpress.com/2006/11/smug404.jpg

I don't think this requires anthropic reasoning.

Here is a variation on the story:

...One day, you and the presumptuous philosopher are walking along, arguing about the size of the universe, when suddenly Omega jumps out from behind a bush and knocks you both out with a crowbar. While you're unconscious, she builds a hotel with 1,000,001 rooms. Then she makes a million copies of both of you, sticks them all in rooms, and destroys the originals.

You wake up in a hotel room, in bed with the presumptuous philosopher, with a note on the table from Omega, explaining what she's done...

2

I thought of this, but then, in the other direction, is the problem non-isomorphic to the original presumptuous philosopher problem? If so, why?
Is it because I used hotels instead of universes? Is it because the existence of both hotels has probability 100% instead of probability 50%? Is it some other thing?

0

The most obvious difference is that the original problem involved the smaller or the larger set of people whereas this one uses the smaller and the larger.

0

Ah, so the difference isn't that I used hotels instead of universes, it's that I used hotels instead of POSSIBLE hotels. In other words, your likelihood of being in a hotel depends on the number of "you"s in the hotel, but your likelihood of being in a possible hotel does not, is that what you're saying?
Unless the number of "you"s is zero. Then it clearly does depend on the number. Isn't this just packing and unpacking?

0

You're reading a little more into what I said than was actually there. I was just remarking on the change of dependence between the parts of the problem, without having thought through what the consequences would be.
Now that I have thought it through, I agree with the presumptuous philosopher in this case. However I don't agree with him about the size of the universe. The difference being that in the hotel case we want a subjective probability, whereas in the universe case we want an objective one. Subjectively, there's a very high probability of finding yourself in a big universe/hotel. But subjective probabilities are over subjective universes, and there are very very many subjective large universes for the one objective large universe, so a very high subjective probability of finding yourself in a large universe doesn't imply a large objective probability of being found in one.

0

I don't understand what you mean by subjective and objective probabilities. Would you still agree with the philosopher in my problem if Omega flipped a coin (or looked at binary digit 5000 of pi) and then built the small hotel OR the big hotel?

0

I don't know what I meant either. I remember it making perfect sense at the time, but that was after 35 hours without sleep, so.....
The answer to the second part is no, I would expect a 50:50 chance in that case.
In case you were thinking of this as a counterexample, I also expect a 50:50 chance in all the cases there from B onwards. The claim that the probabilities are unchanged by the coin toss is wrong, since the coin toss changes the number of participants, and we already accepted that the number of participants was a factor in the probability when we assigned the 99% probability in the first place.

5

So, if Omega picks a number from 1 to 3, and depending on the result makes:
A. a hotel with a million rooms
B. a hotel with one room
C. a pile of flaming tires
you'd say that a person has a 50% chance of finding themselves in situation A or B, but a 0% chance of being in C?
Why does the number of people only matter when the number of people is zero? Doesn't that strike you as suspicious?

0

When we speak of a subjective probability in a person-multiplying experiment such as this, we (or at least, I) mean "The outcome ratio experienced by a person who was randomly chosen from the resulting population of the experiment, then was used as the seed for an identical experiment, then was randomly chosen from the resulting population, then was used as the seed.... and so forth, ad infinitum".
I'm not confident that we can speak of having probabilities in problems which can't in theory be cast in this form.
In other words, the probability is along a path. When you look at the problem this way, it throws some light on why there are two different arguable values for the probability. If you look back along the path, ("what ratio will our person have experienced") the answer in your experiment is 1000000:1. If you look forward along the path, ("what ratio will our person experience") the answer is 1:1 (in the flaming-tires case there's no path, so there's no probability).
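The two ways of reading the path can be contrasted in a small simulation. This is only a sketch: the world labels and room counts follow the three-outcome example above, and treating each question as independent uniform sampling is an assumption.

```python
import random

ROOMS = {"A": 1_000_000, "B": 1, "C": 0}  # C is the pile of flaming tires

def path_sample(n=90_000):
    # Forward along a path: each step Omega picks a world uniformly;
    # a step with zero observers contributes no experience at all.
    counts = {"A": 0, "B": 0}
    for _ in range(n):
        world = random.choice(list(ROOMS))
        if ROOMS[world] > 0:
            counts[world] += 1
    return counts

def person_sample(n=90_000):
    # Backward along a path: pick a random observer from the whole
    # population the experiment produced, weighting by head-count.
    worlds = list(ROOMS)
    weights = [ROOMS[w] for w in worlds]
    counts = {"A": 0, "B": 0}
    for _ in range(n):
        counts[random.choices(worlds, weights=weights)[0]] += 1
    return counts
```

Run forward, A and B come up in roughly equal numbers (and C never does); sampled by person, nearly every draw lands in A. That is exactly the 1:1 versus 1000000:1 split described above.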

0

But again I must ask, on the going-forward basis, why is the number of people in each world irrelevant? I grant you that the WORLD splits into even thirds, but the people in it don't, they split 1000000 / 1 / 0. Where are you getting 1 / 1 / 0?

0

Because if you agree that the correct way to measure the probability is as the occurrence ratio along the path, the degree of splitting is only significant to the extent that it affects the occurrence ratio, which in this case it doesn't. The coin toss chooses equiprobably which hotel comes next, then it's on to the next coin toss to equiprobably choose which hotel comes next, and so forth. So each path has on average equal numbers of each hotel, going forwards.

0

But you're not a hotel, you're an observer. Why does the number of hotels matter but not the number of observers? If the tire fire is replaced with an empty hotel, you still can't end up in it.
It seems like your function for ending up in a future, based on the number of observers in that future, goes as follows:
If there's zero, the prior likelihood gets multiplied by zero.
If there's one, the prior likelihood gets multiplied by one.
If there's more than one, the prior likelihood still only gets multiplied by one.
This function seems more complicated than just multiplying the prior probability by the number of observers, which is what I do. My reasoning is, even on a going forward basis, if there's a line connecting me to a world with one future self, and no line connecting me to a world without a future self, there must be 14 lines connecting me to a future with 14 future selves.
Is there some reason to prefer your going-forward interpretation over mine, despite the fact that mine is simpler and agrees with the going-backwards perspective?
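The two weighting rules being argued over can be written out directly. A sketch only: the world names and the uniform 1/3 prior are assumptions taken from the three-outcome example above.

```python
def normalize(ws):
    total = sum(ws.values())
    return {k: v / total for k, v in ws.items()}

observers = {"big": 1_000_000, "small": 1, "tires": 0}
prior = {w: 1 / 3 for w in observers}

# The step-function rule: multiply the prior by 0 if there are no
# observers, and by 1 otherwise.
step_rule = normalize({w: prior[w] * (0 if observers[w] == 0 else 1)
                       for w in observers})

# The proportional rule: multiply the prior by the number of observers.
count_rule = normalize({w: prior[w] * observers[w] for w in observers})
```

The step rule gives 50/50/0 across the three worlds; the proportional rule gives roughly 0.999999 for the big hotel, about one in a million for the small one, and zero for the tires.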

One difference between this and universes is that you can't be in two hotels, but you might be able to exist in two different models of the universe.

You run out of the room to find yourself in a huge, ten thousand story atrium, filled with throngs of yourselves and smug looking presumptuous philosophers.

One of the other copies just got ten bucks; you lost nothing. Nice work bluffing your presumptuous friend and pumping his ego for (a chance at) cash. I just hope you think things through a bit more thoroughly if you have to lay cash on the line. Or that you have good reason to value the outcome of the one copy equally with that of the million in the other hotel.

This is a trivial problem that need n...

3

I wouldn't want to endure a million smug "told you so" smiles for $10. Think dust specks.

2

And miss watching 1,000,000 presumptuous philosophers flummoxed when the only response they get is a look of condescending superiority? I don't think so!

I wonder... could we please use Omega less often unless absolutely required? (and if absolutely required it strongly suggests something is wrong with the story anyway)

2

Not a chance.

2

I used Omega because it makes things tidier. I think it's important for a thought experiment to be tidy, but not very important for it to be realistic.
Also it's funny.

0

My problem is with experiments like Newcomb, in which Omega is used to break causality and which make absolutely no sense; experiments like this one, which are really equivalent in every way to "being moved to a random room", look too similar to them.

2

It doesn't break causality. Newcomb's problem (especially if you move the victim to a deterministic substrate) can very well be set up in the real world. It just can't be currently done because of limitations of technology.

1

Well, what do you mean by "setting it up in the real world"? There are certainly versions that can be done on computer (and I'm not sure if you were counting these, so don't take this as a criticism).
- Write an algorithm A1 for picking whether to one-box or two-box on the problem.
- Write an algorithm A2 for predicting whether a given algorithm will one-box or two-box, and then fill the box as per Omega.
- Run a program in which A2 acts on A1, and then A1 runs, and find A1's payoff.
Eliezer_Yudkowsky even claimed that this implementation of Newcomb's problem makes it even clearer why you should use Timeless Decision Theory.
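Those three steps can be sketched in a few lines. This is a minimal toy model under stated assumptions: the agent names and payoff amounts are illustrative, and A2 "predicts" simply by running the agent's algorithm in simulation.

```python
def one_boxer():
    return "one"

def two_boxer():
    return "two"

def predict_and_fill(agent):
    # A2: predict the agent's choice by running its algorithm,
    # then fill the opaque box as Omega would.
    prediction = agent()
    return 1_000_000 if prediction == "one" else 0

def play(agent):
    # A2 acts on A1 first, then A1 actually runs; no causality is broken.
    opaque = predict_and_fill(agent)
    choice = agent()
    # The transparent box always holds $1,000.
    return opaque if choice == "one" else opaque + 1_000

# play(one_boxer) -> 1_000_000; play(two_boxer) -> 1_000
```

Prediction happens strictly before the choice, yet the one-boxing algorithm walks away with the million, which is the point of the computational framing.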

1

Omega doesn't break causality in Newcomb. It is merely a chain of causality which is entirely predictable.

0

Yes it does. It makes a decision in the past that depends on your decision in the future, and your decision in the future can assume Omega has already decided in the past. That's a causality loop.
Newcomb is a completely bogus problem.

2

Is the taw-on-Newcomb downvoting happening because he's speaking against what's considered settled fact?

1

It's only a loop in imaginary Platonia. In the real world, laws of physics don't notice that there's a "loop". One way to see the problem is as a situation that demonstrates failure to adequately account for the real world with the semantics usually employed to think about it.

2

Too opaque.

1

Alas, yes. I'm working on that.

1

If it's a loop in Platonia, then all causation happens in Platonia. If any causation can be said to happen in the real world, then real causation is happening backwards in time in the Newcomb scenario.
But I, for one, have no problem with that. All causal processes observed so far have run in the same temporal direction. But there's no reason to rule out a priori the possibility of exceptions.
ETA: Nor to rule out loops.

1

I don't see why Newcomb's paradox breaks causality - it seems more accurate to say that both events are caused by an earlier cause: your predisposition to choose a particular way. Both Omega's prediction and your action are caused by this predisposition, meaning Omega's prediction is merely correlated with, not a cause of, your choice.

0

It's commonplace for an event A to cause an event B, with both sharing a third antecedent cause C. (The bullet's firing causes the prisoner to die, but the finger's pulling of the trigger causes both.) Newcomb's scenario has the added wrinkle that event B also causes event A. Nonetheless, both still have the antecedent cause C that you describe.
All of this only makes sense under the right analysis of causation. In this case, the right analysis is a manipulationist one, such as that given by Judea Pearl.

2

I don't see how. Omega doesn't make the prediction because you made the action - he makes it because he can predict that a person of a particular mental configuration at time T will make decision A at time T+1. If I were to play the part of Omega, I couldn't achieve perfect prediction, but might be able to achieve, say, 90% by studying what people say they will do on blogs about Newcomb's paradox, and observing what such people actually do (so long as my decision criteria weren't known to the person I was testing).
Am I violating causality by doing this? Clearly not - my prediction is caused by the blog post and my observations, not by the action. The same thing that causes you to say you'd decide one way is also what causes you to act one way. As I get better and better, nothing changes, nor do I see why something would if I am able to simulate you perfectly, achieving 100% accuracy (some degree of determinism is assumed there, but then it's already in the original thought experiment if we assume literally 100% accuracy).
Assuming I'm understanding it correctly, the same would be true for a manipulationist definition. If we can manipulate your mental state, we'd change both the prediction (assuming Omega factors in this manipulation) and the decision, thus your mental state is a cause of both. However if we could manipulate your action without changing the state that causes it in a way that would affect Omega's prediction, our actions would not change the prediction. In practice, this may be impossible (it requires Omega not to factor in our manipulation, which is contradicted by assuming he is a perfect predictor), but in principle it seems valid.

1

He makes a prediction based on the nearby state of the universe that you model with an accuracy that approaches 1. If your mathematician can't handle that then find a better mathematician.
I shall continue to find Omega useful.
ETA: The part of the Newcomb problem that is actually hard to explain is that I am somehow confident that Omega is being truthful.

-9

0

For a bunch of people with what seems to be a Humean suspicion of metaphysics "causation" sure comes up a lot. If you think that causation is just a psychological projection onto constantly conjoined events then it isn't clear what the paradox here is.

1

There are non-metaphysical treatments of causality. I'm not sure if any particular interpretations are favoured around here, but they build on Bayes and they work. (I have yet to read it, but I've heard good things about Judea Pearl's Causality.)
It's a "psychological projection" inasmuch as probability itself is, but as with probability, that doesn't mean it's never a useful concept, as long as it's understood in the correct light.

0

Sure. But,
1. The way I see causal language being used doesn't suggest to me a demystified understanding of causality.
2. Maybe I'm being dense but it seems to me a non-metaphysical account of causality won't a priori exclude backwards causation and causality loops. In other words, even if we allow some kind of deflated causality that won't mean Newcomb's problem "makes no sense".

1

Oh, I wasn't agreeing with taw on that. Just responding to your association of causation with metaphysics. I don't see Omega breaking any causality, whether in a metaphysical or statistical sense.
As for excluding backwards causation and causality loops -- I'm not sure why we should necessarily want to exclude them, if a given system allows them and they're useful for explaining or predicting anything, even if they go against our more intuitive notions of causality. I was just recently thinking that backwards causality might be a good way to think about Newcomb's problem. (That idea might go down in flames, but I think the point stands that backward/cyclical causality should be allowed if they're found to be useful.)

0

I think we agree down the line.

0

I meant causation in a purely physical sense. Disregarding the complexities of quantum mechanics, Omega can't do that, as you'd get time loops.

2

I don't know what that means. Our most basic physics makes no mention of causation or even objects. There are just quantum fields with future states that can be predicted if you have knowledge of earlier states and the right equations. And no matter what "causation in a purely physical sense" means I have no idea why it prohibits an event at time t1 (Omega's predictions) from necessarily coinciding with an event at t2 (your decision).

0

You can do both this experiment and Newcomb without Omega, or at least you can start with a similar but messier setup and bridge it to the tidy Omega version using reasonable steps. But the process is very tedious.

0

Past discussions indicate quite conclusively that Newcomb is completely unmathematizable as a paradox. Every mathematization becomes trivial one way or the other, and resolves the causality loop caused by Omega.
If problems with Omega can be pathological like that, it's a good argument for avoiding Omega unless absolutely necessary (in which case you can rethink whether the problem is even well stated).

0

I would be shocked if it didn't. It's a trivial problem.

2

Trivial how? Depending on the mathematization, it collapses to either one-boxing or two-boxing, according to how we break the causality loop.
If you decide first, trivially one-box. If Omega decides first, trivially two-box. If you have a causality loop, your problem doesn't make any sense.

1

No it doesn't. It suggests that care is being taken to remove irrelevant details and prevent irritating technicalities.

0

Irritating technicalities like causality?

While you're unconscious, she builds two hotels, one with a million rooms, and one with just one room. Then she makes a million copies of both of you, sticks them all in rooms, and destroys the originals.

*I feel... thin. Sort of stretched, like... butter scraped over too much bread.*

Why do we spend so much time thinking about how to reason on problems in which

a) you know what's going on while you're not conscious, and

b) you take at face value information fed to you by a hostile entity?

3

Because it's much simpler that way, and you need to be able to handle trivial cases before you can deal with more complicated ones.
Besides, what is hostile about making a million copies of you? I'd take getting knocked out for that, as long as the copies don't all have brain damage for it.

1

Okay, fair point. It is indeed important to start from simple cases. I guess I didn't say what I really meant there.
My real concern is this: posters are trying to develop the limits of e.g. anthropic reasoning. Anthropic reasoning takes the form of, "I observe that I exist. Therefore, it follows that..."
But then to attack that problem, they posit scenarios of a completely different form: "I have been fed solid evidence from elsewhere that {x, y, and z} and then placed in {specific scenario}. Then I observe E. What should I infer?"
That does not generalize to anthropic reasoning: it's just reasoning from arbitrarily selected premises.

0

I figured that wasn't your real objection, but I guessed wrong about what it was.
I figured you were going for something like "you need to include sufficient information so that we know we're not positing an impossible world", which is a fair point, since, for example, at first glance Newcomb's problem appears to violate causality.
Are you suggesting that we deal with more general problems where we know even less, or are you just saying that these problems aren't even related to anthropic reasoning?

0

This. This is what I'm saying.
These posts I'm referring to start out with "Assume you're in a situation where [...]. And you know that that's the situation. Then what can you infer from evidence E?"
But when you do that, there's nothing anthropic about it -- it's just an ordinary logic puzzle, unrelated to reasoning about what you can know from your existence in this universe.

0

Do you consider the original presumptuous philosopher problem to involve anthropic reasoning? What is it that's required to be undefined for reasoning to be anthropic?

0

Anthropic reasoning is any reasoning based on the fact that you (believe you) exist, and any condition necessary for you to reach that state, including suppositions about what such conditions include. It can be supplemented by observations of the world as it is.
This problem, most of the problems that purport to use anthropic reasoning, and the original presumptuous philosopher problem are all just reasoning from arbitrary givens, which doesn't even generalize to anthropic reasoning. Each time, someone is able to point out a problem isomorphic to the one given but lacking a characteristically anthropic component to the reasoning.
Anthropic reasoning is simply not the same as "hey, what if someone did this to you, where these things had this frequency, what would you conclude upon seeing this?" That's just a normal inference problem.
Just to show that I'm being reasonable, here is what I would consider a real case of anthropic reasoning.
"I notice that I exist. The noticer seems to be the same as that which exists. So, whatever the computational process is for generating my observations must either permit self-reflection, or the thing I notice existing isn't really the same thing having these thoughts."

0

To me, that just indicates that anthropic reasoning is valid, or at least that what we're calling anthropic reasoning is valid.

1

Well, that just means that you're doing ordinary reasoning, of which anthropic reasoning is a subset. It does not follow that this (and topics like it) is anthropic reasoning. And no, you don't get to define words however you like: the term "anthropic reasoning" is supposed to carve out a natural category in conceptspace, yet when you use it to mean "any reasoning from arbitrary premises", you're making the term less helpful.

2

If it doesn't carve out such a category, maybe that's because it's a malformed concept, not because we're using it wrong. Off the top of my head, I see no reason why the existence of the observer should be a special data point that needs to be fed into the data processing system in a special way.

1

Strangely enough, that's actually pretty close to what I believe -- see my comment here.
So, despite all this arguing, we seem to have almost the same view!
Still, given that it's a malformed concept, you still need to remain as faithful as possible to what it purports to mean, or at least note that your example can be converted into a clearly non-anthropic one without loss of generality.

0

Fair enough!

0

Which is interesting enough, so long as I only have to write trivial replies and not waste time writing up the trivial scenarios! (You make a good point.)

This entire theoretical framework is based on the assumption that "she makes a million copies of both of you, sticks them all in rooms, and destroys the originals" is meaningfully possible, which it may not be, and that it would result in a "you" that is somehow continuous, which is not clear, and may not be experimentally verifiable.

And of course, if you ever encountered an Omega hypothetical in real life, you'd decide that "He's lying" has P~=1. Perhaps that's why Omega keeps getting used; all Omega hypotheticals have that property in common, I believe.

One day, you and the presumptuous philosopher are walking along, arguing about the size of the universe, when suddenly Omega jumps out from behind a bush and knocks you both out with a crowbar. While you're unconscious, she builds two hotels, one with a million rooms, and one with just one room. Then she makes a million copies of both of you, sticks them all in rooms, and destroys the originals.

You wake up in a hotel room, in bed with the presumptuous philosopher, with a note on the table from Omega, explaining what she's done.

"Which hotel are we in, I wonder?" you ask.

"The big one, obviously" says the presumptuous philosopher. "Because of anthropic reasoning and all that. Million to one odds."

"Rubbish!" you scream. "Rubbish and poppycock! We're just as likely to be in any hotel Omega builds, regardless of the number of observers in that hotel."

"Unless there are no observers, I assume you mean" says the presumptuous philosopher.

"Right, that's a special case where the number of observers in the hotel matters. But except for that it's totally irrelevant!"

"In that case," says the presumptuous philosopher, "I'll make a deal with you. We'll go outside and check, and if we're at the small hotel I'll give you ten bucks. If we're at the big hotel, I'll just smile smugly."

"Hah!" you say. "You just lost an expected five bucks, sucker!"

You run out of the room to find yourself in a huge, ten thousand story atrium, filled with throngs of yourselves and smug looking presumptuous philosophers.
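For what it's worth, the bet at the end turns on a single expected-value computation, and the two characters differ only in which probability they feed into it. A sketch, using the figures from the story:

```python
def expected_payout(p_small, stake=10):
    # Your expected winnings: the philosopher pays `stake` dollars
    # if you turn out to be at the small hotel, nothing otherwise.
    return p_small * stake

# Equal weight per hotel (the narrator's view): P(small) = 1/2.
ev_equal = expected_payout(1 / 2)              # $5.00

# Weight by number of copies (the philosopher's view):
# P(small) = 1 / 1,000,001.
ev_anthropic = expected_payout(1 / 1_000_001)  # ~$0.00001
```

The narrator's "you just lost an expected five bucks" is correct under equal weighting per hotel; under copy-weighting, the philosopher risks an expected thousandth of a cent for the pleasure of a million smug smiles.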