The most common formalizations of Occam's Razor, Solomonoff induction and Minimum Description Length, measure the program size of a computation used in a hypothesis, but don't measure the running time or space requirements of the computation.  What if this makes a mind vulnerable to finite forms of Pascal's Wager?  A compactly specified wager can grow in size much faster than it grows in complexity.  The utility of a Turing machine can grow much faster than its prior probability shrinks.

Consider Knuth's up-arrow notation:

  • 3^3 = 3*3*3 = 27
  • 3^^3 = (3^(3^3)) = 3^27 = 7625597484987
  • 3^^^3 = (3^^(3^^3)) = 3^^7625597484987 = 3^(3^(3^(... 7625597484987 times ...)))

In other words:  3^^^3 describes an exponential tower of threes 7625597484987 layers tall.  Since this number can be computed by a simple Turing machine, it contains very little information and requires a very short message to describe.  This, even though writing out 3^^^3 in base 10 would require enormously more writing material than there are atoms in the known universe (a paltry 10^80).
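
In code, the whole up-arrow hierarchy is a few lines of recursion, which is the sense in which 3^^^3 carries very little information. Here is a minimal Python sketch (the function name is my own invention, and only the first two calls below are feasibly computable):

    def up_arrow(a, n, b):
        """a ^...^ b with n Knuth up-arrows (illustrative sketch only)."""
        if n == 1:
            return a ** b    # one arrow is ordinary exponentiation
        if b == 0:
            return 1         # base case of the hyperoperation recursion
        return up_arrow(a, n - 1, up_arrow(a, n, b - 1))

    print(up_arrow(3, 1, 3))    # 3^3  = 27
    print(up_arrow(3, 2, 3))    # 3^^3 = 7625597484987
    # up_arrow(3, 3, 3) is 3^^^3: the recursion is correct, but no physical
    # computer could ever finish it or store the result.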

Now suppose someone comes to me and says, "Give me five dollars, or I'll use my magic powers from outside the Matrix to run a Turing machine that simulates and kills 3^^^^3 people."

Call this Pascal's Mugging.

"Magic powers from outside the Matrix" are easier said than done - we have to suppose that our world is a computing simulation run from within an environment that can afford simulation of arbitrarily large finite Turing machines, and that the would-be wizard has been spliced into our own Turing tape and is in continuing communication with an outside operator, etc.

Thus the Kolmogorov complexity of "magic powers from outside the Matrix" is larger than the mere English words would indicate.  Therefore the Solomonoff-inducted probability, two to the negative Kolmogorov complexity, is exponentially tinier than one might naively think.

But, small as this probability is, it isn't anywhere near as small as 3^^^^3 is large.  If you take a decimal point, followed by a number of zeros equal to the length of the Bible, followed by a 1, and multiply this unimaginably tiny fraction by 3^^^^3, the result is pretty much 3^^^^3.
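
The arithmetic can be checked in log space. Even the logarithm of 3^^^^3 is too large to represent, so the sketch below substitutes the far smaller 3^^4 and assumes roughly four million characters for the length of the Bible (the exact figure doesn't matter); the point only gets stronger for 3^^^^3:

    from math import log10

    log10_utility = (3 ** 27) * log10(3)    # log10(3^^4) = log10(3^(3^27)), about 3.6e12
    log10_probability = -4e6                # a decimal point, a Bible's length of zeros, then a 1

    print(log10_utility + log10_probability)
    # A penalty of four million orders of magnitude shifts the exponent by about
    # one part in a million: the product is still essentially 3^^4.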

Most people, I think, envision an "infinite" God that is nowhere near as large as 3^^^^3.  "Infinity" is reassuringly featureless and blank.  "Eternal life in Heaven" is nowhere near as intimidating as the thought of spending 3^^^^3 years on one of those fluffy clouds.  The notion that the diversity of life on Earth springs from God's infinite creativity, sounds more plausible than the notion that life on Earth was created by a superintelligence 3^^^^3 bits large.  Similarly for envisioning an "infinite" God interested in whether women wear men's clothing, versus a superintelligence of 3^^^^3 bits, etc.

The original version of Pascal's Wager is easily dealt with by the gigantic multiplicity of possible gods, an Allah for every Christ and a Zeus for every Allah, including the "Professor God" who places only atheists in Heaven.   And since all the expected utilities here are allegedly "infinite", it's easy enough to argue that they cancel out.  Infinities, being featureless and blank, are all the same size.

But suppose I built an AI which worked by some bounded analogue of Solomonoff induction - an AI sufficiently Bayesian to insist on calculating complexities and assessing probabilities, rather than just waving them off as "large" or "small".

If the probabilities of various scenarios considered did not exactly cancel out, the AI's action in the case of Pascal's Mugging would be overwhelmingly dominated by whatever tiny differentials existed in the various tiny probabilities under which 3^^^^3 units of expected utility were actually at stake.

You or I would probably wave off the whole matter with a laugh, planning according to the dominant mainline probability:  Pascal's Mugger is just a philosopher out for a fast buck.

But a silicon chip does not look over the code fed to it, assess it for reasonableness, and correct it if not.  An AI is not given its code like a human servant given instructions.  An AI is its code.  What if a philosopher tries Pascal's Mugging on the AI for a joke, and the tiny probabilities of 3^^^^3 lives being at stake, override everything else in the AI's calculations?   What is the mere Earth at stake, compared to a tiny probability of 3^^^^3 lives?

How do I know to be worried by this line of reasoning?  How do I know to rationalize reasons a Bayesian shouldn't work that way?  A mind that worked strictly by Solomonoff induction would not know to rationalize reasons that Pascal's Mugging mattered less than Earth's existence.  It would simply go by whatever answer Solomonoff induction obtained.

It would seem, then, that I've implicitly declared my existence as a mind that does not work by the logic of Solomonoff, at least not the way I've described it.  What am I comparing Solomonoff's answer to, to determine whether Solomonoff induction got it "right" or "wrong"?

Why do I think it's unreasonable to focus my entire attention on the magic-bearing possible worlds, faced with a Pascal's Mugging?  Do I have an instinct to resist exploitation by arguments "anyone could make"?  Am I unsatisfied by any visualization in which the dominant mainline probability leads to a loss?  Do I drop sufficiently small probabilities from consideration entirely?  Would an AI that lacks these instincts be exploitable by Pascal's Mugging?

Is it me who's wrong?  Should I worry more about the possibility of some Unseen Magical Prankster of very tiny probability taking this post literally, than about the fate of the human species in the "mainline" probabilities?

It doesn't feel to me like 3^^^^3 lives are really at stake, even at very tiny probability.  I'd sooner question my grasp of "rationality" than give five dollars to a Pascal's Mugger because I thought it was "rational".

Should we penalize computations with large space and time requirements?  This is a hack that solves the problem, but is it true? Are computationally costly explanations less likely?  Should I think the universe is probably a coarse-grained simulation of my mind rather than real quantum physics, because a coarse-grained human mind is exponentially cheaper than real quantum physics?  Should I think the galaxies are tiny lights on a painted backdrop, because that Turing machine would require less space to compute?
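
One concrete version of that hack is a Levin-style scoring rule, in which a hypothesis is charged for its program length plus the logarithm of its running time (the idea behind Kt complexity and the speed prior). A rough sketch, with made-up numbers standing in for the two hypotheses:

    def levin_score(program_length_bits, log2_running_time):
        """Kt-style score: description length plus log2 of running time.
        Lower score means higher prior weight, roughly 2 ** -score."""
        return program_length_bits + log2_running_time

    # Toy, purely illustrative numbers:
    mugger_world = levin_score(program_length_bits=500, log2_running_time=1e9)
    mundane_world = levin_score(program_length_bits=5000, log2_running_time=40)
    print(mugger_world, mundane_world)
    # Plain Solomonoff induction sees only the 500 vs. 5000 bits; the running-time
    # term is what lets a simulation of 3^^^^3 people be penalized.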

Given that, in general, a Turing machine can increase in utility vastly faster than it increases in complexity, how should an Occam-abiding mind avoid being dominated by tiny probabilities of vast utilities?

If I could formalize whichever internal criterion was telling me I didn't want this to happen, I might have an answer.

I talked over a variant of this problem with Nick Hay, Peter de Blanc, and Marcello Herreshoff in summer of 2006.  I don't feel I have a satisfactory resolution as yet, so I'm throwing it open to any analytic philosophers who might happen to read Overcoming Bias.

Pascal's Mugging: Tiny Probabilities of Vast Utilities
354 comments

Why would not giving him $5 make it more likely that people would die, as opposed to less likely? The two would seem to cancel out. It's the same old "what if we are living in a simulation?" argument- it is, at least, possible that me hitting the sequence of letters "QWERTYUIOP" leads to a near-infinity of death and suffering in the "real world", due to AGI overlords with wacky programming. Yet I do not refrain from hitting those letters, because there's no entanglement which drives the probabilities in that direction as opposed to some other random direction; my actions do not alter the expected future state of the universe. You could just as easily wind up saving lives as killing people.

Because he said so, and people tend to be true to their word more often than dictated by chance.

-1Normal_Anomaly
That observation applies to humans, who also tend not to kill large numbers of people for no payoff (that is, if you've already refused the money and walked away).
1Will_Sawin
That's a symmetric effect, though.
7DanielLC
Yes, but they're more likely to kill large numbers of people conditional on you not doing what they say than conditional on you doing what they say.
9Strange7
The mugger claims to not be a 'person' in the conventional sense, but rather an entity with outside-Matrix powers. If this statement is true, then generalized observations about the reference class of 'people' cannot necessarily be considered applicable. Conversely, if it is false, then this is not a randomly-selected person, but rather someone who has started off the conversation with an outrageous profit-motivated lie, and as such cannot be trusted.

They claim to not be a human. They're still a person, in the sense of a sapient being. As a larger class, you'd expect lower correlation, but it would still be above zero.

-3Strange7
I am not convinced that, even among humans speaking to other humans, truth-telling can be assumed when there is such a blatantly obvious incentive to lie. I mean, say there actually is someone who can destroy vast but currently-unobservable populations with less effort than it would take them to earn $5 with conventional economic activity, and the ethical calculus works out such that you'd be better served to pay them $5 than let it happen. At that point, aren't they better served to exaggerate their destructive capacity by an order of magnitude or two, and ask you for $6? Or $10? Once the number the mugger quotes exceeds your ability to independently confirm, or even properly imagine, the number itself becomes irrelevant. It's either a display of incomprehensibly overwhelming force, to which you must submit utterly or be destroyed, or a bluff you should ignore.
6DanielLC
There is no blatantly obvious reason to want to torture the people only if you do give him money. So, you're saying that the problem is that, if they really were going to kill 3^^^3 people, they'd lie? Why? 3^^^3 isn't just enough to get $5. It's enough that the expected seriousness of the threat is unimaginably large. Look at it this way: If they're going to lie, there's no reason to exaggerate their destructive capacity by an order of magnitude when they can just make up a number. If they choose to make up a number, 3^^^3 is plenty high. As such, if it really is 3^^^3, they might as well just tell the truth. If there's any chance that they're not lying given that they really can kill 3^^^3 people, their threat is valid. It's one thing to be 99.9% sure they're lying, but here, a 1 - 1/sqrt(3^^^3) certainty that they're lying still gives more than enough doubt for an unimaginably large threat. You're not psychic. You don't know which it is. In this case, the risk of the former is enough to overwhelm the larger probability of the latter.
1Strange7
Not the way I do the math. Let's say you're a sociopath, that is, the only factors in your utility function are your own personal security and happiness. Two unrelated people approach you simultaneously, one carrying a homemade single-shot small-caliber pistol (a 'zip gun') and the other apparently unarmed. Both of them, separately, demand $10 in exchange for not killing you immediately. You've got a $20 bill in your wallet; the unarmed mugger, upon learning this, obligingly offers to make change. While he's thus distracted, you propose to the mugger with the zip gun that he shoot the unarmed mugger, and that the two of you then split the proceeds. The mugger with the zip gun refuses, explaining that the unarmed mugger claims to be close personal friends with a professional sniper, who is most likely observing this situation from a few hundred yards away through a telescopic sight and would retaliate against anyone who hurt her friend the mugger. The mugger with the zip gun has never actually met the sniper or directly observed her handiwork, but is sufficiently deterred by rumor alone. If you don't pay the zip-gun mugger, you'll definitely get shot at, but only once, and with good chances of a miss or nonfatal injury. If you don't pay the unarmed mugger, and the sniper is real, you will almost certainly die before you can determine her position or get behind sufficiently hard cover. If you pay them both, you will have to walk home through a bad part of town at night instead of taking the quicker-and-safer bus, which apart from the inconvenience might result in you being mugged a third time. How would you respond to that? I don't need to be psychic. I just do the math. Taking any sort of infinitesimally-unlikely threat so seriously that it dominates my decisionmaking means anyone can yank my chain just by making a few unfounded assertions involving big enough numbers, and then once word gets around, the world will no longer contain acceptable outcomes.
7DanielLC
In your example, only you die. In Pascal's mugging, it's unimaginably worse. Do you accept that, in the circumstance you gave, you are more likely to be shot by a sniper if you only pay one mugger? Not significantly more likely, but still more likely? If so, that's analogous to accepting that Pascal's mugger will be more likely to make good on his threat if you don't pay.
0Strange7
In my example, the person making the decision was specified to be a sociopath, for whom there is no conceivable worse outcome than the total loss of personal identity and agency associated with death. The two muggers are indifferent to each other's success. You could pay off the unarmed mugger to eliminate the risk of being sniped (by that particular mugger's friend, at least, if she exists; there may well be other snipers elsewhere in town with unrelated agendas, about whom you have even less information) and accept the risk of being shot with the zip gun, in order to afford the quicker, safer bus ride home. In that case you would only be paying one mugger, and still have the lowest possible sniper-related risk. The three possible expenses were meant as metaphors for existential risk mitigation (imaginary sniper), infrastructure development (bus), and military/security development (zip gun), the latter two forming the classic guns-or-butter economic dilemma. Historically speaking, societies that put too much emphasis, too many resources, toward preventing low-probability high-impact disasters, such as divine wrath, ended up succumbing to comparatively banal things like famine, or pillaging by shorter-sighted neighbors. What use is a mathematical model of utility that would steer us into those same mistakes?
3DanielLC
Is your problem that we'd have to keep the five dollars in case of another mugger? I'd hardly consider the idea of steering our life around pascal's mugging to be disagreeing with it. For what it's worth, if you look for hypothetical pascal's muggings, expected utility doesn't converge and decision theory breaks down.

Let's say you're a sociopath, that is, the only factors in your utility function are your own personal security and happiness.

Can we use the less controversial term 'economist'?

7Relenzo
I think this answer contains something important-- Not so much an answer to the problem, but a clue to the reason WHY we intuitively, as humans, know to respond in a way which seems un-mathematical. It seems like a Game Theory problem to me. Here, we're calling the opponents' bluff. If we make the decision that SEEMINGLY MAXIMIZES OUR UTILITY, according to game theory we're set up for a world of hurt in terms of indefinite situations where we can be taken advantage of. Game Theory already contains lots of situations where reasons exist to take action that seemingly does not maximize your own utility.
0RST
It is threatening people just to test you. We can assume that Its behavior is completely different from ours. So Tom's argument still works.
0MrCheeze
Yes, but the chance of magic powers from outside the matrix is low enough that what he says has an insignificant difference. ...or is an insignificant difference even possible?
2DanielLC
The chance of magic powers from outside the matrix is nothing compared to 3^^^^3. It makes no difference in whether or not it's worth while to pay him.
-2Dmytry
excellent point, sir.

Very interesting thought experiment!

One place where it might fall down is that our disutility for causing deaths is probably not linear in the number of deaths, just as our utility for money flattens out as the amount gets large. In fact, I could imagine that its value is connected to our ability to intuitively grasp the numbers involved. The disutility might flatten out really quickly so that the disutility of causing the death of 3^^^^3 people, while large, is still small enough that the small probabilities from the induction are not overwhelmed by it.
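
For concreteness, here is one toy shape such a flattening curve could take (purely illustrative, with arbitrary parameters, not a claim about anyone's actual values): disutility that grows with the order of magnitude of the death toll but saturates at a cap.

    from math import exp

    def bounded_disutility(log10_deaths, cap=1e6, scale=12.0):
        """Toy saturating curve; cap and scale are arbitrary choices."""
        return cap * (1 - exp(-log10_deaths / scale))

    print(bounded_disutility(1))                # ten deaths
    print(bounded_disutility(9))                # a billion deaths
    print(bounded_disutility(3 ** 27 * 0.477))  # roughly 3^^4 deaths
    # The last value is indistinguishable from the cap, so multiplying it by a
    # tiny probability no longer swamps every other consideration.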

3DanielLC
That just means you have to change the experiment. Suppose he just said he'll cause a certain amount of net disutility, without specifying how. This works unless you assume a maximum possible disutility.
6Ulysses
You are not entitled to assume a maximum disutility, even if you think you see a proof for it (see Confidence Levels Inside and Outside an Argument).
5themusicgod1
link for the lazy

People say the fact that there are many gods neutralizes Pascal's wager - but I don't understand that at all. It seems to be a total non sequitur. Sure, it opens the door to other wagers being valid, but that is a different issue.

Let's say I have a simple game against you where, if I choose 1 I win a lotto ticket and if I choose 0 I lose. There is also a number of other game tables around the room with people winning or not winning lotto tickets. If I want to win the lotto, what number should I pick?

Also I don't think there is a fundamental issue with havi... (read more)

4Dojan
There is one problem with having favor of several gods simultaneously: In fact, one could argue that being a true orthodox christian would lead you to the muslim, hindu, protestant and scientology (etc.) hells, while choosing anyone of them would subtract that hell but add the hell of whatever religion you left... I try to stay away for safety's sake :) [edit: spelling]

This is an instance of the general problem of attaching a probability to matrix scenarios. And you can pascal-mug yourself, without anyone showing up to assert or demand anything - just think: what if things are set up so that whether I do, or do not do, something, determines whether those 3^^^^3 people will be created and destroyed? It's just as possible as the situation in which a messenger from Outside shows up and tells you so.

The obvious way to attach probabilities to matrix scenarios is to have a unified notion of possible world capacious enough to e... (read more)

Tom and Andrew, it seems very implausible that someone saying "I will kill 3^^^^3 people unless X" is literally zero Bayesian evidence that they will kill 3^^^^3 people unless X. Though I guess it could plausibly be weak enough to take much of the force out of the problem.

Andrew, if we're in a simulation, the world containing the simulation could be able to support 3^^^^3 people. If you knew (magically) that it couldn't, you could substitute something on the order of 10^50, which is vastly less forceful but may still lead to the same problem.

Andrew and Steve, you could replace "kill 3^^^^3 people" with "create 3^^^^3 units of disutility according to your utility function". (I respectfully suggest that we all start using this form of the problem.)

Michael Vassar has suggested that we should consider any number of identical lives to have the same utility as one life. That could be a solution, as it's impossible to create 3^^^^3 distinct humans. But, this also is irrelevant to the create-3^^^^3-disutility-units form.

IIRC, Peter de Blanc told me that any consistent utility function must have an upper bound (meaning that we must discount lives like Steve sug... (read more)

create 3^^^^3 units of disutility according to your utility function

For all X:

If your utility function assigns values to outcomes that differ by a factor of X, then you are vulnerable to becoming a fanatic who banks on scenarios that only occur with probability 1/X. As simple as that.

If you think that banking on scenarios that only occur with probability 1/X is silly, then you have implicitly revealed that your utility function only assigns values in the range [1,Y], where Y<X, and where 1 is the lowest utility you assign.

6Nick_Tarleton
... or your judgments of silliness are out of line with your utility function.
7SforSingularity
When I said "Silly" I meant from an axiological point of view, i.e. you think the scenario over, and you still think that you would be doing something that made you win less. Of course in any such case, there are likely to be conflicting intuitions: one to behave as an aggregative consequentialist, and another to behave like a sane human being.
0[anonymous]
What if we required that the utility function grow no faster than the Kolmogorov complexity of the scenario? This seems like a suitable generalization of Vassar's proposal.

Mitchell, it doesn't seem to me like any sort of accurate many-worlds probability calculation would give you a probability anywhere near low enough to cancel out 3^^^^3. Would you disagree? It seems like there's something else going on in our intuitions. (Specifically, our intuitions that a good FAI would need to agree with us on this problem.)

Sorry, the first link was supposed to be to Absence of Evidence is Evidence of Absence.

Mitchell, I don't see how you can Pascal-mug yourself. Tom is right that the possibility that typing QWERTYUIOP will destroy the universe can be safely ignored; there is no evidence either way, so the probability equals the prior, and the Solomonoff prior that typing QWERTYUIOP will save the universe is, as far as we know, exactly the same. But the mugger's threat is a shred of Bayesian evidence that you have to take into account, and when you do, it massively tips the expected utility balance. Your suggested solution does seem right but utterly intractable.

DSimon

I don't think the QWERTYUIOP thing is literally zero Bayesian evidence either. Suppose the thought of that particular possibility was manually inserted into your mind by the simulation operator.

Tom and Andrew, it seems very implausible that someone saying "I will kill 3^^^^3 people unless X" is literally zero Bayesian evidence that they will kill 3^^^^3 people unless X. Though I guess it could plausibly be weak enough to take much of the force out of the problem.

Nothing could possibly be that weak.

Tom is right that the possibility that typing QWERTYUIOP will destroy the universe can be safely ignored; there is no evidence either way, so the probability equals the prior, and the Solomonoff prior that typing QWERTYUIOP will save the universe is, as far as we know, exactly the same.

Exactly the same? These are different scenarios. What happens if an AI actually calculates the prior probabilities, using a Solomonoff technique, without any a priori desire that things should exactly cancel out?

5Strange7
Why would an AI consider those two scenarios and no others? Seems more likely it would have to chew over every equivalently-complex hypothesis before coming to any actionable conclusion... at which point it stops being a worrisome, potentially world-destroying AI and becomes a brick, with a progress bar that won't visibly advance until after the last proton has decayed.
1Arandur
... which doesn't solve the problem, but at least that AI won't be giving anyone... five dollars? Your point is valid, but it doesn't expand on anything.
-2Strange7
More generally I mean that an AI capable of succumbing to this particular problem wouldn't be able to function in the real world well enough to cause damage.
-2Arandur
I'm not sure that was ever a question. :3
6ialdabaoth
Well, let's think about this mathematically. In other articles, you have discussed the notion that, in an infinite universe, there exist with probability 1 identical copies of me some 10^(10^29) meters away. You then (correctly, I think) demonstrate the absurdity of declaring that one of them in particular is 'really you' and another is a 'mere copy'. When you say "3^^^^3 people", you are presenting me two separate concepts: 1. Individual entities which are each "people". 2. A set {S} of these entities, of which there are 3^^^^3 members. Now, at this point, I have to ask myself: "what is the probability that {S} exists?" By which I mean, what is the probability that there are 3^^^^3 unique configurations, each of which qualifies as a self-aware, experiencing entity with moral weight, without reducing to an "effective simulation" of another entity already counted in {S}? Vs. what is the probability that the total cardinality of unique configurations that each qualify as self-aware, experiencing entities with moral weight, is < 3^^^^3? Because if we're going to juggle Bayesian probabilities here, at some point that has to get stuck in the pipe and smoked, too.

OK, let's try this one more time:

  3. Even if you don't accept 1 and 2 above, there's no reason to expect that the person is telling the truth. He might kill the people even if you give him the $5, or conversely he might not kill them even if you don't give him the $5.

To put it another way, conditional on this nonexistent person having these nonexistent powers, why should you be so sure that he's telling the truth? Perhaps you'll only get what you want by not giving him the $5. To put it mathematically, you're computing pX, where p is the probability and ... (read more)

I have to go with Tom McCabe on this one; this is just a restatement of the core problem of epistemology. It's not unique to AI, either.

3. Even if you don't accept 1 and 2 above, there's no reason to expect that the person is telling the truth. He might kill the people even if you give him the $5, or conversely he might not kill them even if you don't give him the $5.

But if a Bayesian AI actually calculates these probabilities by assessing their Kolmogorov complexity - or any other technique you like, for that matter - without desiring that they come out exactly equal, can you rely on them coming out exactly equal? If not, an expected utility differential of 2 to the negative googolplex times 3^^^^3 still equals 3^^^^3, so whatever tiny probability differences exist will dominate all calculations based on what we think of as the "real world" (the mainline of probability with no wizards).

if you have the imagination to imagine X to be super-huge, you should be able to have the imagination to imagine p to be super-small

But we can't just set the probability to anything we like. We have to calculate it, and Kolmogorov complexity, the standard accepted method, will not be anywhere near that super-small.

Addendum: In computational terms, you can't avoid using a 'hack'. Maybe not the hack you described, but something, somewhere has to be hard-coded. How else would you avoid solipsism?

This case seems to suggest the existence of new interesting rationality constraints, which would go into choosing rational probabilities and utilities. It might be worth working out what constraints one would have to impose to make an agent immune to such a mugging.

Eliezer,

OK, one more try. First, you're picking 3^^^^3 out of the air, so I don't see why you can't pick 1/3^^^^3 out of the air also. You're saying that your priors have to come from some rigorous procedure but your utility comes from simply transcribing what some dude says to you. Second, even if for some reason you really want to work with the utility of 3^^^^3, there's no good reason for you not to consider the possibility that it's really -3^^^^3, and so you should be doing the opposite. The issue is not that two huge numbers will exactly cancel o... (read more)

1DanielLC
You're not picking 3^^^^3 out of the air. The other guy told you that number. You can't pick probabilities out of the air. If you could, why not just set the probability that you're God to one? With what probability? Would you give money to a mugger if their gun probably isn't loaded? Is this example fundamentally different?
1Kenny
I think you're on to something, but I think the key is that someone claiming being able to influence 3^^^^3 of anything, let alone 3^^^^3 "people", is such an extraordinary claim that it would require extraordinary evidence of a magnitude similar to 3^^^^3, i.e. I bet we're vastly underestimating the complexity of what our mugger is claiming.

pdf23ds, under certain straightforward physical assumptions, 3^^^^3 people wouldn't even fit in anyone's future light-cone, in which case the probability is literally zero. So the assumption that our apparent physics is the physics of the real world too, really could serve to decide this question. The only problem is that that assumption itself is not very reasonable.

Lacking for the moment a rational way to delimit the range of possible worlds, one can utilize what I'll call a Chalmers prior, which simply specifies directly how much time you will spend thi... (read more)

1pnrjulius
I'm not aware of any (and I'm not sure it really solves this problem in particular), but there should be, because processing time is absolutely critical to bounded rationality.
bw

Well... I think we act differently from the AI because we not only know Pascal's Mugging, we know that it is known. I don't see why an AI could not know the knowledge of it, though, but you do not seem to consider that, which might simply show that it is not relevant, as you, er, seem to have given this some thought...

bw

But maybe an AI cannot in fact know the knowledge of something.

3Alsadius
What possible reason would you have to assume that? If we're talking about an actually intelligent AI, it'd presumably be as smart as any other intelligent being (like, say, a human). If we're talking about a dumb program, it can take into account anything that we want it to take into account.

Konrad: In computational terms, you can't avoid using a 'hack'. Maybe not the hack you described, but something, somewhere has to be hard-coded.

Well, yes. The alternative to code is not solipsism, but a rock, and even a rock can be viewed as being hard-coded as a rock. But we would prefer that the code be elegant and make sense, rather than using a local patch to fix specific problems as they come to mind, because the latter approach is guaranteed to fail if the AI becomes more powerful than you and refuses to be patched.

Andrew: You're saying that your... (read more)

bw

Well, are you going to give us your answer?

Laura

To solve this problem, the AI would need to calculate the probability of the claim being true, for which it would need to calculate the probability of 3^^^^3 people even existing. Given what it knows about the origins and rate of reproduction of humans, wouldn't the probability of 3^^^^3 people even existing be approximately 1/3^^^^3? It's as you said, multiply or divide it by the number of characters in the Bible, it's still nearly the same damned incomprehensibly large number. Unless you are willing to argue that there are some bizarre properties of t... (read more)

Laura

Here's one for you: Let's assume for argument's sake that "humans" could include human consciousnesses, not just breathing humans. Then, if a universe with 3^^^^3 "humans" actually existed, what would be the odds that they were NOT all copies of the same parasitic consciousness?

Pascal's wager type arguments fail due to their symmetry (which is preserved in finite cases).

ChrisA

Eliezer Sorry to say (because it makes me sound callous), but if someone can and is willing to create and then destroy 3^^^3 people for less than $5, then there is no value in life, and definitely no moral structure to the universe. The creation and destruction of 3^^^3 people (or more) is probably happening all the time. Therefore the AI is safe declining the wager on purely selfish grounds.

6pnrjulius
So, if there is someone out there committing grievous holocausts (if we use realistic numbers like "10 million deaths", "20 billion deaths", the probability of this is near 1), then none of us have any moral obligations ever?
0kokotajlod
I guess so. It's an interesting idea--kind of like social cooperation problems like recycling; if too many other people are not doing it, then there isn't much point in doing it yourself. Applying it to morality is interesting. But wrong, I think.

Eliezer, I'd like to take a stab at the internal criterion question. One difference between me and the program you describe is that I have a hoped-for future. Say "I'd like to play golf on Wednesday." Now, I could calculate the odds of Wednesday not actually arriving (nuclear war, asteroid impact...), or me not being alive to see it (sudden heart attack...), and I would get an answer greater than zero. Why don't I operate on those non-zero probabilities? (The other difference between me and the program you describe) I think it has to do with ... (read more)

IIRC, Peter de Blanc told me that any consistent utility function must have an upper bound (meaning that we must discount lives like Steve suggests). The problem disappears if your upper bound is low enough. Hopefully any realistic utility function has such a low upper bound, but it'd still be a good idea to solve the general problem.

Nick, please see my blog (just click on my name). I have a post about this.

"Let the differential be negative. Same problem. If the differential is not zero, the AI will exhibit unreasonable behavior. If the AI literally thinks in Solomonoff induction (as I have described), it won't want the differential to be zero, it will just compute it."

How can a computation arrive at a nonzero differential, starting with zero data? If I ask a rational AI to calculate the probability of me typing "QWERTYUIOP" saving 3^^^^3 human lives, it knows literally nothing about the causal interactions between me and those lives, because they are totally unobservable.

GeniusNZ, you have to consider not only all proposed gods, but all possible gods and reward/punishment structures. Since the number and range of conceivable divine rewards and punishments is infinite for each action, the incentives are all equally balanced, and thus give you no reason to prefer one action over another.

Ultimately, I think Tom McCabe is right -- the truth of a proposition depends in part on its meaningfulness.

What is the probability that the sun will rise tomorrow? Nearly 1, if you're thinking of dawns. Nearly 0, if you're thinking of Cop... (read more)

I generally share Tom McCabe's conclusion, that is, that they exactly cancel out because a symmetry has not been broken. The reversed hypothesis has the same complexity as the original hypothesis, and the same evidence supporting it. No differential entanglement. However, I think that this problem is worth attention because a) so many people who normally agree disagree here, and b) I suspect that the problem is related to normal utilitarianism with no discounting and an unbounded future. Of course, we already have some solutions in that case and we sho... (read more)

Benquo, replace "kill 3^^^^3 people" with "create 3^^^^3 disutility units" and the problem reappears.

Michael, do you really think the mugger's statement is zero evidence?

It seems to me that the cancellation is an artifact of the particular example, and that it would be easy to come up with an example in which the cancellation does not occur. For example, maybe you have previous experience with the mugger. He has mugged you before about minor things and sometimes you have paid him and sometimes not. In all cases he has been true to his word. This would seem to tip the probabilities at least slightly in favor of him being truthful about his current much larger threat.

0Strange7
Even in that case I would assign enormously higher probability to the hypothesis that my deadbeat pal has caught some sort of brain disease that results in compulsive lying, than that such a person has somehow acquired reality-breaking powers but still has nothing better to do than hit me up for spare change.
6rebellionkid
Enormously higher probability is not 1. This still doesn't mean the statement is zero evidence.
2dlthomas
I don't know - if he did actually have reality breaking powers, he would likely be tempted to put them to more effective use. If he would in fact be less likely to be making the statement were it true, then it is evidence against, not evidence for, the truth of his statement.

However clever your algorithm, at that level, something's bound to confuse it. Gimme FAI with checks and balances every time.

0pnrjulius
Is there a Godel sentence for human consciousness? (My favorite proposal so far is: "I cannot honestly assert this sentence.")
3orthonormal
It's definitely clever, but it's not quite what a Gödel sentence for us would be- it would seem to us to be an intractable statement about something else, and we'd be incapable of comprehending it as an indirect reference to our processes of understanding. So, in particular, a human being can't write the Gödel sentence for humans. Also, you've only been commenting for a few days- why not say hello on the welcome thread?

You could always just give up being a consequentialist and deontologically refuse to give in to the demands of anyone taking part in a Pascal mugging because consistently doing so would lead to the breakdown of society.

Re: "However clever your algorithm, at that level, something's bound to confuse it. Gimme FAI with checks and balances every time."

I agree that a mature Friendly Artificial Intelligence should defer to something like humanity's volition.

However, before it can figure out what humanity's volition is and how to accomplish it, an FAI first needs to:

  1. self-improve into trans-human intelligence while retaining humanity's core goals
  2. avoid UnFriendly Behavior (for example, murdering people to free up their resources) in the process of doing step (1)

If ... (read more)

Rolf: I agree with everything you just said, especially the bit about patches and hacks. I just wouldn't be happy having a FAI's sanity dependent on any single part of its design, no matter how perfect and elegant looking, or provably safe on paper, or demonstrably safe in our experiments.

However clever your algorithm, at that level, something's bound to confuse it.

Odd, I've been reading moral paradoxes for many years and my brain never crashed once, nor have I turned evil. I've been confused but never catastrophically so (though I have to admit my younger self came close). My algorithm must be "beyond clever".

That's a remarkable level of resilience for a brain design which is, speaking professionally, a damn ugly mess. If I can't aspire to do at least that well, I may as well hang up my shingle and move in with the ducks.

5Strange7
The modern human nervous system is the result of upwards of a hundred thousand years of brutal field-testing. The basic components, and even whole submodules, can be traced back even further. A certain amount of resiliency is to be expected. If you want to start from scratch and aspire to the same or higher standards of performance, it might be sensible to be prepared to invest the same amount of time and capital that the BIG did. That you have not yet been crippled by a moral paradox or other standard rhetorical trick is comparable to saying that a server remains secure after a child spent an afternoon poking around with it and trying out lists of default passwords: a good sign, certainly, and a test many would fail, but not in itself proof of perfection.
3pnrjulius
Indeed, on a list of things we can expect evolved brains to be, ROBUST is very high on the list. ("rational" is actually rather hard to come by. To some degree, rationality improves fitness. But often its cost outweighs its benefit, hence the sea slug.)
0chaosmosis
Additionally, people throw away problems if they can't solve the answer or if getting the specifics of the answer are beyond their limits. A badly designed AI system wouldn't have that option and so would be paralyzed by calculation. I agree with the commenter above who said the best thing to stop anything like this from happening is an AI system with checks and balances which automatically throws out certain problems. In the abstract, that might conceivably be bad. In the real world it probably won't be. Probably isn't very inspiring or logically compelling but I think it's the best that we can do. Unless we design the first AI system with a complex goal system oriented around fixing itself that basically boils down to "do your best to find and solve any problems or contradictions within your system, ask for our help whenever you are unsure of an answer, then design a computer which can do the same task better than you, etc, then have the final computer begin the actual work of an AI". The thought comes from Douglas Adams' Hitchhiker books, I forget the names of the computers but it doesn't matter. To anyone who says it's impossible or unfeasible to implement something like this: note that having one biased computer attempt to correct its own biases and create a less biased computer is in all relevant ways equivalent to having one biased human attempt to correct its own biases and create a less biased computer.

Give me five dollars, or I will kill as many puppies as it takes to make you. And they'll go to hell. And there in that hell will be fire, brimstone, and rap with Engrish lyrics.

I think the problem is not Solomonoff inducton or Kolmogorov complexity or Bayesian rationality, whatever the difference is, but you. You don't want an AI to think like this because you don't want it to kill you. Meanwhile, to a true altruist, it would make perfect sense.

Not really confident. It's obvious that no society of selfish beings whose members think like this could function. But they'd still, absurdly, be happier on average.

0pnrjulius
Well, in that case, one possible response is for me to kill YOU (or report you to the police who will arrest you for threatening mass animal cruelty). But if you're really a super-intelligent being from beyond the simulation, then trying to kill you will inevitably fail and probably cause those 3^^^^3 people to suffer as a result. (The most plausible scenario in which a Pascal's Mugging occurs? Our simulation is being tested for its coherence in expected utility calculations. Fail the test and the simulation will be terminated.)
g

You don't need a bounded utility function to avoid this problem. It merely has to have the property that the utility of a given configuration of the world doesn't grow faster than the length of a minimal description of that configuration. (Where "minimal" is relative to whatever sort of bounded rationality you're using.)

It actually seems quite plausible to me that our intuitive utility-assignments satisfy something like this constraint (e.g., killing 3^^^^^3 puppies doesn't feel much worse than killing 3^^^^3 puppies), though that might not matter muc... (read more)
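
One crude way to operationalize that constraint is to cap the utility assigned to an outcome by the length of a compressed description of it. A sketch, using zlib as a rough stand-in for "minimal description length" (the scale factor is arbitrary):

    import zlib

    def utility_cap(description, utilons_per_bit=1.0):
        """Bound |utility| by the compressed length of the outcome's description."""
        compressed_bits = 8 * len(zlib.compress(description.encode()))
        return compressed_bits * utilons_per_bit

    # The mugger's scenario is compactly describable, so its utility is capped low
    # no matter how large the number written inside the description is.
    print(utility_cap("a matrix lord kills 3^^^^3 people unless paid five dollars"))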

Nick Tarleton, you say:

"Benquo, replace "kill 3^^^^3 people" with "create 3^^^^3 disutility units" and the problem reappears."

But what is a disutility unit? How can there be that many? How do you know that what he supposes to be a disutility unit isn't from your perspective a utility unit?

Any similarly outlandish claim is a challenge not merely to your beliefs, but to your mental vocabulary. It can't be evaluated for probability until it's evaluated for meaning.

Utility functions have to be bounded basically because genuine martingales screw up decision theory -- see the St. Petersburg Paradox for an example.

Economists, statisticians, and game theorists are typically happy to do so, because utility functions don't really exist -- they aren't uniquely determined from someone's preferences. For example, you can multiply any utility function by a constant, and get another utility function that produces exactly the same observable behavior.
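
The St. Petersburg game mentioned above is easy to simulate: the payoff doubles for every tail before the first head, so the expected value diverges, yet sample averages stay modest. A minimal sketch:

    import random

    def st_petersburg_payoff():
        """One play: flip until heads; the pot doubles for each preceding tail."""
        payoff = 2
        while random.random() < 0.5:   # tails: keep flipping
            payoff *= 2
        return payoff

    trials = 100_000
    print(sum(st_petersburg_payoff() for _ in range(trials)) / trials)
    # Typically a few tens, occasionally far larger -- yet an unbounded utility
    # of money implies no finite ticket price for this game is too high.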

-2pnrjulius
In the INDIVIDUAL case that is true. In the AGGREGATE case it's not.
0[anonymous]
I always wondered why people believe utility functions are U(x): R^n -> R^1 for some n. I'm no decision theorist, but I see no reason utilities can't function on the basis of a partial ordering rather than a totally ordered numerical function.
1Vaniver
The total ordering is really nice because it means we can move from the messy world of outcomes to the neat world of real numbers, whose values are probabilistically relevant. If we move from total ordering to partial ordering, then we are no longer able to make probabilistic judgments based only on the utilities. If you have some multidimensional utility function, and a way to determine your probabilistic preferences between any uncertain gamble between outcomes x and y and a certain outcome z, then I believe you should be able to find the real function that expresses those probabilistic preferences, and that's your unidimensional utility function. If you don't have that way to determine your preferences, then you'll be indecisive, which is not something we like to build in to our decision theories.

Tiiba, keep in mind that to an altruist with a bounded utility function, or with any other of Peter's caveats, it may not "make perfect sense" to hand over the five dollars. So the problem is solvable in a number of ways; the problem is to come up with a solution that (1) isn't a hack and (2) doesn't create more problems than it solves.

Anyway, like most people, I'm not a complete utilitarian altruist, even at a philosophical level. Example: if an AI complained that you take up too much space and are mopey, and offered to kill you and replace you... (read more)

-1pnrjulius
Though, if the AI is a true utilitarian, why must it kill you in order to make the midgets? Aren't there plenty of asteroids that can be nanofabricated into midgets instead?
9pnrjulius
Candidate for weirdest sentence ever uttered: "Aren't there plenty of asteroids that can be nanofabricated into midgets instead?"

That's a remarkable level of resilience for a brain design which is, speaking professionally, a damn ugly mess.

...with vital functions inherited from reptiles. But it's been tested to death through history, serious failures thrown out at each step, and we've lots of practical experience and knowledge about how and why it fails. It wasn't built and run first go with zero unrecoverable errors.

I'm not advocating using evolutionary algorithms or to model from the human brain like Ray Kurzweil. I just mean I'd allow for unexpected breakdowns in any part of the ... (read more)

I think that if you consider that the chance of a threat to cause a given amount of disutility being valid is a function of the amount of disutility then the problem mostly goes away. That is, in my experience any threat to cause me X units of disutility where X is beyond some threshold is less than 1/10 as credible as a threat to cause me 1 unit of disutility. If someone threatened to kill another person unless I gave them $5000 I would be worried. If they threatened to kill 10 people I would be very slightly less worried. If they threatened to kill ... (read more)

"Odd, I've been reading moral paradoxes for many years and my brain never crashed once, nor have I turned evil."

Even if it hasn't happened to you, it's quite common- think about how many people under Stalin had their brains programmed to murder and torture. Looking back and seeing how your brain could have crashed is scary, because it isn't particularly improbable; it almost happened to me, more than once.

g: killing 3^^^^^3 puppies doesn't feel much worse than killing 3^^^^3 puppies

...

..........................

I hereby award G the All-Time Grand Bull Moose Prize for Non-Extensional Reasoning and Scope Insensitivity.

Clough: On the contrary, I think it is not only that weak but actually far weaker. If you are willing to consider the existence of things like 3^^^3 units of disutility without considering the existence of chances like 1/4^^^4 then I believe that is the problem that is causing you so much trouble.

I'm certainly willing to consider the existence o... (read more)

If you believe in the many worlds interpretation of quantum mechanics, you have to discount the utility of each of your future selves by his measure, instead of treating them all equally. The obvious generalization of this idea is for the altruist to discount the utility he assigns to other people by their measures, instead of treating them all equally.

But instead of using the QM measure (which doesn't make sense "outside the Matrix"), let the measure of each person be inversely related to his algorithmic complexity (his personal algorithmic comp... (read more)
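
One crude way to cash out "measure inversely related to algorithmic complexity" is to weight person number i by two to the minus the length of a short prefix-free code for i; by Kraft's inequality the total measure of any crowd is then bounded, however many people the mugger names. A sketch, with an Elias-gamma-style code length standing in for algorithmic complexity:

    def code_length_bits(i):
        """Elias-gamma-style prefix code length for the positive integer i."""
        return 2 * (i.bit_length() - 1) + 1

    def measure(i):
        return 2.0 ** -code_length_bits(i)

    print(sum(measure(i) for i in range(1, 10 ** 6)))   # stays below 1 (Kraft's inequality)
    # So the measure-weighted disutility of harming N people is bounded no matter
    # how large N is made, which is what defuses the 3^^^^3 in the threat.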

Wei, would it be correct to say that, under your interpretation, if our universe initially contains 100 super happy people, that creating one more person who is "very happy" but not "super happy" is a net negative, because the "measure" of all the 100 super happy people gets slightly discounted by this new person?

It's hard to see why I would consider this the right thing to do - where does this mysterious "measure" come from?

Eliezer, do you think it would be suitable for a blog post here?

Mm... sure. "Bias against uncomputability."

0pnrjulius
That's a much more general problem, the problem of whether to use sums or averages in utility calculations with changing population size.

"Would any commenters care to mug Tiiba? I can't quite bring myself to do it, but it needs doing."

If you don't donate $5 to SIAI, some random guy in China will die of a heart attack because we couldn't build FAI fast enough. Please donate today.

5SecondWind
That's not a proper mugging. "If you don't donate $5 to SIAI, the entire multiverse will be paperclip'd because we couldn't build FAI before uFAI took over."

Eli,

I agree that G's reasoning is an example of scope insensitivity. I suspect you meant this as a criticism. It seems undeniable that scope insensitivity leads to some irrational attitudes (e.g. when a person who would be horrified at killing one human shrugs at wiping out humanity). However, it doesn't seem obvious that scope insensitivity is pure fallacy. Mike Vassar's suggestion that "we should consider any number of identical lives to have the same utility as one life" seems plausible. An extreme example is, what if the universe were periodi... (read more)

Vann McGee has proven that if you have an agent with an unbounded utility function and who thinks there are infinitely many possible states of the world (ie, assigns them probability greater than 0), then you can construct a Dutch book against that agent. Next, observe that anyone who wants to use Solomonoff induction as a guide has committed to infinitely many possible states of the world. So if you also want to admit unbounded utility functions, you have to accept rational agents who will buy a Dutch book.

And if you do that, then the subjectivist justifi... (read more)

G,

I was essentially agreeing with you that killing 3^^^^^3 vs 3^^^^3 puppies may not be ethically distinct. I would call this scope insensitivity. My suggestion was that scope insensitivity is not necessarily always unjustified.

Eliezer, creating another person in addition to 100 super happy people does not reduce the measures of those 100 super happy people. For example, suppose those 100 super happy people are living in a classical universe computed by some TM. The minimal information needed to locate each person in this universe is just his time/space coordinate. Creating another person does not cause an increase in that information for the existing people.

Is the value of my existence steadily shrinking as the universe expands and it requires more information to locate me in space?

If I make a large uniquely structured arrow pointing at myself from orbit so that a very simple Turing machine can scan the universe and locate me, does the value of my existence go up?

I am skeptical that this solution makes moral sense, however convenient it might be as a patch to this particular problem.

0Strange7
Yes. Doing something like that proves you're clever enough to come up with a plan for something that's unique in all the universe, and then marshal the resources to make it happen. That's worth something.
3atorm
No. He is either clever enough or not. Proving it doesn't change his value.
1Vulture
(I originally had a much longer comment, but it was lost in some sort of website glitch. This is the Reader's Digest version) I think algorithmic complexity does, to a certain degree, usefully represent what we value about human life: uniqueness of experience, depth of character, whatever you want to call it. For myself, at least, I would feel fewer qualms about Matrix-generating 100 atom-identical Smiths and then destroying them, than I would generating 100 individual, diverse people who each had different personalities, dreams, judgements, and feelings. It even captures the basic reason, I think, behind scope insensitivity; namely, that we see the number on paper as just a faceless mob of many, many, identical people, so we have no emotional investment in them as a group. On the other hand, I had a bad feeling when I read this solution, which I still have now. Namely, it solves the dilemma, but not at the point where it's problematic; we can immediately tell that there's something wrong with handing over five bucks when we read about it, and it has little to do with the individual uniqueness of the people in question. After all, who should you push from the path of an oncoming train: Jaccqkew'Zaa'KK, The Uniquely Damaged Sociopath (And Part-Time Rapist), or a hard-working, middle-aged, balding office worker named Fred Jones?
0atorm
Are you replying to the correct comment? If so, I don't understand what you mean, but I'm pretty sure Jaccqkew'Zaa'KK goes under the train. Which is a tragedy if he just has cruel friends who give terrible nicknames.
0Vulture
I'm replying to Atorm's disputation of Strange7's response to Eliezer's response to Wei Dai's idea about using algorithmic complexity as a moral principle as a solution to the Pascal's Mugging dilemma. If I got that chain wrong and I'm responding to some completely different discussion, then I apologize for confusing everyone and it would be nice if you could point me to the thread I'm looking for. :) (And yes, Jaccqkew'Zaa'KK goes under the train, and he really is a sociopathic rapist; I was using that thought experiment as an example of a situation where the algorithmic complexity rule doesn't work)
0atorm
Regarding your second paragraph: which solution are you referring to? I see no mention of five bucks anywhere in this conversation.
0Vulture
Sorry if I was unclear, since I was jumping around a bit; five bucks is the cash demanded by the "mugger" in the original post.
g

Stephen, you can't have been agreeing with me about that since I didn't say it, even though for some reason I don't understand (perhaps I was very unclear, but I don't see how) Eliezer chose to interpret me doing so and indeed going further to say that it isn't ethically distinct.

Random question:

The number of possible Turing machines is countable. Given a function that maps the natural numbers onto the set of possible Turing machines, one can construct a Turing machine that acts like this:

If machine #1 has not halted, simulate the execution of one instruction of machine #1

If machine #2 has not halted, simulate the execution of one instruction of machine #2
If machine #1 has not halted, simulate the execution of one instruction of machine #1

If machine #3 has not halted, simulate the execution of one instruction of machine #3
If mach... (read more)
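
The interleaving being described is the standard dovetailing construction. A minimal sketch, modeling each "machine" as a Python generator that yields once per simulated instruction (names are illustrative):

    import itertools

    def dovetail(machines):
        """Add one machine per round and give every active machine one step per
        round, so any machine that halts eventually gets enough steps to do so."""
        active = []
        for new_machine in machines:
            active.append(new_machine)
            for m in list(reversed(active)):   # round k: machines #k, #k-1, ..., #1
                try:
                    next(m)                    # simulate one instruction
                except StopIteration:
                    active.remove(m)           # this machine has halted

    def counter(n):
        """Stand-in machine that runs for n steps and halts."""
        for _ in range(n):
            yield

    # dovetail(counter(n) for n in itertools.count(1)) would run forever,
    # eventually completing every counter along the way.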

0pnrjulius
I've long felt that simulations are NOT the same as actual realities, though I can't precisely articulate the difference.
0wedrifid
One of them has some form of computational device on the outside. One of them doesn't. Does there need to be more difference than that? ie. If you want to treat them differently and if some sort of physical distinction between the two is possible then by all means consider them different based on that difference.
3bzealley
The answer seems fairly simple under modal realism (roughly, the thesis that all logically possible worlds exist in the same sense as mathematical facts exist, and thus that the term "actual" in "our actual world" is just an indexical). If the simulation accurately follows a possible world, and contains a unit of (dis)utility, it doesn't generate that unit of (dis)utility, it just "discovers" it; it proves that for a given world-state an event happens to which your utility function assigns a particular value. Repeating the simulation is only rediscovering the same fact, not in any sense creating copies of it.

As others have basically said:

Isn't the point essentially that we believe the man's statement is uncorrelated with any moral facts? I mean, if we did believe it, then it's pretty clear we could be morally forced into doing something.

Is it reasonable to believe the statement is uncorrelated with any facts about the existence of many lives? It seems so, since we have no substantial experience with "Matrices", people from outside the simulation visiting us, 3^^^^^^3, the simulation of moral persons, etc...

Consider, the statement 'there is a woman being raped aro... (read more)

Eliezer, you can interpret rocks as minds if you make the interpretation complex enough. Why do you ignore these rock-minds if not because you discount them for algorithmic complexity?

First, questions like "if the agent expects that I wouldn't be able to verify the extreme disutility, would its utility function be such that it would actually go through with spending the resources to cause the unverifiable disutility?"

That an entity with such a utility function would even exist and manage to stick around long enough in the first place may itself drop the probabilities by a whole lot.

Perhaps best to restrict ourselves to the case of the disutility being verifiable, but only after the fact. (Has this agent ever pulled this sort of thing before? etc..... (read more)

Eliezer> Is the value of my existence steadily shrinking as the universe expands and it requires more information to locate me in space?

Yes, but the value of everyone else's existence is shrinking by the same factor, so it doesn't disturb the preference ordering among possible courses of actions, as far as I can see.
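A tiny illustration of that point (action names and numbers are made up): scaling every outcome's value by one common positive factor leaves the ranking of actions untouched.

```python
# Hypothetical utilities for three candidate actions.
values = {"action_A": 5.0, "action_B": 3.0, "action_C": 9.0}
c = 1e-6  # the shared "cost of locating anyone in an expanding universe" discount
scaled = {a: c * v for a, v in values.items()}
assert max(values, key=values.get) == max(scaled, key=scaled.get)  # same best action either way
```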

Eliezer> If I make a large uniquely structured arrow pointing at myself from orbit so that a very simple Turing machine can scan the universe and locate me, does the value of my existence go up?

This is a more serious problem for my propos... (read more)

I'll respond to a couple of other points I skipped over earlier.

Eliezer> It's hard to see why I would consider this the right thing to do - where does this mysterious "measure" come from?

Suppose you plan to measure the polarization of a photon at some future time and thereby split the universe into two branches of unequal weight. You do not treat people in these two branches as equals, but instead value the people in the higher-weight branch more, right? Can you answer why you consider that to be the right thing to do? That's not a rhetorical ... (read more)

Maybe the origin of the paradox is that we are extending the principle of maximizing expected return beyond its domain of applicability. Unlike Bayes' formula, which is an unassailable theorem, the principle of maximizing expected return is perhaps just a model of rational desire. As such it could be wrong. When dealing with reasonably high probabilities, the model seems intuitively right. With small probabilities it seems to be just an abstraction, and there is not much intuition to compare it to. When considering a game with positive expected return that ... (read more)

Wei: You do not treat people in these two branches as equals, but instead value the people in the higher-weight branch more, right? Can you answer why you consider that to be the right thing to do?

Robin Hanson's guess about mangled worlds seems very elegant to me, since it means that I can run a (large) computer with conventional quantum mechanics programmed into it, no magic in its transistors, and the resulting simulation will contain sentient beings who experience the same probabilities we do.

Even so, I'd have to confess myself confused about why I find myself in a simple universe rather than a noisy one.

0pnrjulius
How come we keep talking about mangled worlds and multiverses... when the Bohm interpretation actually derives the Born probabilities as a stable equilibrium of the quantum potential? In one theory, we have this mysterious thing that no one is sure how to solve... and in the other theory, we have a solution right in front of us. Also, Bohmian mechanics, while nonlocal, does not require us to believe in mysterious inaccessible universes where our measurements turned out differently.
CC-30

Not all infinities are equal; there is a hierarchy. Look at the real numbers versus the integers.

kthxbye

g00

Stephen, no problem. Incidentally, I share your doubt about the optimality of optimizing expected utility (though I wonder whether there might be a theorem that says anything coherent can be squeezed into that form).

CC, indeed there are many infinities (not merely infinitely many, not merely more than we can imagine, but more than we can describe), but so what? Any sort of infinite utility, coupled with a nonzero finite probability, leads to the sort of difficulty being contemplated here. Higher infinities neither help with this nor make it worse, so far a... (read more)

I have a paper which explores the problem in a somewhat more general way (but see especially section 6.3).

Infinite Ethics: http://www.nickbostrom.com/ethics/infinite.pdf

People have been talking about assuming that states with many people hurt have a low (prior) probability. It might be more promising to assume that states with many people hurt have a low correlation with what any random person claims to be able to effect.

Eliezer, I think Robin's guess about mangled worlds is interesting, but irrelevant to this problem. I'd guess that for you, P(mangled worlds is correct) is much smaller than P(it's right that I care about people in proportion to the weight of the branches they are in). So Robin's idea can't explain why you think that is the right thing to do.

Nick, your paper doesn't seem to mention the possibility of discounting people by their algorithmic complexity. Is that an option you considered?

Pascal's wager type arguments fail due to their symmetry (which is preserved in finite cases).

Even if our priors are symmetric for equally complex religious hypotheses, our posteriors almost certainly won't be. There's too much evidence in the world, and too many strong claims about these matters, for me to imagine that posteriors would come out even. Besides, even if two religions are equally probable, there may certainly be non-epistemic reasons to prefer one over the other.

However, if after chugging through the math, it didn't balance out and still t... (read more)

Even if there is nobody currently making a bignum-level threat, maybe the utility-maximizing thing to do is to devote substantial resources to search for low-probability, high-impact events and stop or encourage them depending on the utility effect. After all, you can't say the probability of every possibility as bad as killing 3^^^^3 people is zero.

Nick Tarleton,
Yes, it is probably correct that one should devote substantial resources to low-probability events, but what are the odds that the universe is not only a simulation, but that the containing world is much bigger? And, if so, does the universe just not count, because it's so small? The bounded utility function probably reaches the opposite conclusion, that only this universe counts, and maybe we should keep our ambitions limited, out of fear of attracting attention.

"I find myself in a simple world rather than a noisy one."
Care to expand on that?

Robin: Great point about states with many people having low correlations with what one random person can effect. This is fairly trivially provable.

Utilitarian: Equal priors due to complexity, equal posteriors due to lack of entanglement between claims and facts.

Wei Dai, Eliezer, Stephen, g: This is a great thread, but it's getting very long, so it seems likely to be lost to posterity in practice. Why don't the three of you read the paper Neel Krishnaswami referenced, have a chat, and post it on the blog, possibly edited, as a main post?

"The p... (read more)

It might be more promising to assume that states with many people hurt have a low correlation with what any random person claims to be able to effect.

Robin: Great point about states with many people having low correlations with what one random person can effect. This is fairly trivially provable.

Aha!

For some reason, that didn't click in my mind when Robin said it, but it clicked when Vassar said it. Maybe it was because Robin specified "many people hurt" rather than "many people", or because Vassar's part about being "provable" caused me to actually look for a reason. When I read Robin's statement, it came through as just "Arbitrarily penalize probabilities for a lot of people getting hurt."

But, yes, if you've got 3^^^^3 people running around they can't all have sole control over each other's existence. So in a scenario where lots and lots of people exist, one has to penalize by a proportional factor the probability that any one person's binary decision can solely control the whole bunch.
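As a toy check of that proportional penalty (all numbers are stand-ins; 3^^^^3 itself obviously can't be represented here):

```python
N = 10**6                   # stand-in for the 3^^^^3 people in the mugger's story
p_story = 1e-20             # prior that the mugger's scenario is true at all
p_sole_control = 1.0 / N    # at most one person in N can solely control all the rest
lives_at_stake = N

expected_lives = p_story * p_sole_control * lives_at_stake
print(expected_lives)       # equals p_story: the factor of N cancels out exactly
```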

Even if the Matrix-claimant says that the 3^^^^3 minds created will be unlike you, with information that tells them they're powerless, if you're in a generalized scenario where anyone has and uses that kind of power, the vast majority of mind-instantiations are in leaves rather than roots.

This seems to me to go right to the root of the problem, not a full-fledged formal answer but it feels right as a starting point. Any objections?

-1taryneast
This seems intuitively plausible. The more outrageous the claim, the correspondingly less plausible is their ability to pull it off. Especially when you weigh the resources they are demanding against the resources you would expect their implausibly difficult plan to require.
3Will_Sawin
That's not the point. None of those probabilities are as strong as 3^^^3. Maybe big, but not THAT big. The point is that no more than 1/3^^^3 people have sole control over the life or death of 3^^3 people. This improbability, that you would be one of those very special people, IS big enough. (This answer fails unless your ethics and anthropics use the same measure. That's how the pig example works.)
1wedrifid
I was about to express mild amusement about how cavalier we are with jumping to, from and between numbers like 3^^^^3 and 3^^^3. I had to squint to tell the difference. Then it occurred to me that 3^^3 is not even unimaginably big, Knuth arrows or no. It's about 7.6 trillion, only about a thousand times the number of people alive today.
2Will_Sawin
Being cavalier with proofreading =/= being cavalier with number size. But that is indeed amusing.
0wedrifid
Well, I didn't want to declare a proofreading error because 3^^^3 does technically fit correctly in the context, even if you may not have meant it. ;) I was thinking the fact that we are so cavalier makes it easier to slip between them if not paying close attention. Especially since 3^^^3 is more commonly used than 3^^^^3. I don't actually recall Eliezer going beyond pentation elsewhere. I know if I go that high I tend to use 4^^^^4. It appeals more aesthetically and is more clearly distinct. Mind you it isn't nearly as neat as 3^^^3 given that 3^^^3 can also be written and visualized conceptually as 3 -> 3 -> 3 while 4^^^^4 is just 4 -> 4 -> 4 not 4 -> 4 -> 4 -> 4.
0taryneast
So you're saying that the implausibility is that I'd run into a person that just happened to have that level of "power"? Is that different in kind to what I was saying? If I find it implausible that the person I'm speaking to can actually do what they're claiming, is that not the same as it being implausible that I happen to have met a person who can do what this person is claiming? (Leaving aside the resource question, which is probably just my rationalisation as to why I think he couldn't pull it off.) Basically I'm trying to taboo the actual BigNum... and trying to fit the concepts around in my head.
0Will_Sawin
It's implausible that you're the person with that power. We could easily imagine a world in which everyone runs into a single absurdly powerful person. We could not imagine a world in which everyone was absurdly powerful (in their ability to control other people), because then multiple people would have control over the same thing. If you knew that he had the power, but that his action wasn't going to depend on yours, then you wouldn't give him the money. So you're only concerned with the situation where you have the power.
1taryneast
Ok, sure thing. I get what you're saying. I managed to encompass that implausibility also into the arguments I made in my restatement anyway, but yeah, I agree that these are different kinds of "unlikely thing"
-1taryneast
In fact... let me restate what I think I was trying to say. The mugger is making an extraordinary claim, one for which he has provided no evidence. The amount of evidence required to make me believe that his claim is possible grows in proportion to the size of his claim. Think about it at the lower levels of potential claims. 1) If he claimed to be able to kill one person - I'd believe that he was capable of killing one person. I'd then weigh that against the likelihood that he'd pick me to blackmail, and the low blackmail amount that he'd picked... and consider it more likely that he's lying to make a fast buck, than that he actually has a hostage somewhere ready to kill. 2) If he claimed to be able to kill 3^3 people, I'd consider it plausible... with a greatly diminished likelihood. I'd have to weigh the evidence that he was a small-time terrorist, willing to take the strong risk of being caught while preparing to blow up a building's worth of people... or to value his life so low as to actually do it and die in the process. It's not very high, but we've all seen people like this in our lifetime both exist and carry out this threat. So it's "plausible but extremely unlikely". The likelihoods that a) I've happened to run into one of these rare people, and b) he'd pick me (pretty much a nobody) to blackmail, combine to be extremely unlikely... and I'd reckon that those two, balanced against the much higher prior likelihood that he's just a con-artist, would fairly well cancel out against the actual value of a building's worth of people. Especially when you consider that the resources to do this would far outweigh the money he's asked for. As far as I know about people willing to kill large numbers of people - most of them do it for a reason, and that reason is almost never a paltry amount of cash. It's still possible... after all, the school-killers have done crazy stunts to kill people for a tiny reason... but usually there's fame or revenge involved...
-1Will_Sawin
The sequence 3+3, 3*3, 3^3, 3^^3, 3^^^3, etc. grows much faster than exponentially. a^b, for any halfway reasonable a and b, can't touch 3^^^3. 3^^^3 = 3^^(3^^3) = 3^^(7625597484987) = 3^(3^^(7625597484986)). It's not an exponential; it's a huge, huge tower of exponentials. It is simply too big for that argument to work.
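For concreteness, here is a short recursive definition of the up-arrow hierarchy; only the small cases are feasible to evaluate, which is exactly the point.

```python
def up_arrow(a, n, b):
    """Knuth's up-arrow a ^...^ b with n arrows; n = 1 is ordinary exponentiation."""
    if n == 1:
        return a ** b
    if b == 0:
        return 1
    return up_arrow(a, n - 1, up_arrow(a, n, b - 1))

print(up_arrow(3, 1, 3))  # 3^3  = 27
print(up_arrow(3, 2, 3))  # 3^^3 = 3^27 = 7625597484987
# up_arrow(3, 3, 3), i.e. 3^^^3, is a tower of 7,625,597,484,987 threes and
# cannot be evaluated here.
```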
-1taryneast
Yes, I should not have used the word exponential... but I don't know the word for "grows at a rate that is a tower of exponentials"... "hyperexponential" perhaps? However, I consider that my argument still holds: that the evidence required grows at the same rate as the size of the claim. The evidence must be of equal value to the claim (from "extraordinary claims require extraordinary evidence"). My point in explaining the lower levels is that we don't demand evidence from most claimants of small amounts of damage because we've already seen evidence that these threats are plausible. But if we start getting to the "hyperexponential" threats, we hit a point where we suddenly realise that there is no evidence supporting the plausibility of the claim... so we automatically assume that the person is a crank.
-1Vaniver
3^^3 is a thousand times larger than the number of people currently alive.
-1taryneast
Oops, yes, I mixed up 3^^3 with 3^^^3. OK, so skip step 3 and move straight on to 4 ;)
0steven0461
So can we solve the problem by putting some sort of upper bound on the degree to which ethics and anthropics can differ, along the lines of "creation of 3^^^^3 people is at most N times less probable than creation of 3^^^^3 pigs, so across the ensemble of possible worlds the prior against your being in a position to influence that many pigs still cuts down the expected utility from something vaguely like 3^^^^3 to something vaguely like N"?
2Ben123
Is that a general solution? What about this: "Give me five dollars or I will perform an action, the disutility of which will be equal to twice that of you giving me five dollars, multiplied by the reciprocal of the probability of this statement being true."
1A1987dM
Well, I'd rather lose twenty dollars than be kicked in the groin very hard, and the probability of your succeeding at that, given that you're close enough to me and trying to do so, is greater than 1/2, so...
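A sketch of why Ben123's meta-threat above is calibrated to dominate a naive expected-value calculation no matter what probability you assign it (p and d5 are illustrative placeholders):

```python
p = 1e-12                 # your probability that the statement is true
d5 = 1.0                  # disutility of handing over five dollars
threatened = 2 * d5 / p   # the disutility the statement promises
print(p * threatened)     # expected disutility = 2 * d5: always twice the cost of paying
```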
0MugaSofer
But anthropically, since you exist within the matrix, and so does he, and hostages outside the matrix cannot reach you to make such an offer ... You don't have 3^^^^3 people "running around". You have the population of earth running around, plus one matrix lord. More to the point, if you create 3^^^^3 people, surely a LOT of them are going to be identical, purely by coincidence? Aren't you double-counting most of them?

Robin's anthropic argument seems pretty compelling in this example, now that I understand it. It seems a little less clear if the Matrix-claimant tried to mug you with a threat not involving many minds. For example, maybe he could claim that there exists some giant mind, the killing of which would be as ethically significant as the killing of 3^^^^3 individual human minds? Maybe in that case you would anthropically expect with overwhelmingly high probability to be a figment inside the giant mind.

I think that Robin's point solves this problem, but doesn't solve the more general problem of an AGI's reaction to low probability high utility possibilities and the attendant problems of non-convergence.
The guy with the button could threaten to make an extra-planar factory farm containing 3^^^^^3 pigs instead of killing 3^^^^3 humans. If utilities are additive, that would be worse.

The guy with the button could threaten to make an extra-planar factory farm containing 3^^^^^3 pigs instead of killing 3^^^^3 humans. If utilities are additive, that would be worse.

Congratulations, you made my brain asplode.

0MugaSofer
Once again, vegetarians win at morality.

3^^^^^^3 copies of that brain, fates all dependent on the original pondering this thread.

All fates equal, I think their incentive to solve the mystery equals that for one alone.

Eliezer, what if the mugger (Matrix-claimant) also says that he is the only person who has that kind of power, and he knows there is just one copy of you in the whole universe? Is the probability of that being true less than 1/3^^^^3?

Don't dollars have an infinite expected value (in human lives or utility) anyway, especially if you take into account weird low-probability scenarios? Maybe the next mugger will make even bigger threats.

2taryneast
next mugger? There's a distinctly high probability that this mugger will return with higher blackmail demands.

Even if the Matrix-claimant says that the 3^^^^3 minds created will be unlike you, with information that tells them they're powerless, if you're in a generalized scenario where anyone has and uses that kind of power, the vast majority of mind-instantiations are in leaves rather than roots.

You would have to abandon Solomonoff Induction (or modify it to account for these anthropic concerns) to make this work. Solomonoff Induction doesn't let you consider just "generalized scenarios"; you have to calculate each one in turn, and eventually one of the... (read more)

Michael, your pig example threw me into a great fit of belly-laughing. I guess that's what my mind looks like when it explodes. And I recall that was Marvin Minsky's prediction in The Society of Mind.

You would have to abandon Solomonoff Induction (or modify it to account for these anthropic concerns) to make this work.

To be more specific, you would have to alter it in such a way that it accepted Brandon Carter's Doomsday Argument.

"Congratulations, you made my brain asplode."

Read http://www.spaceandgames.com/?p=22 if you haven't already. Your utility function should not be assigning things arbitrarily large additive utilities, or else you get precisely this problem (if pigs qualify as minds, use rocks), and your function will sum to infinity. If you "kill" by destroying the exact same information content over and over, it doesn't seem to be as bad, or even bad at all. If I made a million identical copies of you, froze them into complete stasis, and then shot 999,... (read more)
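One hedged way to picture the "no arbitrarily large additive utilities" point is a saturating aggregate; the functional form and the scale k below are arbitrary illustrations, not anything proposed in this thread.

```python
import math

def bounded_utility(n_lives, k=1e9):
    """Utility that increases with n_lives but never exceeds 1."""
    return 1 - math.exp(-n_lives / k)

print(bounded_utility(1))        # roughly 1e-9
print(bounded_utility(10**30))   # essentially 1.0: ever-larger threats barely move it
```

With any such bound, multiplying the threat by another stack of up-arrows no longer buys the mugger unbounded leverage.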

Wei, no I don't think I considered the possibility of discounting people by their algorithmic complexity.

I can see that in the context of Everett it seems plausible to weigh each observer with a measure proportional to the amplitude squared of the branch of the wave function on which he is living. Moreover, it seems right to use this measure both to calculate the anthropic probability of me finding myself as that observer and the moral importance of that observer's well-being.

Assigning anthropic probabilities over infinite domains is problematic. I don't... (read more)

It seems like this may be another facet of the problem with our models of expected utility in dealing with very large numbers. For instance, do you accept the Repugnant conclusion?

I'm at a loss for how to model expected utility in a way that doesn't generate the repugnant conclusion, but my suspicion is that if someone finds it, this problem may go away as well.

Or not. It seems that our various heuristics and biases against having correct intuitions about very large and small numbers are directly tied up in producing a limiting framework that acts as a... (read more)