The most common formalizations of Occam's Razor, Solomonoff induction and Minimum Description Length, measure the program size of a computation used in a hypothesis, but don't measure the running time or space requirements of the computation.  What if this makes a mind vulnerable to finite forms of Pascal's Wager?  A compactly specified wager can grow in size much faster than it grows in complexity.  The utility of a Turing machine can grow much faster than its prior probability shrinks.

Consider Knuth's up-arrow notation:

  • 3^3 = 3*3*3 = 27
  • 3^^3 = (3^(3^3)) = 3^27 = 3*3*3*3*3*3*3*3*3*3*3*3*3*3*3*3*3*3*3*3*3*3*3*3*3*3*3 = 7625597484987
  • 3^^^3 = (3^^(3^^3)) = 3^^7625597484987 = 3^(3^(3^(... 7625597484987 times ...)))

In other words:  3^^^3 describes an exponential tower of threes 7625597484987 layers tall.  Since this number can be computed by a simple Turing machine, it contains very little information and requires a very short message to describe.  This, even though writing out 3^^^3 in base 10 would require enormously more writing material than there are atoms in the known universe (a paltry 10^80).
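The point that a very small program can denote an astronomically large value is easy to make concrete.  Below is a minimal sketch of the up-arrow recursion (Python is used here purely for illustration); the function is a few lines long, and evaluating 3^^3 is instant, but evaluating 3^^^3 would never finish on any physical computer.

    def knuth_up(a, b, arrows):
        """Knuth's up-arrow operator: a ^...^ b with the given number of arrows."""
        if arrows == 1:
            return a ** b
        if b == 0:
            return 1
        # a ^..^ b  =  a ^..(one fewer arrow)..^ (a ^..^ (b - 1))
        return knuth_up(a, knuth_up(a, b - 1, arrows), arrows - 1)

    print(knuth_up(3, 3, 1))   # 3^3  = 27
    print(knuth_up(3, 3, 2))   # 3^^3 = 7625597484987
    # knuth_up(3, 3, 3) is 3^^^3: the program stays this short, but the value
    # could not be written out with all the matter in the observable universe.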

Now suppose someone comes to me and says, "Give me five dollars, or I'll use my magic powers from outside the Matrix to run a Turing machine that simulates and kills 3^^^^3 people."

Call this Pascal's Mugging.

"Magic powers from outside the Matrix" are easier said than done - we have to suppose that our world is a computing simulation run from within an environment that can afford simulation of arbitrarily large finite Turing machines, and that the would-be wizard has been spliced into our own Turing tape and is in continuing communication with an outside operator, etc.

Thus the Kolmogorov complexity of "magic powers from outside the Matrix" is larger than the mere English words would indicate.  Therefore the Solomonoff-inducted probability, two to the negative Kolmogorov complexity, is exponentially tinier than one might naively think.

But, small as this probability is, it isn't anywhere near as small as 3^^^^3 is large.  If you take a decimal point, followed by a number of zeros equal to the length of the Bible, followed by a 1, and multiply this unimaginably tiny fraction by 3^^^^3, the result is pretty much 3^^^^3.
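The arithmetic can be checked in log space.  In the sketch below (illustrative only), 3^^4 = 3^(3^27) stands in for 3^^^^3, since nothing remotely close to 3^^^^3 can be represented, and the Bible is taken to be roughly four million characters, an assumed figure used only for scale.

    import math

    # Stand-in: 3^^4 = 3^(3^27), already far too large to write out in full,
    # yet incomparably smaller than 3^^^^3.
    log10_huge = (3 ** 27) * math.log10(3)     # about 3.6e12 decimal digits

    # "A decimal point, a Bible's worth of zeros, then a 1":
    log10_tiny_fraction = -4_000_000

    print(log10_huge + log10_tiny_fraction)    # still about 3.6e12
    # The unimaginably tiny fraction removes a few million digits from a number
    # with trillions of digits; against 3^^^^3 the dent is smaller still.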

Most people, I think, envision an "infinite" God that is nowhere near as large as 3^^^^3.  "Infinity" is reassuringly featureless and blank.  "Eternal life in Heaven" is nowhere near as intimidating as the thought of spending 3^^^^3 years on one of those fluffy clouds.  The notion that the diversity of life on Earth springs from God's infinite creativity, sounds more plausible than the notion that life on Earth was created by a superintelligence 3^^^^3 bits large.  Similarly for envisioning an "infinite" God interested in whether women wear men's clothing, versus a superintelligence of 3^^^^3 bits, etc.

The original version of Pascal's Wager is easily dealt with by the gigantic multiplicity of possible gods, an Allah for every Christ and a Zeus for every Allah, including the "Professor God" who places only atheists in Heaven.   And since all the expected utilities here are allegedly "infinite", it's easy enough to argue that they cancel out.  Infinities, being featureless and blank, are all the same size.

But suppose I built an AI which worked by some bounded analogue of Solomonoff induction - an AI sufficiently Bayesian to insist on calculating complexities and assessing probabilities, rather than just waving them off as "large" or "small".

If the probabilities of various scenarios considered did not exactly cancel out, the AI's action in the case of Pascal's Mugging would be overwhelmingly dominated by whatever tiny differentials existed in the various tiny probabilities under which 3^^^^3 units of expected utility were actually at stake.
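As a toy illustration of that failure mode - every number below is made up, and a merely huge integer stands in for 3^^^^3, which could not be represented at all:

    from fractions import Fraction

    lives_at_stake = 10 ** 10_000          # stand-in for 3^^^^3
    value_of_earth = 10 ** 10              # ~ten billion lives, for comparison

    p_deaths_if_refuse = Fraction(1, 10 ** 1_000)                            # already absurdly small
    p_deaths_if_pay    = p_deaths_if_refuse * Fraction(999_999, 1_000_000)   # a sliver smaller

    # Expected lives saved by paying, i.e. the differential between the two actions:
    gain_from_paying = (p_deaths_if_refuse - p_deaths_if_pay) * lives_at_stake

    print(gain_from_paying > value_of_earth)   # True: the naive maximizer hands over the $5

The comparison is decided entirely by the residual differential on the huge number; nothing on the mainline of probability ever gets a vote.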

You or I would probably wave off the whole matter with a laugh, planning according to the dominant mainline probability:  Pascal's Mugger is just a philosopher out for a fast buck.

But a silicon chip does not look over the code fed to it, assess it for reasonableness, and correct it if not.  An AI is not given its code like a human servant given instructions.  An AI is its code.  What if a philosopher tries Pascal's Mugging on the AI for a joke, and the tiny probabilities of 3^^^^3 lives being at stake, override everything else in the AI's calculations?   What is the mere Earth at stake, compared to a tiny probability of 3^^^^3 lives?

How do I know to be worried by this line of reasoning?  How do I know to rationalize reasons a Bayesian shouldn't work that way?  A mind that worked strictly by Solomonoff induction would not know to rationalize reasons that Pascal's Mugging mattered less than Earth's existence.  It would simply go by whatever answer Solomonoff induction obtained.

It would seem, then, that I've implicitly declared my existence as a mind that does not work by the logic of Solomonoff, at least not the way I've described it.  What am I comparing Solomonoff's answer to, to determine whether Solomonoff induction got it "right" or "wrong"?

Why do I think it's unreasonable to focus my entire attention on the magic-bearing possible worlds, faced with a Pascal's Mugging?  Do I have an instinct to resist exploitation by arguments "anyone could make"?  Am I unsatisfied by any visualization in which the dominant mainline probability leads to a loss?  Do I drop sufficiently small probabilities from consideration entirely?  Would an AI that lacks these instincts be exploitable by Pascal's Mugging?

Is it me who's wrong?  Should I worry more about the possibility of some Unseen Magical Prankster of very tiny probability taking this post literally, than about the fate of the human species in the "mainline" probabilities?

It doesn't feel to me like 3^^^^3 lives are really at stake, even at very tiny probability.  I'd sooner question my grasp of "rationality" than give five dollars to a Pascal's Mugger because I thought it was "rational".

Should we penalize computations with large space and time requirements?  This is a hack that solves the problem, but is it true? Are computationally costly explanations less likely?  Should I think the universe is probably a coarse-grained simulation of my mind rather than real quantum physics, because a coarse-grained human mind is exponentially cheaper than real quantum physics?  Should I think the galaxies are tiny lights on a painted backdrop, because that Turing machine would require less space to compute?
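For concreteness, here is one way that hack is sometimes formalized - a sketch in the spirit of Levin's Kt complexity, which charges a hypothesis the logarithm of its running time on top of its program length.  It illustrates why the penalty defuses the mugging; whether such a penalty is epistemically justified is exactly what the questions above are asking.

    import math

    def log2_runtime_penalized_prior(program_length_bits, running_time_steps):
        """Log2 of an (unnormalized) prior of the form 2^-(|p| + log2 t).

        A Levin-style runtime penalty; plain Solomonoff induction would keep
        only the -program_length_bits term.
        """
        return -(program_length_bits + math.log2(running_time_steps))

    # Simulating N observers takes at least N steps, so the penalty term alone is
    # at least log2(N), and the prior falls at least as fast as 1/N.  With a
    # representable stand-in for the mugger's number:
    N = 10 ** 300
    log2_expected_utility = math.log2(N) + log2_runtime_penalized_prior(
        program_length_bits=1_000, running_time_steps=N)
    print(log2_expected_utility)   # about -1000: the product no longer explodes with N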

Given that, in general, a Turing machine can increase in utility vastly faster than it increases in complexity, how should an Occam-abiding mind avoid being dominated by tiny probabilities of vast utilities?

If I could formalize whichever internal criterion was telling me I didn't want this to happen, I might have an answer.

I talked over a variant of this problem with Nick Hay, Peter de Blanc, and Marcello Herreshoff in summer of 2006.  I don't feel I have a satisfactory resolution as yet, so I'm throwing it open to any analytic philosophers who might happen to read Overcoming Bias.

Comments

Why would not giving him $5 make it more likely that people would die, as opposed to less likely? The two would seem to cancel out. It's the same old "what if we are living in a simulation?" argument - it is, at least, possible that me hitting the sequence of letters "QWERTYUIOP" leads to a near-infinity of death and suffering in the "real world", due to AGI overlords with wacky programming. Yet I do not refrain from hitting those letters, because there's no entanglement which drives the probabilities in that direction as opposed to some other random direction; my actions do not alter the expected future state of the universe. You could just as easily wind up saving lives as killing people.

Because he said so, and people tend to be true to their word more often than dictated by chance.

Normal_Anomaly:
That observation applies to humans, who also tend not to kill large numbers of people for no payoff (that is, if you've already refused the money and walked away).
Will_Sawin:
That's a symmetric effect, though.
DanielLC:
Yes, but they're more likely to kill large numbers of people conditional on you not doing what they say than conditional on you doing what they say.
Strange7:
The mugger claims to not be a 'person' in the conventional sense, but rather an entity with outside-Matrix powers. If this statement is true, then generalized observations about the reference class of 'people' cannot necessarily be considered applicable. Conversely, if it is false, then this is not a randomly-selected person, but rather someone who has started off the conversation with an outrageous profit-motivated lie, and as such cannot be trusted.

They claim to not be a human. They're still a person, in the sense of a sapient being. As a larger class, you'd expect lower correlation, but it would still be above zero.

Strange7:
I am not convinced that, even among humans speaking to other humans, truth-telling can be assumed when there is such a blatantly obvious incentive to lie. I mean, say there actually is someone who can destroy vast but currently-unobservable populations with less effort than it would take them to earn $5 with conventional economic activity, and the ethical calculus works out such that you'd be better served to pay them $5 than let it happen. At that point, aren't they better served to exaggerate their destructive capacity by an order of magnitude or two, and ask you for $6? Or $10? Once the number the mugger quotes exceeds your ability to independently confirm, or even properly imagine, the number itself becomes irrelevant. It's either a display of incomprehensibly overwhelming force, to which you must submit utterly or be destroyed, or a bluff you should ignore.
DanielLC:
There is no blatantly obvious reason to want to torture the people only if you do give him money.

So, you're saying that the problem is that, if they really were going to kill 3^^^3 people, they'd lie? Why? 3^^^3 isn't just enough to get $5. It's enough that the expected seriousness of the threat is unimaginably large.

Look at it this way: If they're going to lie, there's no reason to exaggerate their destructive capacity by an order of magnitude when they can just make up a number. If they choose to make up a number, 3^^^3 is plenty high. As such, if it really is 3^^^3, they might as well just tell the truth. If there's any chance that they're not lying given that they really can kill 3^^^3 people, their threat is valid. It's one thing to be 99.9% sure they're lying, but here, a 1 - 1/sqrt(3^^^3) certainty that they're lying still gives more than enough doubt for an unimaginably large threat.

You're not psychic. You don't know which it is. In this case, the risk of the former is enough to overwhelm the larger probability of the latter.
Strange7:
Not the way I do the math.

Let's say you're a sociopath, that is, the only factors in your utility function are your own personal security and happiness. Two unrelated people approach you simultaneously, one carrying a homemade single-shot small-caliber pistol (a 'zip gun') and the other apparently unarmed. Both of them, separately, demand $10 in exchange for not killing you immediately. You've got a $20 bill in your wallet; the unarmed mugger, upon learning this, obligingly offers to make change. While he's thus distracted, you propose to the mugger with the zip gun that he shoot the unarmed mugger, and that the two of you then split the proceeds. The mugger with the zip gun refuses, explaining that the unarmed mugger claims to be close personal friends with a professional sniper, who is most likely observing this situation from a few hundred yards away through a telescopic sight and would retaliate against anyone who hurt her friend the mugger. The mugger with the zip gun has never actually met the sniper or directly observed her handiwork, but is sufficiently deterred by rumor alone.

If you don't pay the zip-gun mugger, you'll definitely get shot at, but only once, and with good chances of a miss or nonfatal injury. If you don't pay the unarmed mugger, and the sniper is real, you will almost certainly die before you can determine her position or get behind sufficiently hard cover. If you pay them both, you will have to walk home through a bad part of town at night instead of taking the quicker-and-safer bus, which apart from the inconvenience might result in you being mugged a third time.

How would you respond to that? I don't need to be psychic. I just do the math. Taking any sort of infinitesimally-unlikely threat so seriously that it dominates my decision-making means anyone can yank my chain just by making a few unfounded assertions involving big enough numbers, and then once word gets around, the world will no longer contain acceptable outcomes.
DanielLC:
In your example, only you die. In Pascal's mugging, it's unimaginably worse. Do you accept that, in the circumstance you gave, you are more likely to be shot by a sniper if you only pay one mugger? Not significantly more likely, but still more likely? If so, that's analogous to accepting that Pascal's mugger will be more likely to make good on his threat if you don't pay.
Strange7:
In my example, the person making the decision was specified to be a sociopath, for whom there is no conceivable worse outcome than the total loss of personal identity and agency associated with death. The two muggers are indifferent to each other's success. You could pay off the unarmed mugger to eliminate the risk of being sniped (by that particular mugger's friend, at least, if she exists; there may well be other snipers elsewhere in town with unrelated agendas, about whom you have even less information) and accept the risk of being shot with the zip gun, in order to afford the quicker, safer bus ride home. In that case you would only be paying one mugger, and still have the lowest possible sniper-related risk.

The three possible expenses were meant as metaphors for existential risk mitigation (imaginary sniper), infrastructure development (bus), and military/security development (zip gun), the latter two forming the classic guns-or-butter economic dilemma. Historically speaking, societies that put too much emphasis, too many resources, toward preventing low-probability high-impact disasters, such as divine wrath, ended up succumbing to comparatively banal things like famine, or pillaging by shorter-sighted neighbors. What use is a mathematical model of utility that would steer us into those same mistakes?
DanielLC:
Is your problem that we'd have to keep the five dollars in case of another mugger? I'd hardly consider the idea of steering our life around Pascal's mugging to be disagreeing with it. For what it's worth, if you look for hypothetical Pascal's muggings, expected utility doesn't converge and decision theory breaks down.

Let's say you're a sociopath, that is, the only factors in your utility function are your own personal security and happiness.

Can we use the less controversial term 'economist'?

Relenzo:
I think this answer contains something important - not so much an answer to the problem, but a clue to the reason WHY we intuitively, as humans, know to respond in a way which seems un-mathematical. It seems like a Game Theory problem to me. Here, we're calling the opponents' bluff. If we make the decision that SEEMINGLY MAXIMIZES OUR UTILITY, according to game theory we're set up for a world of hurt in terms of indefinite situations where we can be taken advantage of. Game Theory already contains lots of situations where reasons exist to take action that seemingly does not maximize your own utility.
RST:
It is threatening people just to test you. We can assume that Its behavior is completely different from ours. So Tom's argument still works.
MrCheeze:
Yes, but the chance of magic powers from outside the matrix is low enough that what he says makes an insignificant difference. ...or is an insignificant difference even possible?
DanielLC:
The chance of magic powers from outside the matrix is nothing compared to 3^^^^3. It makes no difference in whether or not it's worthwhile to pay him.
Dmytry:
excellent point, sir.
TraderJoe:
[comment deleted]

Very interesting thought experiment!

One place where it might fall down is that our disutility for causing deaths is probably not linear in the number of deaths, just as our utility for money flattens out as the amount gets large. In fact, I could imagine that its value is connected to our ability to intuitively grasp the numbers involved. The disutility might flatten out really quickly so that the disutility of causing the death of 3^^^^3 people, while large, is still small enough that the small probabilities from the induction are not overwhelmed by it.
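As an illustrative sketch of that flattening - the functional form and all numbers here are assumptions chosen only to make the point concrete:

    import math

    D_MAX = 10 ** 6   # assumed ceiling on disutility

    def bounded_disutility(deaths):
        """Disutility of causing `deaths` deaths; approaches D_MAX as deaths grow."""
        return D_MAX * (1 - math.exp(-deaths / 1_000.0))

    print(bounded_disutility(10))        # about 9,950: roughly linear for small numbers
    print(bounded_disutility(10 ** 9))   # about 1,000,000: already at the ceiling
    # Even a stand-in for 3^^^^3 deaths can add at most D_MAX, so the threat
    # cannot automatically swamp the tiny probability from the induction.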

DanielLC:
That just means you have to change the experiment. Suppose he just said he'll cause a certain amount of net disutility, without specifying how. This works unless you assume a maximum possible disutility.
Ulysses:
You are not entitled to assume a maximum disutility, even if you think you see a proof for it (see Confidence Levels Inside and Outside an Argument).
themusicgod1:
link for the lazy

People say the fact that there are many gods neutralizes Pascal's Wager - but I don't understand that at all. It seems to be a total non sequitur. Sure, it opens the door to other wagers being valid, but that is a different issue.

Let's say I have a simple game against you where, if I choose 1 I win a lotto ticket and if I choose 0 I lose. There are also a number of other game tables around the room with people winning or not winning lotto tickets. If I want to win the lotto, what number should I pick?

Also I don't think there is a fundamental issue with havi... (read more)

Dojan:
There is one problem with having the favor of several gods simultaneously: In fact, one could argue that being a true Orthodox Christian would lead you to the Muslim, Hindu, Protestant and Scientology (etc.) hells, while choosing any one of them would subtract that hell but add the hell of whatever religion you left... I try to stay away for safety's sake :) [edit: spelling]

This is an instance of the general problem of attaching a probability to matrix scenarios. And you can pascal-mug yourself, without anyone showing up to assert or demand anything - just think: what if things are set up so that whether I do, or do not do, something, determines whether those 3^^^^3 people will be created and destroyed? It's just as possible as the situation in which a messenger from Outside shows up and tells you so.

The obvious way to attach probabilities to matrix scenarios is to have a unified notion of possible world capacious enough to e... (read more)

Tom and Andrew, it seems very implausible that someone saying "I will kill 3^^^^3 people unless X" is literally zero Bayesian evidence that they will kill 3^^^^3 people unless X. Though I guess it could plausibly be weak enough to take much of the force out of the problem.

Andrew, if we're in a simulation, the world containing the simulation could be able to support 3^^^^3 people. If you knew (magically) that it couldn't, you could substitute something on the order of 10^50, which is vastly less forceful but may still lead to the same problem.

Andrew and Steve, you could replace "kill 3^^^^3 people" with "create 3^^^^3 units of disutility according to your utility function". (I respectfully suggest that we all start using this form of the problem.)

Michael Vassar has suggested that we should consider any number of identical lives to have the same utility as one life. That could be a solution, as it's impossible to create 3^^^^3 distinct humans. But, this also is irrelevant to the create-3^^^^3-disutility-units form.

IIRC, Peter de Blanc told me that any consistent utility function must have an upper bound (meaning that we must discount lives like Steve sug... (read more)

create 3^^^^3 units of disutility according to your utility function

For all X:

If your utility function assigns values to outcomes that differ by a factor of X, then you are vulnerable to becoming a fanatic who banks on scenarios that only occur with probability 1/X. As simple as that.

If you think that banking on scenarios that only occur with probability 1/X is silly, then you have implicitly revealed that your utility function only assigns values in the range [1,Y], where Y<X, and where 1 is the lowest utility you assign.
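A toy check of the factor-of-X claim, with an arbitrarily chosen X standing in for the utility ratio:

    X = 10 ** 12
    utility_mainline = 1.0          # ordinary outcome, effectively certain
    utility_extreme  = float(X)     # extreme outcome the mugger invokes
    p_extreme        = 1.0 / X

    ev_ignore_extreme  = 1.0 * utility_mainline
    ev_bank_on_extreme = p_extreme * utility_extreme       # = 1.0: already ties the mainline
    print(ev_bank_on_extreme >= ev_ignore_extreme)         # True; any p above 1/X tips it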

Nick_Tarleton:
... or your judgments of silliness are out of line with your utility function.
SforSingularity:
When I said "Silly" I meant from an axiological point of view, i.e. you think the scenario over, and you still think that you would be doing something that made you win less. Of course in any such case, there are likely to be conflicting intuitions: one to behave as an aggregative consequentialist, and the another to behave like a sane human being.
[anonymous]:
What if we required that the utility function grow no faster than the Kolmogorov complexity of the scenario? This seems like a suitable generalization of Vassar's proposal.

Mitchell, it doesn't seem to me like any sort of accurate many-worlds probability calculation would give you a probability anywhere near low enough to cancel out 3^^^^3. Would you disagree? It seems like there's something else going on in our intuitions. (Specifically, our intuitions that a good FAI would need to agree with us on this problem.)

Sorry, the first link was supposed to be to Absence of Evidence is Evidence of Absence.

Mitchell, I don't see how you can Pascal-mug yourself. Tom is right that the possibility that typing QWERTYUIOP will destroy the universe can be safely ignored; there is no evidence either way, so the probability equals the prior, and the Solomonoff prior that typing QWERTYUIOP will save the universe is, as far as we know, exactly the same. But the mugger's threat is a shred of Bayesian evidence that you have to take into account, and when you do, it massively tips the expected utility balance. Your suggested solution does seem right but utterly intractable.

DSimon:

I don't think the QWERTYUIOP thing is literally zero Bayesian evidence either. Suppose the thought of that particular possibility was manually inserted into your mind by the simulation operator.

Tom and Andrew, it seems very implausible that someone saying "I will kill 3^^^^3 people unless X" is literally zero Bayesian evidence that they will kill 3^^^^3 people unless X. Though I guess it could plausibly be weak enough to take much of the force out of the problem.

Nothing could possibly be that weak.

Tom is right that the possibility that typing QWERTYUIOP will destroy the universe can be safely ignored; there is no evidence either way, so the probability equals the prior, and the Solomonoff prior that typing QWERTYUIOP will save the universe is, as far as we know, exactly the same.

Exactly the same? These are different scenarios. What happens if an AI actually calculates the prior probabilities, using a Solomonoff technique, without any a priori desire that things should exactly cancel out?

Strange7:
Why would an AI consider those two scenarios and no others? Seems more likely it would have to chew over every equivalently-complex hypothesis before coming to any actionable conclusion... at which point it stops being a worrisome, potentially world-destroying AI and becomes a brick, with a progress bar that won't visibly advance until after the last proton has decayed.
Arandur:
... which doesn't solve the problem, but at least that AI won't be giving anyone... five dollars? Your point is valid, but it doesn't expand on anything.
Strange7:
More generally I mean that an AI capable of succumbing to this particular problem wouldn't be able to function in the real world well enough to cause damage.
Arandur:
I'm not sure that was ever a question. :3
ialdabaoth:
Well, let's think about this mathematically. In other articles, you have discussed the notion that, in an infinite universe, there exist with probability 1 identical copies of me some 10^(10^29) meters away. You then (correctly, I think) demonstrate the absurdity of declaring that one of them in particular is 'really you' and another is a 'mere copy'.

When you say "3^^^^3 people", you are presenting me two separate concepts:

  1. Individual entities which are each "people".
  2. A set {S} of these entities, of which there are 3^^^^3 members.

Now, at this point, I have to ask myself: "what is the probability that {S} exists?" By which I mean, what is the probability that there are 3^^^^3 unique configurations, each of which qualifies as a self-aware, experiencing entity with moral weight, without reducing to an "effective simulation" of another entity already counted in {S}? Vs. what is the probability that the total cardinality of unique configurations that each qualify as self-aware, experiencing entities with moral weight, is < 3^^^^3?

Because if we're going to juggle Bayesian probabilities here, at some point that has to get stuck in the pipe and smoked, too.

OK, let's try this one more time:

  3. Even if you don't accept 1 and 2 above, there's no reason to expect that the person is telling the truth. He might kill the people even if you give him the $5, or conversely he might not kill them even if you don't give him the $5.

To put it another way, conditional on this nonexistent person having these nonexistent powers, why should you be so sure that he's telling the truth? Perhaps you'll only get what you want by not giving him the $5. To put it mathematically, you're computing pX, where p is the probability and ... (read more)

I have to go with Tom McCabe on this one; this is just a restatement of the core problem of epistemology. It's not unique to AI, either.

3. Even if you don't accept 1 and 2 above, there's no reason to expect that the person is telling the truth. He might kill the people even if you give him the $5, or conversely he might not kill them even if you don't give him the $5.

But if a Bayesian AI actually calculates these probabilities by assessing their Kolmogorov complexity - or any other technique you like, for that matter - without desiring that they come out exactly equal, can you rely on them coming out exactly equal? If not, an expected utility differential of 2 to the negative googolplex times 3^^^^3 still equals 3^^^^3, so whatever tiny probability differences exist will dominate all calculations based on what we think of as the "real world" (the mainline of probability with no wizards).

if you have the imagination to imagine X to be super-huge, you should be able to have the imagination to imagine p to be super-small

But we can't just set the probability to anything we like. We have to calculate it, and Kolmogorov complexity, the standard accepted method, will not be anywhere near that super-small.

Addendum: In computational terms, you can't avoid using a 'hack'. Maybe not the hack you described, but something, somewhere has to be hard-coded. How else would you avoid solipsism?

This case seems to suggest the existence of new interesting rationality constraints, which would go into choosing rational probabilities and utilities. It might be worth working out what constraints one would have to impose to make an agent immune to such a mugging.

Eliezer,

OK, one more try. First, you're picking 3^^^^3 out of the air, so I don't see why you can't pick 1/3^^^^3 out of the air also. You're saying that your priors have to come from some rigorous procedure but your utility comes from simply transcribing what some dude says to you. Second, even if for some reason you really want to work with the utility of 3^^^^3, there's no good reason for you not to consider the possibility that it's really -3^^^^3, and so you should be doing the opposite. The issue is not that two huge numbers will exactly cancel o... (read more)

DanielLC:
You're not picking 3^^^^3 out of the air. The other guy told you that number. You can't pick probabilities out of the air. If you could, why not just set the probability that you're God to one? With what probability? Would you give money to a mugger if their gun probably isn't loaded? Is this example fundamentally different?
Kenny:
I think you're on to something, but I think the key is that someone claiming being able to influence 3^^^^3 of anything, let alone 3^^^^3 "people", is such an extraordinary claim that it would require extraordinary evidence of a magnitude similar to 3^^^^3, i.e. I bet we're vastly underestimating the complexity of what our mugger is claiming.

pdf23ds, under certain straightforward physical assumptions, 3^^^^3 people wouldn't even fit in anyone's future light-cone, in which case the probability is literally zero. So the assumption that our apparent physics is the physics of the real world too, really could serve to decide this question. The only problem is that that assumption itself is not very reasonable.

Lacking for the moment a rational way to delimit the range of possible worlds, one can utilize what I'll call a Chalmers prior, which simply specifies directly how much time you will spend thi... (read more)

pnrjulius:
I'm not aware of any (and I'm not sure it really solves this problem in particular), but there should be, because processing time is absolutely critical to bounded rationality.
bw:

Well... I think we act differently from the AI because we not only know Pascal's Mugging, we know that it is known. I don't see why an AI could not know the knowledge of it, though, but you do not seem to consider that, which might simply show that it is not relevant, as you, er, seem to have given this some thought...

bw:

But maybe an AI cannot in fact know the knowledge of something.

Alsadius:
What possible reason would you have to assume that? If we're talking about an actually intelligent AI, it'd presumably be as smart as any other intelligent being (like, say, a human). If we're talking about a dumb program, it can take into account anything that we want it to take into account.

Konrad: In computational terms, you can't avoid using a 'hack'. Maybe not the hack you described, but something, somewhere has to be hard-coded.

Well, yes. The alternative to code is not solipsism, but a rock, and even a rock can be viewed as being hard-coded as a rock. But we would prefer that the code be elegant and make sense, rather than using a local patch to fix specific problems as they come to mind, because the latter approach is guaranteed to fail if the AI becomes more powerful than you and refuses to be patched.

Andrew: You're saying that your... (read more)

bw:

Well, are you going to give us your answer?
