
The decision algorithm that takes the whole pie is good at solving problems like this one: for each specific pie it gets, it takes the whole thing. Taking the same action is not good for solving the different problem of dividing all possible pies simultaneously, but that difference is reflected in the problem statement, and so the reasons that make it decide correctly for individual problems won't make it decide incorrectly for the joint problem.


I think it's right to cooperate in this thought experiment only to the extent that we accept the impossibility of isolating this thought experiment from its other possible instances, but then it should just motivate restating the thought experiment so as to make its expected actual scope explicit.

Agreed.

# A Problem About Bargaining and Logical Uncertainty

by Wei_Dai · 1 min read · 21st Mar 2012 · 49 comments


Suppose you wake up as a paperclip maximizer. Omega says "I calculated the millionth digit of pi, and it's odd. If it had been even, I would have made the universe capable of producing either 10^20 paperclips or 10^10 staples, and given control of it to a staples maximizer. But since it was odd, I made the universe capable of producing 10^10 paperclips or 10^20 staples, and gave you control." You double check Omega's pi computation and your internal calculator gives the same answer.

Then a staples maximizer comes to you and says, "You should give me control of the universe, because before you knew the millionth digit of pi, you would have wanted to pre-commit to a deal where each of us would give the other control of the universe, since that gives you 1/2 probability of 10^20 paperclips instead of 1/2 probability of 10^10 paperclips."
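The staples maximizer's expected-value argument can be spelled out numerically. This is only a sketch of the arithmetic, assuming the 1/2 prior over the digit's parity that the pre-commitment argument relies on:

```python
# Expected paperclips, evaluated from "behind the veil" of not yet
# knowing the millionth digit of pi (assumed 1/2 prior on each parity).
P_EVEN = 0.5

# No deal: the paperclip maximizer only gets control, and 10^10
# paperclips, in the odd branch; the even branch yields staples.
ev_no_deal = P_EVEN * 0 + (1 - P_EVEN) * 1e10

# Deal (swap control): it instead receives control in the even branch,
# where the universe can make 10^20 paperclips.
ev_deal = P_EVEN * 1e20 + (1 - P_EVEN) * 0

print(ev_no_deal)  # 5e9
print(ev_deal)     # 5e19
```

So from the ex-ante perspective the deal looks ten billion times better, which is exactly why the question of whether that perspective is still binding after learning the digit has force.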

Is the staples maximizer right? If so, the general principle seems to be that we should act as if we had precommited to a deal we would have made in ignorance of logical facts we actually possess. But how far are we supposed to push this? What deal would you have made if you didn't know that the first digit of pi was odd, or if you didn't know that 1+1=2?

On the other hand, suppose the staples maximizer is wrong. Does that mean you also shouldn't have agreed to exchange control of the universe before you knew the millionth digit of pi?

To make this more relevant to real life, consider two humans negotiating over the goal system of an AI they're jointly building. They have a lot of ignorance about the relevant logical facts, like how smart/powerful the AI will turn out to be and how efficient it will be in implementing each of their goals. They could negotiate a solution now in the form of a weighted average of their utility functions, but the weights they choose now will likely turn out to be "wrong" in full view of the relevant logical facts (e.g., the actual shape of the utility-possibility frontier). Or they could program their utility functions into the AI separately, and let the AI determine the weights later using some formal bargaining solution when it has more knowledge about the relevant logical facts. Which is the right thing to do? Or should they follow the staples maximizer's reasoning and bargain under the pretense that they know even less than they actually do?
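The "let the AI determine the weights later" option depends on what formal bargaining solution is used once the utility-possibility frontier is known. As one illustration, here is a minimal sketch of the Nash bargaining solution (one standard choice; the post leaves the solution unspecified) applied to a hypothetical symmetric frontier, with a disagreement point of zero for both parties:

```python
import math

# Hypothetical utility-possibility frontier: u2 = sqrt(1 - u1^2),
# i.e. the unit quarter-circle. Disagreement point assumed to be (0, 0).
# Nash bargaining picks the feasible point maximizing the product of
# gains over the disagreement point.
best_u1, best_product = 0.0, -1.0
for i in range(10001):
    u1 = i / 10000
    u2 = math.sqrt(1.0 - u1 * u1)
    if u1 * u2 > best_product:
        best_product, best_u1 = u1 * u2, u1

print(best_u1)  # ~0.7071, i.e. 1/sqrt(2): the symmetric point
```

On a symmetric frontier this lands at the even split, but on the frontiers actually realized by the AI's capabilities the resulting effective weights can differ sharply from any weights the humans would have negotiated in advance, which is the crux of the dilemma.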