Caveats: Dependency (Assumes truth of the arguments against perfect theoretical rationality made in the previous post), Controversial Definition (perfect rationality as utility maximisation, see previous thread)
This article is a follow-up to The Number Choosing Game: Against the existence of perfect theoretical rationality. It discusses the consequences of The Number Choosing Game, which is, roughly: you name the decimal representation of any number and you gain that much utility. The game takes place in a theoretical world with no real-world limitations on how large a number you can name and no costs to naming one. We can also assume that the game takes place outside of regular time, so there is no opportunity cost. Needless to say, this was all rather controversial.
Update: Originally I was trying to separate the consequences from the arguments, but it seems that this blog post slipped away from that.
What does this actually mean for the real world?
This was one of the most asked questions in the previous thread. I will answer it, but first I want to explain why I was reluctant to. I agree that it is often good to tell people what the real-world consequences are, as these aren't always obvious. Someone may miss out on realising how important an idea is if this isn't explained to them. However, I wanted to push back against the idea that people should always be spoon-fed the consequences of every argument. A rational agent should have some capacity to think for themselves: maybe I tell you that the consequences are X, when they are actually Y. I also see a great deal of value in discussing the truth of ideas separately from their practical consequences. Ideally, everyone would be blind to the practical consequences when first discussing the truth of an idea, as this would reduce motivated reasoning.
The consequences of this idea are in one sense quite modest. If perfect rationality doesn't exist in at least some circumstances (edited), then whenever you want to assume it, you first have to prove that a perfectly rational agent exists for the relevant class of problems. For example, if there are a finite number of options, each with a measurable, finite utility, we can safely assume that a perfectly rational agent exists. I'm sure we can prove that such agents exist for a variety of situations involving infinite options as well. However, there will also be some weird theoretical situations where no such agent exists. This may seem irrelevant to some people, but if you are trying to understand strange theoretical situations, knowing that perfect rationality doesn't exist for some of them allows you to provide an answer when someone hands you something unusual and says, "Solve this!". Now, I know my definition of rationality is controversial, but even if you don't accept it, it is still important to realise that the question "What would a utility maximiser do?" doesn't always have an answer, because sometimes there is no maximum utility. Assuming perfectly rational agents, defined as utility maximisers, is incredibly common in game theory and economics. This is helpful in a lot of situations, but after you've used the assumption for a few years in situations where it works, you tend to assume it will work everywhere.
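The contrast between the finite and unbounded cases can be sketched in a few lines of Python (my own illustration, not from the post; the labels and utilities are made up):

```python
# A minimal sketch: when the option set is finite, "perfectly rational" is
# well defined as an argmax; in the unbounded Number Choosing Game there is
# no maximum to pick.

def best_option(options):
    """Return the utility-maximising (label, utility) pair from a finite list."""
    return max(options, key=lambda o: o[1])

# A finite game: a maximiser exists, so a perfectly rational choice exists.
assert best_option([("a", 10), ("b", 999), ("c", 42)]) == ("b", 999)

# The Number Choosing Game: the payoff u(n) = n has no maximum over the
# naturals, since every candidate n is strictly dominated by n + 1.
def dominated(n):
    return n + 1 > n  # always True: no stopping point is undominated

assert all(dominated(n) for n in [1, 999, 10**100])
```

The point of the sketch is just that "what would a utility maximiser do?" is well posed in the first case and has no answer in the second.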
Is missing out on utility bad?
One of the commentators on the original thread can be paraphrased as arguing, "Well, perhaps the agent only wanted a million utility". This misunderstands the nature of utility. Utility is a measure of the things you want, so it is something you want more of by definition. It may be that there's nothing else you want, so you can't actually receive any more utility, but you always want more utility (edited).
Here's one way around this objection. The original problem assumed that you were trying to optimise your own utility, but let's now pretend that you are an altruistic agent, and that when you name the number, that much utility is created in the world by alleviating some suffering. We can assume an infinite universe, so there is infinite suffering to alleviate, but that isn't strictly necessary: no matter how large a number you name, it might turn out that there is more suffering than that in the universe (while still being finite). So let's suppose you name a million, million, million, million, million, million (by its decimal representation, of course). The gamemaker then takes you to a planet whose inhabitants are suffering the most brutal torture imaginable, inflicted by a dictator using their planet's advanced neuroscience to maximise the suffering. The gamemaker tells you that if you had added an extra million on the end, these people would have had their suffering alleviated. If rationality is winning, does a planet full of tortured people count as winning? Sure, the rules of the game prevent you from completely winning, but nothing in the game stopped you from saving those people. The agent that also saved those people is a more effective agent, and hence a more rational agent, than you are. Further, if you accept that there is no difference between acts and omissions, then there is no moral difference between torturing those people yourself and failing to say the higher number. (Actually, I don't really believe this last point. I think it reveals a flaw in arguing that acts and omissions are the same in the specific case of an unbounded set of options. I wonder if anyone has made this argument before; I wouldn't be surprised if there were a philosophy paper in this.)
But this is an unwinnable scenario, so a perfectly rational agent will just pick a number arbitrarily? Sure, you don't get the most utility, but why does this matter?
If we say that the only requirement for an agent to deserve the title of "perfectly rational" is to pick an arbitrary stopping point, then there's no reason why we can't declare the agent that arbitrarily stops at 999 "perfectly rational". If I gave an agent the choice between a utility of 999 and a utility of one million, the agent who picked 999 would be quite irrational. But suddenly, when given even more options, the agent who only gets 999 utility counts as rational. It goes further than this: there's no objective reason the agent can't just stop at 1. The alternative is to declare any agent who picks at least a "reasonably large" number to be rational. The problem is that there is no objective definition of "reasonably large". This would make our definition of "perfectly rational" subjective, which is precisely what the idea of perfect rationality was created to avoid. It gets worse still. Let's pretend that before the agent plays the game it loses a million utility, and that the first million utility it gains from the game goes towards reversing those effects (time travel is possible in this universe). Our "perfectly rational" agent who stops at 1 then ends up a million (minus one) utility in the red, i.e. suffering a horrible fate, which it could easily have chosen to avoid. Is it really inconceivable that the agent who ends up at positive one million utility instead of negative one million could be more rational?
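The debt scenario is easy to check numerically (a minimal sketch with the figures assumed above; `final_utility` is my own name, not from the post):

```python
# The agent loses a million utility before playing; the first million gained
# from the game reverses the loss, so its final position is simply the
# number it names minus the debt.
DEBT = 1_000_000

def final_utility(number_named):
    return number_named - DEBT

# The agent who stops at 1 ends up almost a million utility in the red...
assert final_utility(1) == -999_999
# ...while an agent naming two million ends up a million in the black.
assert final_utility(2_000_000) == 1_000_000
```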
What if this were a real-life situation? Would you really say "meh" and accept the torture because you think a rational agent can pick an arbitrary number and still be perfectly rational? (edit)
The argument that you can't choose infinity, so you can't win anyway, is just a distraction. Suppose perfect rationality didn't exist for a particular scenario: what would this imply about the scenario? It would imply that there was no way of conclusively winning, because if there were, then an agent following that strategy would be perfectly rational for the scenario. Yet somehow people are trying to twist this around and conclude that it disproves my argument. You can't disprove an argument by proving what it predicts (edit).
What other consequences are there?
The fact that there is no perfectly rational agent for these situations means that any agent will seem to act rather strangely. Let's suppose that a particular agent who plays this game will always stop at a certain number X, say a googolplex. If we try to sell them the right to play the game for more than X, they would refuse, as they wouldn't make money on it, despite the fact that they could make money if they chose a higher number.
Where this gets interesting is that the agent might have special code to buy the right to play the game at any price P, and then choose the number X+P. It seems that it can sometimes be rational to have path-dependent decisions, despite the fact that the amount paid doesn't affect the utility gained from choosing a particular number.
Further, with this code, you could buy the right to play the game back off the agent (before it picks the number) for X+P+1, then sell it back to the agent for X+P+one billion, and repeatedly buy and sell the right to play the game back to the agent. (If the agent knows you are going to offer to buy the game off it, it could just simulate the sale by increasing the number it asks for, but it has no reason not to simulate the sale and also accept a higher second offer.)
Further, if the agent were running code to choose the number 2X instead, we would end up with a situation where it might be rational for the agent to pay you money to charge it extra for the right to play the game.
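The path-dependent behaviour described above can be modelled with a toy agent (a sketch under my own assumptions; `PathDependentAgent` is a hypothetical name, and a googol stands in for a googolplex to keep the arithmetic manageable):

```python
# A toy model of the path-dependent agent: it tracks what it has paid for
# the right to play and always names X plus that running total, so every
# fee is passed straight through to the number it names.

class PathDependentAgent:
    def __init__(self, base_number):
        self.base = base_number  # X: the number it would name if playing for free
        self.paid = 0            # P: running total paid for the right to play

    def buy_game(self, price):
        self.paid += price       # paying more just raises the number it will name

    def choose_number(self):
        return self.base + self.paid  # names X + P, so its net gain is unaffected

agent = PathDependentAgent(base_number=10**100)  # X = a googol, for illustration
agent.buy_game(price=500)
assert agent.choose_number() == 10**100 + 500
```

On this model, charging the agent more never hurts it: whatever it pays comes straight back in the number it names, which is what makes the repeated buy-and-sell cycle possible.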
Another property is that you can sell the right to play the game to any number of agents, add up all their numbers, add your profit on top, and ask for that much utility yourself.
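The reseller's position can be sketched in a couple of lines (hypothetical numbers of my own choosing):

```python
# Sell the game to several agents, ask the gamemaker for the sum of the
# numbers they will name plus a margin, and keep the margin as profit.
agents_numbers = [999, 10**6, 10**100]  # numbers the buying agents will name
margin = 1_000                          # the reseller's profit on top

reseller_request = sum(agents_numbers) + margin
assert reseller_request - sum(agents_numbers) == margin
```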
It seems that the choices in these games obey rather unusual rules. If these choices are allowed to count as "perfectly rational", as the people who dispute my claim that perfect rationality doesn't exist would have it, then at the very least perfect rationality is something that behaves very differently from what we might expect.
At the end of the day, whether you agree with my terminology regarding rationality or not, we can see that there are specific situations where it seems reasonable to act in a rather strange manner.