Earlier in the sequence, I presented the claim that humans evolved to be naturally inclined towards geometric rationality over arithmetic rationality, and that around here, the local memes have moved us too far off this path. In this post, I will elaborate on that claim. I will argue for it only abstractly; the case is far from airtight, and some of the arguments will be rather weak.

Naturally Occurring Geometric Rationality

When arguing for Kelly betting, I emphasized respecting your epistemic subagents and not descending into internal politics. There is a completely different, more standard, argument for Kelly betting: given enough time, Kelly bettors will almost certainly end up with all the money. It is thus natural to expect that natural selection might select for geometric rationality. This hypothesis is backed up by the way people seem naturally risk-averse. Further, people have a tendency towards probability matching, as discussed here, which is a bug that seems more natural if people are naturally inclined towards geometric rationality.
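To make the "bug" concrete (with illustrative numbers of my own, not from the post): if option A pays off with probability 0.7 and option B with probability 0.3, always choosing A succeeds 70% of the time, while probability matching, choosing A 70% of the time and B 30% of the time, succeeds only 0.7 × 0.7 + 0.3 × 0.3 = 0.58 of the time. Arithmetically that is a pure loss, but it is the kind of behavior a sampling-based (Thompson-style) decision procedure naturally produces.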

I think the hypothesis that evolution favors geometric rationality is probably best considered empirically, and I haven't gone deep into the data. If anyone else wants to dig into this, I would be curious to hear what they find.

This post, however, will focus on local memes that pull us away from geometric rationality. I will discuss four reasons that I think we are pulled away from geometric rationality: the end of the world, astronomical stakes, utility theory, and low probability of success in our main shared endeavor. 

The Endgame

Kelly betting is usually justified by looking at long-run outcomes: if you make enough bets, an expected-wealth maximizer will almost certainly end up with no money. My fairness-based justification for Kelly betting is a nonstandard one.
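Here is a minimal simulation sketch of that standard long-run argument. The parameters are illustrative choices of mine, not from the post: an even-money bet that wins 60% of the time, where the Kelly bettor wagers the Kelly fraction 2p − 1 = 0.2 of their wealth each round, while the expected-wealth maximizer wagers everything.

```python
import random

def final_wealth(bet_fraction, rounds=1000, p_win=0.6, seed=0):
    # Repeatedly bet a fixed fraction of wealth on an even-money bet
    # that wins with probability p_win.
    rng = random.Random(seed)
    wealth = 1.0
    for _ in range(rounds):
        stake = bet_fraction * wealth
        wealth += stake if rng.random() < p_win else -stake
    return wealth

kelly_fraction = 2 * 0.6 - 1  # Kelly fraction for an even-money bet: 2p - 1 = 0.2
for label, fraction in [("all-in (maximizes expected wealth)", 1.0), ("Kelly", kelly_fraction)]:
    outcomes = [final_wealth(fraction, seed=s) for s in range(100)]
    broke = sum(w < 1e-9 for w in outcomes)
    print(f"{label}: median final wealth {sorted(outcomes)[50]:.3g}, broke in {broke}/100 runs")
```

The all-in bettor goes broke on their first loss, which almost certainly happens within a thousand rounds, while the Kelly bettor compounds.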

People around here tend to think the world is ending. If there are only a few rounds of betting left, it is hard to justify optimizing for long run outcomes. 

Similarly, Thompson sampling is about exploration. If you don't have time to learn from new things, you should be in exploit mode, and just do the best thing.
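As a hedged sketch of what that exploration looks like in the simplest (Bernoulli bandit) setting — the setup and numbers are mine, not anything from the sequence — Thompson sampling picks each arm with probability equal to its posterior probability of being the best, while pure exploit mode just picks the arm with the best observed average.

```python
import random

def thompson_pick(successes, failures, rng):
    # Sample a plausible success rate from each arm's Beta posterior; pick the best sample.
    samples = [rng.betavariate(s + 1, f + 1) for s, f in zip(successes, failures)]
    return max(range(len(samples)), key=samples.__getitem__)

def exploit_pick(successes, failures, rng):
    # Pure exploitation: pick the arm with the best observed success rate so far.
    rates = [s / (s + f) if s + f else 0.0 for s, f in zip(successes, failures)]
    return max(range(len(rates)), key=rates.__getitem__)

def run(pick, true_rates, rounds=2000, seed=0):
    rng = random.Random(seed)
    succ, fail = [0] * len(true_rates), [0] * len(true_rates)
    wins = 0
    for _ in range(rounds):
        arm = pick(succ, fail, rng)
        if rng.random() < true_rates[arm]:
            succ[arm] += 1
            wins += 1
        else:
            fail[arm] += 1
    return wins

true_rates = [0.45, 0.55]  # arm 1 is better, but only slightly
print("Thompson sampling wins:", run(thompson_pick, true_rates))
print("Exploit-only wins:     ", run(exploit_pick, true_rates))
```

In this toy setup the exploit-only picker never gathers evidence about the second arm at all (ties break toward arm 0), while Thompson sampling keeps exploring exactly as long as its uncertainty warrants.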

Further, cooperative and fair equilibria are a lot easier to maintain in a repeated game. People actually often defect at the end of a finitely repeated prisoners' dilemma.

You don't actually need the end of everything to make end-of-the-world style arguments against geometric rationality; you can reach the same conclusions with a causal singularity. If there is an upcoming place/time that is the primary determinant of how good the future is, we are approaching a causal end of the world: the world might not literally be ending, but we should direct all our optimization towards that one place/time. If it is coming up soon, maybe we shouldn't waste our time with exploration.

Counterarguments

It is important to note that while this might take out one of the justifications for Kelly betting, I think there is still good reason to Kelly bet even if there is only a single round, which is part of why I emphasized nonstandard reasons in my Kelly betting post.

Also, you should be careful about switching, in the last round, away from the strategy you have practiced both evolutionarily and throughout your life. You should expect that you are better at geometric rationality than at arithmetic rationality, if geometric rationality is what you have been naturally doing so far.

Large Caring

Let's say that you want to geometrically maximize the number of paperclips in the quantum multiverse. Equivalently, you are trying to maximize the logarithm of the measure of paperclips. If we approximate to say that you are in one of a very large finite number of branches, we can say you are trying to maximize the logarithm of the total number of paperclips across all branches.

The thing about the logarithm is that it is locally linear. If you are only having an effect on this one branch, then there is a large number of paperclips in other branches that you have no control over. Thus, if you want to maximize the logarithm of the total number of paperclips in the multiverse, you are approximately linearly maximizing the number of paperclips in your own branch.
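As a quick sketch of the approximation at work (my notation, not the post's: $N$ for the paperclips in branches you don't control, $x$ for the ones you do, with $x \ll N$):

$$\log(N + x) \;=\; \log N + \log\!\left(1 + \frac{x}{N}\right) \;\approx\; \log N + \frac{x}{N},$$

and since $\log N$ and $1/N$ are constants from your perspective, maximizing the left-hand side over your choices is the same as maximizing $x$.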

Thus, you should arithmetically maximize paperclips.

You can also apply this argument more locally. If you think of effective altruism as a thing that has money, and you want to maximize how much money it has so that it can turn that money into things you care about, you might reason that since it already has a bunch of money no matter what you do, you should arithmetically maximize your wealth and then donate to EA. If you were actually worried about EA running out of money, you would do something different; but since you are only a drop in the bucket, you should optimize arithmetically.

Counterarguments

I think a major thing going on here is that this argument is assuming large caring, and not assuming similarly large control. If you think that in addition to caring about the other branches, you also have control over the other branches, the calculus changes. If the other branches are running algorithms that are similar to yours, then you are partially deciding what happens in those worlds, so you should not think of the number of paperclips in those worlds as fixed. (It seems wasteful to care about things you can't control, so maybe evolved agents will tend to have their caring and control at similar scales.)

I think this counterargument works in the above EA example, as EAs are making quite a few very correlated decisions, and money can be aggregated additively. However, this counterargument does not really work if you think the outcomes in different branches are not very correlated. 

Even if you exist in every branch, and are making the same decision in every branch, you could still end up approximately arithmetically maximizing paperclips, due to the law of large numbers from all the randomness outside your control.
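A toy simulation of this point (the numbers and policy names are mine): per branch, a "safe" policy yields 1 paperclip for sure, while a "risky" policy yields 3 paperclips half the time and 0 otherwise. Branch by branch, the safe policy has the higher geometric mean; but if what you care about is the log of the total across many independent branches, the risky policy wins, because the total concentrates around its arithmetic expectation.

```python
import math
import random

def log_total_paperclips(policy, n_branches=100_000, seed=0):
    # Log of the total paperclip count summed across many independent branches.
    rng = random.Random(seed)
    return math.log(sum(policy(rng) for _ in range(n_branches)))

def safe(rng):
    return 1                                # 1 paperclip for sure (higher per-branch geometric mean)

def risky(rng):
    return 3 if rng.random() < 0.5 else 0   # mean 1.5, per-branch geometric mean 0

print("log total, safe policy: ", log_total_paperclips(safe))   # ~ log(100000) ≈ 11.5
print("log total, risky policy:", log_total_paperclips(risky))  # ~ log(150000) ≈ 11.9
```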

I think the real problem here is that this argument assumes the arithmetic answer by caring about the total number of paperclips across the multiverse in the first place. With respect to indexical/quantum uncertainty, saying "the total number of paperclips" is assuming an arithmetic expectation. You could imagine caring geometrically (with respect to your quantum uncertainty) about the number of paperclips, and then the Large Caring argument does not work. Perhaps you just have the unlucky property of caring arithmetically about something like paperclips, but I think it is more likely that someone who thinks this is mistaken about what they want: additive things are easier to think about than multiplicative things, so arithmetic rationality is easier to justify than geometric rationality.

Utility

I think one of the biggest things that leads to arithmetic maximization around here comes from looking at things the way we naturally do, being unable to justify that natural way given our mistaken understanding of rationality, and thus trying to correct our view to be more justifiable. I think the strongest instance of this pattern is utility theory.

Eliezer argues a bunch for utility functions. If you don't have a utility function, then you can get Dutch booked. This is bad.
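For readers who haven't seen a Dutch book argument, here is one textbook version, aimed at the transitivity axiom (not Eliezer's exact presentation): suppose your preferences cycle, so that you strictly prefer B to A, C to B, and A to C. Then, starting with A, you will happily pay a penny to trade A for B, another penny to trade B for C, and another penny to trade C for A. You are back where you started, three cents poorer, and the cycle can be run again indefinitely.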

If you do have a utility function, then you can never strictly prefer a lottery over outcomes to all of the outcomes in that lottery.

Nash bargaining often recommends randomizing between outcomes. Utilitarianism does not recommend randomizing between outcomes.

Thus Nash bargaining is a mistake, and we should be utilitarians.

Counterarguments

I think utility functions are bad. I already talked about this in the last post. I think the argument for utility functions is flawed in that it does not take into account updatelessness. The Allais paradox is not (always) a mistake. The only justification I know of for the Allais paradox being a mistake requires your preferences to respect Bayesian updating, and we already know that reflectively stable preferences cannot respect Bayesian updating, because of counterfactual mugging.
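For concreteness, the standard version of the Allais paradox offers two choices. Choice 1: a certain $1M (1A), versus an 89% chance of $1M, a 10% chance of $5M, and a 1% chance of nothing (1B). Choice 2: an 11% chance of $1M (2A), versus a 10% chance of $5M (2B). Most people pick 1A and 2B, but no expected utility maximizer can prefer both, since the two choices differ only by the same common 89% chance of $1M.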

I basically think that the ideas behind UDT imply that utility functions are not necessary, and we have not propagated this fact. Further, a large amount of our thinking around here is downstream of the utility concept, including in places where it might not be immediately obvious.

(Note that this is not saying having a utility function is bad, it is just saying that having a utility function is not rationally necessary.)

Certain Doom

The last local meme I want to highlight is the belief that we are very likely to fail at our most important project (saving the world). If you think you are going to fail by default, then you want to be risk-seeking. You want to play to your outs, which means assuming that you will be lucky, since that is the only way to succeed. Arithmetic rationality tends to be more risk-seeking than geometric rationality.

Counterarguments

If you think everything is doomed, you should try not to mess anything up. If your worldview is right, we probably lose, so our best out is the one where your worldview is somehow wrong. In that world, we don't want mistaken people to take big unilateral risk-seeking actions.

See also: MIRI's "Death with Dignity" strategy, where Eliezer argues for optimizing for "dignity points," which are fittingly linear in log-odds of survival.
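One way to make "linear in log-odds" concrete (my gloss, not a quote from the post): if $p$ is the probability of survival, then

$$\text{dignity} \;\propto\; \log\frac{p}{1-p},$$

so an action earns dignity in proportion to how much it multiplies the odds of survival, rather than how much it adds to $p$.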

Shut Up and Multiply

I think this post will actually be the end of the geometric rationality sequence. I will probably write some more posts in the near future that are adjacent to geometric rationality, but not central/soon enough to belong in the sequence. I feel like the reactions I have gotten to the sequence have been very positive, and it feels like a lot of people were relieved that I could provide an alternate framework that justifies the way they naturally want to do things. I think this is because people have a natural defense that is correctly protecting them from extreme memes like utilitarianism, and they want to justify their correct resistance with math.

I want to close with some good news and some bad news:

The bad news is that I don't have the math. I have said most of what I have to say here, and it really is not a complete framework. A bunch of these posts were propagandizing, and not drawing attention to the flaws. (Where do the zero points come from?) There is not a unified theory; there is only a reaction to a flawed theory, and a gesture at a place where a theory might grow. I mostly just said "Maybe instead of adding, you should multiply" over and over again.

The good news is that you don't need the math. You can just do the right thing anyway.


Comments

Random question, tangential to this post in particular (but not the series): should we expect genes to be doing something like geometric rationality in their propagation? When a new gene emerges and starts to spread, even if it greatly increases host fitness on average, its # of copies could easily drop to 0 by chance. So it "should want" to be cautious, like a Kelly bettor, and maximize its growth geometrically rather than arithmetically.

Not sure quite how that logic should cash out though. For one, genes that make their hosts more cautious (reduce fitness variance) should be systematically advantaged by this effect, at least during their early growth phase. More speculatively, to take advantage of this effect optimally, genes should somehow suss out how large their population (# of copies) is and push their host to be risk-taking vs. cautious in a way that's calibrated to that. Which is maybe biologically plausible?

I don't actually know much about population genetics though, and would be curious to hear from anyone who does.

