1 Introduction

(Crossposted from my blog--the formatting is probably a bit better there.)

With entity, dementedly meant to be infinite

Eminem.

I never had a Nietzsche phase.

You know the way lots of people get obsessed with Nietzsche for a while? They start wearing black, going goth, smoking marijuana, and talking about how, like, “god is dead, nothing matters, man.” This never happened to me, in part because Nietzsche doesn’t really make arguments, just self-indulgent rambles.

But I’m starting to have an infinite ethics phase. Infinite ethics is trippy—it just seems like everything—especially morality—breaks when you get infinite amounts of stuff. Various plausible views seem to lead to the conclusion that one has no reasons for action or that obvious principles about value, morality, and even rationality collapse.

Lots of people seem to be bothered by population ethics. They find the paradoxes unsettling. I’ve never been troubled by any of the puzzles of population ethics—as long as you just accept what totalism says about population ethics, all the puzzles go away. All the “puzzles” then serve as reasons to accept totalism rather than as genuine paradoxes. People find the totalist view unintuitive and get so worked up about its implications that they give them mean names like the repugnant conclusion and the very repugnant conclusion. However, I find all of these to be very intuitive—and you can also prove that one should accept them from minimal axioms—so I don’t even feel like there’s a conflict, much less a conflict so great that it should deeply unsettle us.

Population ethics doesn’t keep me up at night. Nor does most of philosophy—I feel pretty satisfied in my views on most issues. I’m probably wrong about lots of things, but there are no issues where I feel as if I’m required to, in Spencer Case’s language, repeatedly embrace the cactus—accept crazy, deeply implausible conclusions over and over again.

Infinite ethics, however, requires me to repeatedly embrace the cactus.

Reflecting on ethics makes a whole host of principles seem obvious. Infinite ethics shows how some of the most obvious principles—things so obvious that if a theory denied them, we’d take that to be enough to refute the theory—come into conflict with each other. Infinite ethics laughs at your attempts to bring your views into reflective equilibrium. It mocks your attempts to hold onto principles like the claim that an infinitely large universe, with 100 quadrillion miserable people and only one happy person per galaxy, is bad. If you want to read more about the infinite, read Carlsmith’s excellent report as well as Askell’s Ph.D thesis. In fact, Carlsmith’s report is so good that multiple times when writing this article, I’ve worried that I’m just copying Carlsmith—he covers all the relevant points. He also has an excellent Substack—very worth checking out.

One piece of good news, though—we’ll cover this more at the end—is that infinite ethics gives a reason for optimism. Infinite ethics gives us the best news in the universe.

2 A Misconception

In search of some rest, in search of a break
From a life of tests where something's always at stake
Where something's always so far
What about my broken car?
What about my life so far?
What about my dream?
What about?

What about everything?

Carbon Leaf

I can hear the non-utilitarians cackling right now. “Ah yes, infinity is a problem—if you’re a utilitarian and think we can sum everything up—but if we’re not utilitarians, the problems vanish,” seems to be a prevailing sentiment. Utilitarians do seem to be the primary ones who worry about infinity. But that’s because utilitarianism easily answers the other ethical dilemmas—utilitarians who want to work on moral dilemmas have no finite cases left to work on, so they must turn their sights to the infinite.

This sentiment is, however, dead wrong. Infinity is, as Carlsmith says, “everyone’s problem.” As we’ll see, the puzzles stem from various impossibility proofs—where you can show that various super-plausible ethical principles conflict. You do not get out of any of these impossibility proofs by denying utilitarianism. There are problems for everyone.

3 Heaven=hell

So, so you think you can tell
Heaven from hell?
Blue skies from pain?

Pink Floyd

Suppose you discovered an infinite universe, full of infinite galaxies. In the center of every galaxy was a single happy person. Every other person in the galaxy was horrifically miserable—all of the time. Call this universe HELL.

Consider another universe called HEAVEN. This universe is the inverse of the other—in the center of every galaxy is one miserable person. The rest of each galaxy is filled with billions of happy people. There are infinitely many happy people.

The following judgments are plausible:

Contra Pink Floyd, we can tell that HEAVEN is better than HELL.

Moving people around does not improve things if it changes nothing else.

Not only are these plausible—these are the two most obvious judgments in the world. But in an infinite world, they cannot both be true. Why is this? Well, in both HEAVEN and HELL, there are infinitely many people—both happy and unhappy. Neither has more happy people than unhappy people. Infinity is weird like that—there are as many natural numbers as there are multiples of a billion and as there are odd numbers. There are maybe ways of modifying the mathematics to avoid this crazy result, but that runs head-on into the problem posed in section 5. The orthodox model of infinity, though, says that HEAVEN and HELL have equally many happy and unhappy people. But if that’s true, then you could turn HEAVEN into HELL just by changing the locations of people. HEAVEN is HELL with shifted people.

Now you might think it’s metaphysically impossible to have HEAVEN turn into HELL. Maybe you’re a causal finitist or something, so you think there can’t be infinite sequences of events. But that doesn’t solve the puzzle—even if you can’t go from HEAVEN to HELL, HEAVEN just is HELL with people in different locations. As long as these are metaphysically possible worlds, you’ll either have to think that people’s locations matter intrinsically morally or that HEAVEN=HELL. But these are both crazy—both are things that I’d take as a reason to reject a view that implied them. So you either have to give up on infinities or accept one of these crazy claims.

I’d rather give up the first principle. Clearly, people’s location in space doesn’t matter intrinsically. But even here, there’s another puzzle. Here’s a third plausible principle that conflicts with 2.

If infinitely many very bad things happen, that is bad.

Seems true. Few things are more obvious than this. If it turns out that I have to sign off on it sometimes not being bad when infinitely many bad things happen, then it seems something has gone very wrong. But this view implies that. Suppose that there are an infinite number of events, each of which creates one of the hell galaxies. Each of those events would be bad. But when infinitely many of them happen—each of which is bad, even conditional on the others—they stop being bad??! This seems wrong. Imagine seeing 100 billion bad things happen and weeping because you were hoping the bad things would continue forever. Isn’t it obvious that creating infinite torture chambers, each with just one happy person, is bad? But isn’t it also obvious that moving people around doesn’t improve things?

As I say, infinity breaks stuff.

I look out on the expanse of hellish misery, populated only occasionally by a laughing child, and want it to never end.

4 Askell’s puzzle

Can you tell a green field
From a cold steel rail?
A smile from a veil?
Do you think you can tell?

Did they get you to trade
Your heroes for ghosts?
Hot ashes for trees?
Hot air for a cool breeze?
Cold comfort for change?
Did you exchange
A walk-on part in the war
For a lead role in a cage?

—Pink Floyd, again.

Amanda Askell has a cool impossibility proof in her Ph.D thesis. She gives the following three cases:

1 Archipelago: There are an infinite number of islands each of which contain infinite people. Compare:

Clement: each of the islands has 3 happy people and 1 sad person.

Stormy: each of the islands has 3 sad people and 1 happy person.

It seems that you should bring about Clement instead of Stormy. One way to see this is that replacing one island from Stormy with an island from Clement would be good—but if this desirable replacement happens infinitely many times, that’s really good.

2 Infinity House: Infinity houses are houses that contain only one person. After 50 years, the person dies and is replaced by someone else. This repeats forever. Compare:

Mansion: The house has two generations of happy people and then one generation of an unhappy person.

Shack: The house has two generations of unhappy people and then one generation of a happy person.

It seems like Mansion is better than Shack. Again, if you cause Shack to be Mansion for a short period of time, that would be good. Doing that again would be good again. But doing that infinitely many times would then be infinitely good.

3 Cubeland: Cubeland involves infinite agents who live infinitely long. They all live their lives in a cube. These cubes are spread uniformly throughout time and space. You can choose to bring about either:

Optimistic Cubeland: Everyone starts out happy. However, there is an expanding circle that grows to engulf one more cube each year. It turns all the people in each cube it touches unhappy.

or

Pessimistic Cubeland: Everyone starts out sad. However, there is an expanding circle that grows to engulf one more cube each year. It turns all the people in each cube it touches happy.

Askell says that Optimistic Cubeland seems better. After all, at any time there are infinitely many happy agents and only finitely many unhappy agents. I think this isn’t obvious and generates a paradox. Yes, there’s a good argument for saying that Optimistic Cubeland is better. But there’s also a good argument for saying that Pessimistic Cubeland is better. Consider the following principle:

Better Lives Superiority: If there are two worlds where in one everyone will have an infinitely bad life and in the other everyone will have an infinitely good life, the first is worse than the second one.

But this conflicts with the judgment that Pessimistic Cubeland is worse. Consider any agent in Pessimistic Cubeland. They are only a finite distance from the expanding happiness-inducing circle. Suppose that they’re 100000000000000000 cubes away from it. Well, they can expect to spend 100000000000000000 years being miserable and then infinitely many years being happy. Seems like a good bargain. But this is the position of everyone—each will spend finitely many years unhappy and then infinitely many years happy.

Maybe this has convinced you. But here’s another principle:

Better Moments Superiority: If some world contains infinite moments, and each one is infinitely good, it is better than another world that contains infinite moments that are each bad.

Seems true. But it conflicts with Better Lives Superiority. In Optimistic Cubeland, each moment is good, but each life is bad. In Pessimistic Cubeland, each life is good but each moment is bad.

(Unrelated, but whenever I name a principle, it always seems to have a dumb name. How do other people come up with cool-sounding names for principles? Is this something you learn in philosophy grad school? I once even came up with the name “the leveling down objection,” for some objection that was not the real leveling down objection. How does that happen? What are the odds that someone would get to the name leveling down objection before me?)

Askell demonstrates a paradox. The following principles are all plausible:

Pareto: If some world contains all the same people as another world, but some of them are better off in the first world than the second one, then the first is better than the second one.

Transitivity: If A is better than B which is better than C, then A is better than C.

Qualitativeness of Better Than or Equal To: If w3 is qualitatively identical to w1, meaning that in all respects that make them good or bad they are the same, and w4 is qualitatively identical to w2, then w3≥w4 only if w1≥w2. (w denotes worlds).

Permutation Principle: For any world pair, there is another qualitatively identical world pair.

But Askell shows that if we accept each of these principles, we cannot accept any of the earlier judgments about worlds—we must give up the belief that Mansion>Shack, Clement>Stormy, etc. We must think that there’s ubiquitous incomparability between Mansion and Shack, and Clement and Stormy. And for many of these, even if we give them up, they don’t solve the full problem—for example, if we give up transitivity, we get other similar problems. I personally think that accepting ubiquitous incomparability is probably the least costly option—but it has problems of its own. Gustafsson shows a potent money pump for incomparable views, I have another one, and this puzzle is, to my mind, a very decisive argument for incomparability. In addition, even if there’s some incomparability, it seems like Mansion>Shack, for instance—that doesn’t seem like a case of incomparability.

5 Another paradox

And I have so many
Questions
About life, the universe
And everything

Sage Crosby

Consider the following principles:

If some world is the same as another world except that it has infinitely many extra people who have good lives, it is better.

Increasing the well-being of an infinite number of people makes the world better.

Qualitativeness of Better Than or Equal To: If w3 is qualitatively identical to w1, meaning that in all respects that make them good or bad they are the same, and w4 is qualitatively identical to w2, then w3≥w4 only if w1≥w2. (w denotes worlds).

If two worlds have the same number of people at the same levels of virtue, who all have the same levels of well-being, they are qualitatively identical.

These conflict.

Consider world one which contains 1 person with 1 utility, 1 with 2 utility, 1 with 3 utility, 1 with 4 utility, etc.

World 2 contains 1 person with 4 utility, one with 8, 1 with 12, 1 with 16, etc.

World 3 contains one person with 4 utility, one with 8, one with 12, 1 with 16, etc. However, these people are matched such that each person is the same as one of the people in world 1 who has 1/4 of their well-being. So world 3 contains the person who has well-being of 1 in world 1, but they have well-being level of 4.

World 4 contains 1 person with 1 utility, 1 person with 2 utility, one with 3 utility, 1 with 4 utility, etc. However, the people in world 4 are matched with people in world 2 such that they have identical well-being levels. So world 4 contains all the same people as world 2 plus infinite extra people with good lives.

W4 means world 4, w3 means world 3, etc. Consider:

W4>w2 by principle 1.

W3>w1 by principle 2.

W3 is qualitatively identical to w2 by principle 4.

W4 is qualitatively identical to w1 by principle 4.

By principle 3 w3>w1 only if w2>w4.

Therefore, by 2 and 5, w2>w4.

6 and 1 contradict.

But each of these principles seems very plausible. Relatedly, suppose we compare W1 to W2. Which is better? Unclear. There are plausible arguments that each is better than the other—w2’s utility levels are a proper subset of w1’s, so w1 looks like w2 plus infinitely many extra happy people; yet w2 is just w1 with everyone’s well-being multiplied by 4.

6 The Pasadena puzzle and expected value

The evil, it spread like a fever ahead
It was night when you died, my firefly

Sufjan Stevens

It’s good to make other people’s lives better. But there are puzzles about how to do this under uncertainty. Suppose a person offers to save one person. Seems like a good deal. What if they instead offer to double the number of people saved, with 99.99999% probability (and otherwise save no one)? Seems like a good deal again. But if they keep improving the payouts this way, it seems to keep getting better, until there are absurdly low probabilities of super-high values. But that doesn’t seem better than the beginning: if you’re almost guaranteed not to save any people, that seems worse than being guaranteed to save lots of people.
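To make the escalation concrete, here is a small Python sketch (my own illustration, using the 99.99999% figure from above): each step doubles the number of people saved and multiplies the probability of success by 0.9999999. The expected number saved grows without bound, while the probability of saving anyone at all shrinks toward zero.

```python
import math

# Start: save 1 person with certainty. Each step doubles the number
# saved but multiplies the probability of success by 0.9999999.
p_step = 0.9999999

for n in (10**6, 10**7, 10**8):
    prob_any = p_step ** n  # chance that anyone is saved at all
    # log2 of the expected number saved: log2(2**n * p_step**n)
    log2_ev = n * (1 + math.log2(p_step))
    print(f"steps={n:>9}  P(save anyone)={prob_any:.2e}  log2(EV)={log2_ev:.3e}")
```

After a hundred million steps you are all but certain to save no one, even though the expected payout is astronomically large. That is the bargain under discussion.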

Beckstead and Thomas, in a paper I’ve talked about, show quite convincingly that everyone will have to say something bizarre about these cases. We either have to accept fanaticism—according to which low probabilities of insane value are better than certainty of lots of value—timidity—according to which you should sometimes pass up arbitrarily large increases in value at the cost of a slight drop in the odds of payouts—or intransitivity, which denies that “better than” is transitive: A can be better than B, and B better than C, without A being better than C. They then show that timidity has pretty absurd theoretical costs—ones that make it almost impossible to accept. I think that fanaticism is the best way to go—I’m not giving up transitivity.

If we’re fanatics, when calculating decisions, we’ll just look at their expected payouts. So look at all the possible outcomes of the action, figure out how good they are, multiply them by their probabilities, and get the value of the action. Notably, Gustafsson shows quite convincingly that any deviation from this way of making decisions leaves one open to paying infinite money to get nothing in return.

So in order to avoid being a fanatic, we’ll have to find some way out of Gustafsson’s money pump arguments and then on top of that, accept either intransitivity or timidity. Timidity is basically out—I don’t know if anyone has ever leveled a view as thoroughly as Beckstead and Thomas leveled timidity. The last option would be accepting intransitivity, but that’s really hard to stomach—transitivity is intuitively plausible and follows from various plausible money pumps and dominance principles. Given these costs, let’s assume you accept fanaticism.

There are various problems for being a fanatic, but the biggest one is probably that there isn’t actually a way to be a fanatic.

Suppose that some action has some probability of resulting in the following state of affairs.

In year 1, a person experiences 1 unit of utility.

In year 2, she experiences 2 units of disutility.

In year 3, she experiences 4 units of utility.

In year 4, she experiences 8 units of disutility.

How should we value this state of affairs? It doesn’t sum to any particular value, so it can’t be assigned one. Any nonzero probability of this state of affairs corrupts expected value reasoning: the action as a whole has no determinate expected value.

Okay, maybe you just decide to ignore states of affairs that have indeterminate expected value. But this solves nothing—each of these years has determinate expected value. The thing that has indeterminate expected value is the combination of the years. In fact, the Riemann rearrangement theorem shows that a conditionally convergent series can be rearranged to sum to any value whatsoever.
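The mathematical fact alluded to here is the Riemann rearrangement theorem: the terms of a conditionally convergent series, like the alternating harmonic series 1 - 1/2 + 1/3 - 1/4 + ..., can be reordered to sum to any target you like. Here is a Python sketch of the standard greedy construction (my own illustration, not anything from the sources discussed):

```python
def rearranged_sum(target, n_terms):
    """Greedily rearrange 1 - 1/2 + 1/3 - 1/4 + ... to approach `target`.

    Add the next unused positive term (1/odd) while at or below the
    target, otherwise add the next unused negative term (-1/even)."""
    total, p, q = 0.0, 1, 1
    for _ in range(n_terms):
        if total <= target:
            total += 1.0 / (2 * p - 1)  # next positive term: 1, 1/3, 1/5, ...
            p += 1
        else:
            total -= 1.0 / (2 * q)      # next negative term: -1/2, -1/4, ...
            q += 1
    return total

# The same terms, rearranged, home in on wildly different values:
print(rearranged_sum(2.0, 200_000))   # close to 2.0
print(rearranged_sum(-1.0, 200_000))  # close to -1.0
```

Exactly the same infinitely many terms, taken in different orders, converge to 2, to -1, or to anything else, which is why the order-free "sum of the years" has no determinate value.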

And this has problems even in non-far-fetched cases. Every action you take probably has a nonzero probability of causing something with indeterminate expected value. That means that all expected value reasoning is corrupted by this possibility—no action has a determinate expected value.

If you’re a fanatic—if you don’t discount low probabilities or think value caps out—then you can’t add up actions’ expected values. But if you’re not, then you have equally big problems.

7 Some solutions, some cactuses that I jump into, some reasons not to rethink your fundamental normative commitments, and some good news

And I'd give up forever to touch you
'Cause I know that you feel me somehow
You're the closest to heaven that I'll ever be
And I don't want to go home right now

And all I can taste is this moment
And all I can breathe is your life
And sooner or later, it's over
I just don't wanna miss you tonight

Goo Goo Dolls

Sigh. Here’s where I’ll have to give my solution. Things will get messy. Bullets will be bitten, cactuses will be embraced, and the passive voice will be used.

But first—you might look upon this with despair. If morality is this weird, if so many normative commitments that I believed confidently turn out to be false, does anything really matter? Why not give up on morality? If I have to deny that HEAVEN is better than HELL, then my moral intuitions are too screwed up to track the truth. Moral realism must be false.

I feel the pull of this position. But I think it’s dead wrong.

Yes, infinity breaks morality. But infinity breaks everything. You can’t even really do addition with infinity! And infinity generates all sorts of paradoxes even ignoring morality—we’ll talk about this more in the next section. Just as infinity shouldn’t cause you to give up belief in the physical world or modal facts, it shouldn’t cause us to give up belief in the other things that infinity breaks.

Infinity has weird, unintuitive properties. It’s no surprise that lots of things break—infinity is just bizarre. And when you start to really grasp how infinity works, it softens some of the cactuses. So here’s my painful, grueling solution to each of the paradoxes.

Even if we accept that ethics breaks in the infinite—that sometimes, there isn’t a good way to add up the value in an infinite universe, or determine if something is better—it’s still overwhelmingly plausible that some things matter. Even if value can’t be aggregated well in an infinite universe, there is still local value. Nothing happening over a billion galaxies away can affect the value of falling in love, for example, right where you are.

Even if ethics breaks in the face of the infinite, helping people is still worthwhile.

The first paradox, discussed in section 3, involved a conflict between

We can tell that HEAVEN is better than HELL.

and

Moving people around does not improve things if it changes nothing else.

HEAVEN was the world populated by infinitely many galaxies, each of which has a single miserable person and the rest happy people. HELL was the opposite—each galaxy had a single happy person and the rest miserable people.

I’m more confident in 2 than in 1. If you just move people around but don’t make anyone’s life better, that is not good. 1 seems right, but only because it seems like there’s some important sense in which HEAVEN has a larger % of happy people than HELL. But the mathematics of the infinite seems to require rejecting that judgment. On account of this, I’d also have to give up the conclusion that it’s bad when an infinite number of bad things happen—but I guess I’d be willing to give that up. That stops seeming as unintuitive when you really get your head around the infinite.

This is good news. In the next section, we’ll see another way out of this puzzle that implies that the universe is at most finitely bad. But if 1 is false, then contrary to what I previously believed, the universe is not infinitely bad (see Huemer for a defense of this view). That is literally the best possible news. The universe is, at worst, only finitely bad. To think that the universe is infinitely bad, you have to think that moving people around is morally important. But it’s clearly not.

Hallelujah! Infinite ethics dispels the most depressing possible notion according to which the existence of the universe is a tragedy of literally infinite proportions.

The second paradox came from Askell. It gave various super-plausible judgments about various worlds, and showed that if the following principles were true, we had to give up those principles about the worlds:

Pareto: If some world contains all the same people as another world, but some of them are better off in the first world than the second one, then the first is better than the second one.

Transitivity: If A is better than B which is better than C, then A is better than C.

Qualitativeness of Better Than or Equal To: If w3 is qualitatively identical to w1, meaning that in all respects that make them good or bad they are the same, and w4 is qualitatively identical to w2, then w3≥w4 only if w1≥w2. (w denotes worlds).

Permutation Principle: For any world pair, there is another qualitatively identical world pair.

I don’t actually find this one that troubling given the others. I’m happy giving up 3, 4, or the conclusion that there’s no ubiquitous incomparability. It seems plausible that infinite worlds with different populations are incomparable—so there is not, contrary to the Permutation Principle, another qualitatively identical world pair. In an infinite world, to keep Pareto, we have to say that people’s identities matter and that by permuting people, you can’t get a qualitatively identical world. Qualitativeness of Better Than or Equal To seems very plausible, but in an infinite world, it doesn’t seem too hard to give up. And I don’t really mind accepting ubiquitous incomparability—especially because the considerations raised in response to the first of the paradoxes should push us towards thinking that many of these worlds are incomparable.

The third paradox showed a conflict between the following principles.

If some world is the same as another world except that it has infinitely many extra people who have good lives, it is better.

Increasing the well-being of an infinite number of people makes the world better.

Qualitativeness of Better Than or Equal To: If w3 is qualitatively identical to w1, meaning that in all respects that make them good or bad they are the same, and w4 is qualitatively identical to w2, then w3≥w4 only if w1≥w2. (w denotes worlds).

If two worlds have the same number of people at the same levels of virtue, who all have the same levels of well-being, they are qualitatively identical.

I’d give up 3 or 4 before the others. Identity might matter to ethics—at least, in an infinite universe. Chappell’s paper on value receptacles explains why—we care about things that are good because they’re good for particular people. If you can’t compare infinite worlds with different populations, you can still make judgments about improving every actually existing person’s life.

The final paradox shows that if we’re fanatics, we have a huge problem with indeterminate expected values corrupting all expected value reasoning—and if we’re not, we have big problems even in finite cases. There are creative ways to rank lotteries that probably solve this—Wilkinson has a proposal, for instance. I don’t know if any of these work, but I’m optimistic that there’s at least one successful proposal, perhaps involving complicated math that I’m too mathematically impaired to understand (I almost got a C in calculus). There isn’t an impossibility proof here for ways to rank chance events, so I’m optimistic that there’s a solution.

8 Against the infinite, and against against the infinite

Cause you said forever, now I drive alone past your street

Olivia Rodrigo

Infinity breaks things. It certainly breaks ethics. Literally every counterintuitive conclusion in ethics that I have to accept stems from the infinite. So why believe in it? Can’t we be more confident that HEAVEN>HELL, and that moving people around doesn’t make the world better, than we are of anything about the metaphysics of infinity? If infinity requires us to accept bizarre, crazy things over and over again, shouldn’t we just say that you can’t have infinite stuff? Infinity is fine when it’s limited to mathematics and the abstract, but nothing concrete can be infinite.

And lots of people seem to find this intuitive. People like to say things like “infinity isn’t a number, it’s a concept.” I don’t really know what this means—every word refers to some concept—but it seems like that’s at least sort of on board with thinking you can’t have infinite stuff, right?

And it’s not just ethics that infinity breaks. Consider the following puzzles:

There is an infinitely large space. Each meter is filled with a grim reaper. Each will execute John unless some reaper to the left of them is going to execute him first. Will John be executed? No particular reaper can be the one who executes him, because there’s always a reaper to their left who would have done it first—but if none of them execute him, then each one’s condition is triggered, and each would have to execute him. Thus, John would paradoxically be executed both by all of them and by none of them.

There are various infinite worlds where the utility generated seems to depend on how you arrange things. For example, suppose we’re comparing two worlds, one with utilities of 1.5, then 2, then 3, then 4, then 5, and so on through infinity, versus one with utilities of 1, 2, 3, 4… through infinity. Which is better? It depends on how you pair the terms. You can pair them so that the first is always ahead by 0.5—1.5 goes with 1, 2 goes with 2, 3 goes with 3, and so on—or you can pair them so the second pulls ahead: the first starts with a lead of 1.5, but if you match each of the second world’s terms against an earlier term of the first (2 against 1.5, 3 against 2, 4 against 3, and so on), the first ends up infinitely far in the hole comparatively. You can just say that they’re both equal, which is what the mathematicians say, but then making people better off isn’t good from the standpoint of utility.
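A small Python sketch of the pairing trick, using finite truncations of the two worlds (my own illustration): matched term by term, the first world stays exactly 0.5 ahead forever, but matched against ever more terms of the second world, it falls arbitrarily far behind.

```python
N = 1000
world_a = [1.5] + [float(k) for k in range(2, N + 1)]  # 1.5, 2, 3, 4, ...
world_b = [float(k) for k in range(1, N + 1)]          # 1, 2, 3, 4, ...

for n in (10, 100, 500):
    aligned = sum(world_a[:n]) - sum(world_b[:n])      # term-by-term pairing
    shifted = sum(world_a[:n]) - sum(world_b[:2 * n])  # pair against more of B
    print(f"n={n:>4}  aligned diff={aligned}  shifted diff={shifted}")
```

Neither pairing is privileged in the infinite case, which is exactly why the comparison comes apart.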

You live an infinitely long time. You are instrumentally rational and trying to maximize money. Someone offers you a deal—you’ll get 1 dollar on day 1, -2 on day 2, 4 on day 3, etc. How should you value that deal?

There’s a lamp that alternates between switching on and off. After 1 second it switches on, after half a second it switches off, then after a quarter it’s on again, etc. After 2 seconds, is it on or off?

The world has an infinite amount of iron. You create some new iron. Is there more iron than before? It seems like the proponents of infinity say that there is not, but it seems like creating iron results in there being more iron.

The following three principles are each plausible in an infinite world but jointly inconsistent: (i) the world had infinite value before and after the holocaust; (ii) the holocaust was bad; (iii) bad things make the world worse.

There is a jar that is infinitely large and balls labeled 1 through infinity. After half a second, balls 1 through 10 are put into the jar and then ball 1 is removed. After a quarter of a second, balls 11-20 are put into the jar and then ball 2 is removed. After an eighth of a second, balls 21-30 are put into the jar and ball 3 is removed, and so on. After 1 second, how many balls are in the jar? One would think infinitely many, because each of the infinite steps increases the number of balls in the jar by nine. But there is no particular ball that could be in the jar, because for every ball n, ball n was removed at the nth step. So a jar containing no particular balls would somehow have infinitely many balls.
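This is the Ross-Littlewood setup; here is a Python sketch of the first n steps (my own illustration). After n steps the jar holds 9n balls, yet the lowest-numbered surviving ball keeps climbing, so every particular ball is eventually removed.

```python
def jar_after(n_steps):
    """At step k, balls 10k-9 through 10k go in and ball k comes out."""
    jar = set()
    for k in range(1, n_steps + 1):
        jar.update(range(10 * k - 9, 10 * k + 1))  # add ten new balls
        jar.remove(k)                              # remove ball k
    return jar

for n in (10, 100, 1000):
    jar = jar_after(n)
    # 9n balls remain, and the smallest surviving ball is n + 1
    print(f"steps={n:>5}  balls in jar={len(jar)}  smallest ball={min(jar)}")
```

The count diverges to infinity while the set of survivors marches off to infinity too, which is the paradox: in the limit, infinitely many balls but no particular ball.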

Here’s a plausible principle: things are bigger than mere parts of themselves. A skyscraper, for example, is bigger than part of a skyscraper. However, if the infinite exists, this is false—there are as many natural numbers as there are natural numbers greater than 10, even though the latter are a proper subset of the former. Perhaps this principle is not intuitively obvious when it comes to mathematical facts—but it is when it comes to actually existing concrete things.

Here’s a plausible principle: merely rearranging the people in a hotel cannot free up more rooms. But imagine a hotel containing an infinite number of people—one in each room. One more person wants to enter the hotel. You can open up space just by moving people: shift each guest from room n to room n+1, and room 1 becomes vacant. In fact, if you move the first guest to the first prime-numbered room, the second to the second prime, and so on, you can open up infinitely many rooms, because there are infinitely many primes and infinitely many composites. Now, people will point out that this can be coherently mathematically described, but so can a lot of things that are impossible, such as a world containing accurate paraconsistent logic.
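Here is a Python sketch of the simplest version of the move (guest n goes to room 2n rather than to the nth prime; the logic is the same): the map is injective, so every guest keeps a room to themselves, yet all the odd-numbered rooms open up.

```python
N = 1000  # look at the first N guests of the (in reality infinite) hotel

# Initially guest n occupies room n, for every n. Move guest n to
# room 2n: no two guests collide, since doubling is injective.
new_rooms = {guest: 2 * guest for guest in range(1, N + 1)}

occupied = set(new_rooms.values())
freed = [room for room in range(1, N + 1) if room not in occupied]
print(len(freed))  # all N/2 odd-numbered rooms among the first N are now free
```

Every guest still has a private room, yet half the original rooms are empty, and in the infinite hotel that is infinitely many newly free rooms from mere rearrangement.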

The following principle is plausible: if one gamble in expectation gives you a greater payout than another, it is the better gamble. However, this is inconsistent with the proposition that a gamble can’t be better than itself. Suppose someone offers you the Saint Petersburg gamble, which gives you a 1/2 chance of 2 utility, a 1/4 chance of 4 utility, a 1/8 chance of 8 utility, and so on. Its expected value is infinite (1/2 x 2 + 1/4 x 4 + 1/8 x 8 + … = infinity). But there is a zero percent chance that it pays out infinitely much; every possible outcome is finite. So, in expectation, the gamble gives you more than any of its possible outcomes, which by the earlier principle would mean it is better than itself.
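
The divergence is easy to see numerically, since each outcome contributes exactly 1 to the expected value. A quick sketch (the function name is mine):

```python
# St. Petersburg gamble: probability 1/2**k of a payout of 2**k utility.
# Each outcome contributes (1/2**k) * 2**k = 1 to the expected value,
# so the partial sums grow without bound although every payout is finite.
def partial_expected_value(num_outcomes):
    return sum((0.5 ** k) * (2 ** k) for k in range(1, num_outcomes + 1))

print(partial_expected_value(10))   # 10.0
print(partial_expected_value(100))  # 100.0
```

Include as many outcomes as you like and the running expected value matches the count, so no finite payout can ever equal the gamble's expectation.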

We have literally dozens of plausible principles that conflict with belief in the possibility of the infinite. Even views that seem plausible at first should be given up once they require denying dozens of plausible principles, including that heaven is better than hell. And it’s not even intuitively obvious that you can have infinite amounts of stuff; I don’t really have any intuitions in that vicinity.

Alternatively, maybe we can believe in the infinite, but think that it doesn’t have a lot of the weird properties that mathematicians say it has. Maybe infinity + 1 is greater than infinity. I’m told that there are some versions of infinity like this involving hyperreal numbers and surreal numbers. I don’t know what any of those things are, but if they get me out of the paradoxes of the infinite, I’ll take them!

I’m sympathetic to this solution of rejecting the infinite or changing our account of it; I probably have above 50% credence in it. That said, it is theoretically very costly.

Let’s start with changing the mathematics of the infinite. The problem is that many of the infinities surveyed don’t seem to admit any good way of saying one is bigger than another. Is 1+2+3+4+5+… greater than 4+8+12+16+…? The terms of the second series form a subset of the terms of the first, yet the second is also the first multiplied by 4, term by term. I don’t think any of these alternative versions of mathematics can save all the judgments I appeal to, but I could be wrong; I really don’t know much about the mathematics of the infinite.

The better solution is to say that there can’t be infinite amounts of stuff. There are big problems with accepting this too; I think it’s less costly than accepting the possibility of the infinite, but there are big bullets to be bitten either way.

It seems like space is infinitely divisible. This just seems intuitive: how could there be a smallest unit of space? What if you took half of it? That seems possible. In addition, physics seems to suggest that space is infinitely divisible, though it doesn’t settle the matter. The same considerations seem to imply that time is infinitely divisible.

It seems like space and time can both be infinite. Why would the future be necessarily finite? Does Saint Peter have to blink everyone out of existence after 1 billion years, because the party can’t last forever? That seems weird. And what about space? Could there really be an edge of space? That seems inconceivable—what happens if you bump up against the edge?

I think these are both potent considerations, but they’re not totally decisive. In response to the first set, you can allow that things have infinitely many parts while denying that there can be an infinite amount of stuff. So you can cut up space as much as you want, but there can’t be an infinite amount of space. In addition, the evidence from physics is thin and speculative, and it isn’t intuitively obvious that space is infinitely divisible: if space emerges from something more fundamental, for example, there might be a smallest unit of it. And perhaps puzzles like Zeno’s paradox show that spacetime can’t be infinitely divisible.

As for the second pair of considerations, the one about time isn’t super forceful. I’m sympathetic to the B-theory of time, according to which all moments exist together on a four-dimensional block. If that’s true, then a limit to the future and past is no weirder than a limit to space. And it seems like space can be limited: if you bump up against the edge, you can’t move any further, because there just isn’t any more reality beyond that part of space. That seems possible to me. Alternatively, perhaps space needn’t have an edge at all, because it curves back in on itself.

One might object that this is arbitrary. Why is it that you can’t have infinite stuff? This is like saying you can’t have five objects because thinking you can produces paradoxes. But there are some numbers that don’t describe amounts of things you can have—for example i, which is the square root of negative 1. You can’t have i apples. Infinity can be like that—it’s not an amount of stuff that you can have.

But surely there are infinitely many numbers? And infinitely many possible worlds? If infinity can describe the number of abstract objects, why can’t it apply to concrete things? Maybe this is just primitive, with no deeper explanation, but that’s not very satisfying. Notice the disanalogy with i: you can’t have i apples, but you also can’t have i abstract objects. Maybe one could work out a theory of infinity that naturally explains why infinity applies only to abstracta, but it’s not so clear.

This doesn’t totally solve all the paradoxes. In section 6, we saw that problems arise if you assign non-zero probability to the infinite. But if we can work out a nice ethics of all possible scenarios, and there are just impossible scenarios that we have non-zero credence in that break it, this doesn’t seem as troubling. It’s like being troubled by the impossibility of taking the expected value of an action that has some probability of giving someone i utils or that produces a contradictory number of utils—maybe it breaks, but it seems like you could avoid this with a good theory of moral uncertainty or something. If the scenarios where ethics breaks can’t arise because they’re impossible, while there might still be some puzzles, ethics will have escaped the feral talons of the infinite.

9 Takeaway and TLDR

The infinite is weird. It breaks things. It requires that we give up lots of plausible principles. There are basically two solutions.

1. Bite a bunch of bullets. This is unfortunate and theoretically costly, but infinity is weird, so it’s not the end of the world.

2. Reject the infinite. That way, you just have to accept weird implications about kinds of infinity that seem possible but are, on this account, impossible.

I lean towards the second option. But if you go with the first option, then, while this will be costly, it is not a sizeable enough cost to give up on ethics. This exposes the weirdness of the infinite, not a problem with ethics. In addition, it’s not hard to vindicate action in an ordinary universe—just accept that if actions have finite consequences, you add up the utility of the things they affect.

Some people try to rescue the infinite by adopting rankings of worlds that are sensitive to spatiotemporal location; see Wilkinson, for instance. But this doesn’t address the possibility of conscious agents that lack a spatial location, so it is not a workable solution.

Infinite ethics is everyone’s problem. Fortunately, there are solutions, even if they range from terrible to not great. If infinite ethics required choosing between giving up the infinite and giving up ethics, I’d give up the infinite in a heartbeat. But it doesn’t come to that: worst case scenario, we give up some plausible principles and then keep trying to do good. Because it’s way more obvious that you should do things to help tortured farm animals or children dying of malaria than anything about the infinite.

25 comments
[-] dr_s 8mo

Oh great, ethics and set theory together, what could possibly go wrong.

Ahem.

OK, so personally I reject totalism as nonsense to begin with so I get to laugh at your problems from my "of course HEAVEN is better than HELL if you stop trying to just sum things" pedestal. But if I accepted totalism, I wouldn't really see a problem with saying that HEAVEN and HELL are equivalent. They just are, in that sense. If a subtler answer exists, it probably rests in some deep mathematics that I do not know how to use. But as things stand, the only unsettling thing is that you generalised a finite problem by making it infinite and it feels like the answer changed, but IMO that's underestimating just how deeply the problem changed too. You want to make a true HEAVEN, you need an uncountable infinity of happy people. Make one happy person for every possible set of the (infinite) unhappy ones.

You don't have to be a totalist to think that HEAVEN>HELL.  The problem is that it's intuitively obvious that a universe whose galaxies are each filled almost entirely with happy people would be better than one whose galaxies are filled almost exclusively with miserable people.

[-] dr_s 8mo

No, it's the opposite, you have to be a totalist to even doubt it. For example, if there are more happy than unhappy people, obviously the average is positive. It's only if you compare raw summed totals that you get "oops they're both infinities, can't tell the difference" confusion.

You have not understood the problem.  There are not more happy people than unhappy people in any rigorous sense--the two infinities have the same cardinality.  And the Pasadena game scenario gives indeterminate averages.  Also, average utilitarianism is crazy: it implies you should create lots of miserable people in hell as long as they're slightly less miserable than the people who already exist.

[-] dr_s 8mo

I'm not an average utilitarian either - I don't think it's easy to define a good utility function at all, and I wrote a whole post to jokingly talk of this problem. My point was that only totalists would encounter this specific issue. If the galaxy has 1 trillion people, of which only one is unhappy, you can easily get the average for a single galaxy, which is finite. And since all galaxies have the same average, it can't really change if you just take more of them, no? Even numbers have the same cardinality as natural numbers, but we can still say that the density of even numbers on the natural number line is 1/2. This is not a Pasadena scenario, this is just a regular old limit of the ratio of two linear functions. Average utilitarianism has other issues, but on this, it captures our intuition exactly right.

You can also get the total of a single galaxy--the problem is how you count up things in an infinite world. 

[-] dr_s 8mo

Yes but the total accumulates, the average does not.

If you rearrange heaven to hell, you get a different average.  So you either have to think rearrangement matters or that they're equal.  

[-] dr_s 8mo

No, you don't. This is like saying that if you rearrange the even numbers, they stop being roughly half of all naturals. They're still one every two. If you pick a large enough ensemble, you notice that. The arrangement with one unhappy person per galaxy is very convenient, but it's the other way around - if the arrangement was inconvenient but the ratio was given, we could group them this way to make the calculation simpler. Relevant concept: Natural Density.

There are as many even numbers as there are total numbers.  They are the same cardinality.  

[-] dr_s 8mo

Yes, but the natural density of even numbers is 1/2. And that is the natural extension to infinity of your intuition that there are more happy than unhappy people in the HEAVEN universe.

If you were born as a person at random in HEAVEN, you'd be most likely happy!
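
The natural density dr_s is appealing to can be computed directly: it is the limiting fraction of the first N naturals that the set contains. A quick sketch (the function name is mine):

```python
# Natural density: the fraction of the numbers 1..N that are even
# tends to 1/2 as N grows, even though the evens and the naturals
# have the same cardinality.
def density_of_evens(N):
    return sum(1 for n in range(1, N + 1) if n % 2 == 0) / N

print(density_of_evens(10))    # 0.5
print(density_of_evens(1001))  # 500/1001, approaching 1/2
```

Unlike cardinality, this notion does distinguish the evens from the full set of naturals, which is what grounds the "a random person in HEAVEN is probably happy" intuition.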

https://en.wikipedia.org/wiki/Measure_(mathematics) -- you can have infinities without heaven being equal to hell.

What bridges Nietzsche and infinite ethics is the idea of eternal return. Nietzsche was the first infinite ethicist. 

More generally, infinite ethics implies big world immortality, and most of the timelines where I am immortal are ones where I am supported by some advanced AI. Now the question is whether it will be a friendly entity or a hostile, s-risk-creating one.

Thus the difference between hell and paradise in infinite ethics is one bit in AI's value function.

For me, the timelines where I am immortal are where I am supported by God.

In some Hegelian sense, Superintelligence is God which self-evolves from matter. 

That's a pretty interesting point.

Also these discussions usually seem to break down around the definition of 'God', because of monotheism a lot of folks think 'God' = omnipotence. 

But that's logically impossible; the most that's possible is near-omnipotence, i.e. Superintelligence as you put it.

We could push the analogy even farther:

The mathematical universe is God the Father.

Artificial intelligence is its son, as an agent built on the same computational principles.

But we should be careful with such analogies.

You know the way lots of people get obsessed with Nietzsche for a while? They start wearing black, becoming goth, smoking marijuana, and talking about how like “god is dead, nothing matters, man.” This never happened to me, in part because Nietzsche doesn’t really make arguments, just self-indulgent rambles.

This is objectionable in many ways. To say that one of the most influential German philosophers produced only self-indulgent rambles is a sufficiently outrageous claim that you should be required to provide an argument in its favor.

I don't even disagree entirely. I view Nietzsche as more of a skilled essay-writer than a philosopher, who tried to appeal more to aesthetics than reason alone, but reducing Nietzsche to a sort-of 19th century "influencer"-type is ridiculous.

I have an argument for a way in which infinity can be used but which doesn't imply any of the negative conclusions. I'm not convinced of its reasonableness or correctness though.

I propose that infinite ethics should only be reasoned about by proof by induction. When done this way, the only way to reason about HEAVEN and HELL is by matching up galaxies in each universe and doing induction across all of the elements:

Theorem: The universe HEAVEN that contains n galaxies is a better universe than HELL which contains n galaxies. We will formalize this as HEAVEN(n) > HELL(n). We will prove this by induction.

  • Base case, HEAVEN(1) > HELL(1): 
    • The first galaxy in HEAVEN (which contains billions of happy people and one miserable person) is better than the first galaxy in HELL (which contains billions of miserable people and one happy person), by our understanding of morality.  
  • Induction step HEAVEN(n) > HELL(n) => HEAVEN(n+1) > HELL(n+1):
    • HEAVEN(n) > HELL(n) (given)
      HEAVEN(n) + billions of happy people + 1 happy person > HELL(n) + billions of miserable people + 1 miserable person (by understanding of morality)
      HEAVEN(n) + billions of happy people + 1 miserable person > HELL(n) + billions of miserable people + 1 happy person (moving people around does not improve things if it changes nothing else.)
      HEAVEN(n + 1) > HELL(n + 1) □

A downside of this approach is that you lose the ability to reason about uncountably infinite numbers. However, I think that's a bullet that I am willing to bite, to only be able to reason about a countably infinite number of moral entities.

That implies that order matters!  If you rearrange heaven, you get hell.  There are other problems with ordering--a conditionally convergent series can be rearranged to sum to any number.
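
The point about arrangement-dependent sums is the Riemann rearrangement theorem: a conditionally convergent series can be reordered to converge to any value. A sketch with the alternating harmonic series (the function names are mine):

```python
# The alternating harmonic series 1 - 1/2 + 1/3 - 1/4 + ... sums to ln(2),
# but reordering it as "one positive term, then two negative terms"
# makes it converge to ln(2)/2 instead: order changes the sum.
import math

def original_partial(num_terms):
    return sum((-1) ** (k + 1) / k for k in range(1, num_terms + 1))

def rearranged_partial(num_blocks):
    # Block i contributes +1/(2i-1) - 1/(4i-2) - 1/(4i).
    return sum(
        1 / (2 * i - 1) - 1 / (4 * i - 2) - 1 / (4 * i)
        for i in range(1, num_blocks + 1)
    )

print(round(original_partial(10 ** 6), 4))    # 0.6931 -- about ln(2)
print(round(rearranged_partial(10 ** 6), 4))  # 0.3466 -- about ln(2)/2
```

Both series use exactly the same terms; only the order differs, yet the limits disagree.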

I don’t think that it does? There are infinitely many arrangements, but the same proof by induction applies to any possible arrangement.

Wait, do you agree that rearranged heaven gets hell?  If so, you either have to deny that HEAVEN>HELL or that arrangement matters.  

You're assuming we're comparing them by galaxies.  But there's no natural way to individuate that explains why we should do that.  

I’m claiming that we should only ever reason about infinity by induction-type proofs. Due to the structure of the thought experiment, the only thing it is possible to count in this way is galaxies, so (I claim) counting galaxies is the only thing you’re allowed to use for moral reasoning. Since all of the galaxies in each universe are moral equivalents (either all happy but one or all miserable but one), how you rearrange galaxies doesn’t affect the outcome.

(To be clear, I agree that if you rearrange people under the concepts of infinity that mathematicians like to use, you can turn HEAVEN into HELL, but I’m claiming that we’re simply not allowed to use that type of infinity logic for ethics.)

Obviously this is taking a stance about the ways in which infinity can be used in ethics, but I think this is a reasonable way to do so without giving up the concept of infinity entirely.

Why is the only thing we can use galaxies?  We can compare people in any number of ways.

If you rearrange people, standard mathematics says that you can turn HEAVEN into HELL.  Infinity/1 billion = infinity.  You have to change the math of infinity, not just the math of ethics where you add up infinity.