A Stranger Priority? Topics at the Outer Reaches of Effective Altruism (my dissertation)


Regarding simulation arguments: they all have the problem of infinite regress, i.e. every layer of reality could itself be contained within another as a simulation.

Which seems to lead to an unsatisfactory conclusion. Have you considered a way around it?

The real issue I have with simulation arguments isn't that they're wrong, but that intuitions are being ported over from the finite case, where infinite regress is a problem, to the infinite case, where it isn't.

Putting it another way, for our purposes it all adds up to normality for the most part.

Let's say we create an idealized real computer, a computer that has uncountably infinite computing power and uncountably infinite memory via reducing the Planck constant to 0.

This is a hypercomputer, a computer that is more powerful than a Turing machine.

Then we ask it to simulate exactly a civilization capable of building this computer, and that simulated civilization builds one that simulates another, ad infinitum.

The important thing is that while there's an infinite regress here, there's no logical or mathematical contradiction, unlike in the finite case. Adding or multiplying infinities, even infinitely many times, yields an infinity of the same cardinality. In the finite case, by contrast, a computer with finite memory multiplied by an infinite number of simulations would need infinite memory, which contradicts our description of the finite computer we started with.
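As a toy illustration of the finite case (a purely schematic Python sketch, not anything from the thread): a finite machine attempting this exact regress exhausts its resources, whereas the idealized hypercomputer, by hypothesis, would not.

```python
import sys

def simulate(depth=0):
    # Each simulated civilization builds its own simulator in turn:
    # the infinite regress from the thought experiment.
    return simulate(depth + 1)

# On a finite machine the regress cannot actually be sustained.
sys.setrecursionlimit(1000)
try:
    simulate()
except RecursionError:
    print("finite machine: the regress halts for lack of resources")
```

The point is only that the contradiction lives in the finite case; nothing about the recursion itself is logically incoherent.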

This is another way in which infinity doesn't add up to normality or common sense, and messes with our intuitions.
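The cardinal arithmetic behind this is a standard result (assuming the axiom of choice); taking the memory size to be the uncountable cardinal κ = 2^ℵ₀, as in the hypercomputer example:

```latex
\kappa + \kappa = \kappa \cdot \kappa = \kappa
\qquad \text{for every infinite cardinal } \kappa .
```

Even countably infinitely many nested simulations, each of size $\kappa$, require only

```latex
\kappa^{\aleph_0} = \left(2^{\aleph_0}\right)^{\aleph_0}
= 2^{\aleph_0 \cdot \aleph_0} = 2^{\aleph_0} = \kappa ,
```

so the regress never outgrows the hypercomputer's memory, whereas any finite memory is immediately outgrown.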

> Let's say we create an idealized real computer, a computer that has uncountably infinite computing power and uncountably infinite memory via reducing the Planck constant to 0.

This is impossible, in this universe at least: there's a maximum limit to information density per unit volume, the Bekenstein bound.
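For reference, the Bekenstein bound on the entropy $S$ of a system of energy $E$ enclosed in a sphere of radius $R$ is

```latex
S \le \frac{2\pi k_B R E}{\hbar c} ,
```

where $k_B$ is the Boltzmann constant. Note that $\hbar$ sits in the denominator: as $\hbar \to 0$ the bound diverges, which is exactly why setting the Planck constant to zero would remove the limit on information density.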

Yep, I know that; indeed, the only reason we can't build hypercomputers is that the Planck constant is not 0.

It isn't logically impossible, which is my point here. It's likely physically impossible to do, but physically impossible is not the same as mathematically or logically impossible.

A better example of something logically/mathematically impossible is doubling the cube using only a compass and straightedge.
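The standard impossibility argument, sketched: every compass-and-straightedge-constructible length lies in a tower of quadratic extensions of $\mathbb{Q}$, so its degree over $\mathbb{Q}$ is a power of 2,

```latex
[\mathbb{Q}(x) : \mathbb{Q}] = 2^n ,
```

while doubling the unit cube requires constructing

```latex
x = \sqrt[3]{2}, \qquad [\mathbb{Q}(\sqrt[3]{2}) : \mathbb{Q}] = 3
```

(the minimal polynomial $x^3 - 2$ is irreducible over $\mathbb{Q}$), and 3 is not a power of 2, so no such construction exists.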

The definition of the Planck constant entails that it must be non-zero. A zero constant wouldn't be anything at all; hence it's a logical impossibility for any constant to be zero.

This is a pretty common formulation. There are many more reference sources that are publicly accessible.

Alright, I'll concede somewhat. Yes, the constants aren't manipulable, but nowhere does this show that a constant that just happens to be 0 is logically impossible, so I can reformulate the argument so that it doesn't need manipulation of the constants.

And this is for a reason: physics uses real numbers, and since 0 is a real number, it's logically possible for a constant to be 0.

Also, see vacuum decay for an example of how a constant may change to a new value.

> And this is for a reason: physics uses real numbers, and since 0 is a real number, it's logically possible for a constant to be 0.

What? This doesn't make any sense.

Physicists definitely use more than just real numbers, and all well-known physics journals have papers that contain them. You can verify this for yourself.

And even if for some reason that was not the case, what can be considered a constant has more than one requirement.

Logical possibility can also entail multiple prerequisites.

> Regarding simulation arguments: they all have the problem of infinite regress, i.e. every layer of reality could itself be contained within another as a simulation.

Well, no...you keep having to lose size or resolution or something.

(Cross-posted from my website.)

After many years of focusing on other stuff, I recently completed my doctorate in philosophy from the University of Oxford. My dissertation ("A Stranger Priority? Topics at the Outer Reaches of Effective Altruism") was three of my essays -- on anthropic reasoning, simulation arguments, and infinite ethics -- revised, stapled together, and unified under the theme of the "crazy train" as a possible objection to longtermism.

The full text is here. I've also broken the main chapters up into individual PDFs:

Chapter 1 and Chapter 3 are pretty similar to the original essays (here and here). Chapter 2, however, has been re-thought and almost entirely re-written -- and I think it's now substantially clearer about the issues at stake.

Since submitting the thesis in fall of 2022, I've thought more about various "crazy train" issues, and my current view is that there's quite a bit more to say in defense of longtermism than the thesis has explored. In particular, I want to highlight a distinction I discuss in the conclusion of the thesis, between what I call "welfare longtermism," which focuses on our impact on the welfare of future people, and what I call "wisdom longtermism," which focuses on reaching a wise and empowered future more broadly. The case for the latter seems to me more robust to various "crazy train" considerations than the case for the former.