Anthropics · Simulation Hypothesis · World Modeling
A Stranger Priority? Topics at the Outer Reaches of Effective Altruism (my dissertation)

by Joe Carlsmith
21st Feb 2023
1 min read
16 comments

M. Y. Zuo (3y)

In regards to simulation arguments, they all have the problem of infinite regress, i.e. every layer of reality could be contained within another as a simulation.

This seems to lead to an unsatisfactory conclusion. Have you considered a way around it?

Noosphere89 (3y)

The real issue I have with simulation arguments isn't that they're wrong, but rather that intuitions are being ported over from the finite case, where infinite regress is a problem, and in the infinite case infinite regress isn't a problem.

Put another way, for our purposes it all adds up to normality, for the most part.

M. Y. Zuo (3y)

and in the infinite case infinite regress isn't a problem.

Can you elaborate on this?

Noosphere89 (3y)

Let's say we create an idealized real computer, a computer that has uncountably infinite computing power and uncountably infinite memory via reducing the Planck constant to 0.

This is a hypercomputer, a computer that is more powerful than a Turing machine.

Then we ask it to exactly simulate a civilization that can build this computer, which in turn simulates another, ad infinitum.

The important thing is that while there's an infinite regress here, there's no logical or mathematical contradiction, unlike in the finite case: multiplying or adding infinities together, even infinitely many times, still gets you an infinity of the same cardinality. In the finite case, by contrast, a computer with finite memory multiplied by an infinite number of simulations would have to be infinite, which contradicts our description of the finite computer we started with.

This is another way in which infinity doesn't add up to normality or common sense, and messes with our intuitions.
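To make the arithmetic concrete, here is a minimal sketch, assuming the hypercomputer's memory has the cardinality of the continuum, $\mathfrak{c} = 2^{\aleph_0}$, and that the regress is countably deep:

$$\mathfrak{c} + \mathfrak{c} = \mathfrak{c}, \qquad \mathfrak{c} \cdot \mathfrak{c} = \mathfrak{c}, \qquad \mathfrak{c}^{\aleph_0} = \left(2^{\aleph_0}\right)^{\aleph_0} = 2^{\aleph_0 \cdot \aleph_0} = 2^{\aleph_0} = \mathfrak{c}.$$

So a continuum-sized memory can host countably many nested continuum-sized simulations without outgrowing itself, whereas a finite memory of $m$ bits cannot host infinitely many nested simulations that each take at least one bit, since that would require at least $\aleph_0 > m$ bits.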

M. Y. Zuo (3y)

Let's say we create an idealized real computer, a computer that has uncountably infinite computing power and uncountably infinite memory via reducing the Planck constant to 0.

This is impossible, in this universe at least: there's a maximum amount of information that can be contained in any finite region, the Bekenstein bound.
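As a rough sketch of why, using the standard form of the Bekenstein bound on the entropy $S$ of a region of radius $R$ containing total energy $E$:

$$S \le \frac{2 \pi k_B R E}{\hbar c},$$

this is finite for any finite $R$ and $E$ so long as $\hbar > 0$, and diverges only as $\hbar \to 0$, which is presumably why the scenario above has to take the Planck constant to zero.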

Noosphere89 (3y)

Yep, I know that; indeed, the only reason we can't make hypercomputers is that the Planck constant is not 0.

M. Y. Zuo (3y)

So your proposed scenario is logically impossible. Why then does it matter for any case?

Noosphere89 (3y)

It isn't logically impossible, which is my point here. It's likely physically impossible to do, but physically impossible is not equivalent to mathematically/logically impossible.

A better example of something logically/mathematically impossible is doubling the cube using only a straightedge.
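As a quick sketch of why that is impossible: doubling a unit cube requires constructing a segment of length $\sqrt[3]{2}$, and

$$[\mathbb{Q}(\sqrt[3]{2}) : \mathbb{Q}] = 3,$$

whereas every length constructible with straightedge and compass lies in an extension of $\mathbb{Q}$ whose degree over $\mathbb{Q}$ is a power of $2$. So $\sqrt[3]{2}$ isn't constructible even with a compass, let alone with a straightedge alone.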

M. Y. Zuo (3y)

The definition of the Planck constant entails that it must be non-zero. A zero constant isn't anything at all. Hence it's a logical impossibility for any constant to be zero.

Noosphere89 (3y)

Hm, can you show me where the definition of the physical constants entails their being non-zero?

M. Y. Zuo (3y)

https://en.wikipedia.org/wiki/Physical_constant#:~:text=A%20physical%20constant%2C%20sometimes%20fundamental,have%20constant%20value%20in%20time.

 

This is a pretty common formulation. There are many more reference sources that are publicly accessible.

Noosphere89 (3y)

Alright, I'll concede somewhat. Yes, the constants aren't manipulable, but nowhere does it show that a constant that just happens to be 0 isn't logically possible, and thus I can reformulate the argument so that it doesn't need manipulation of the constants.

And this is for a reason: Physics uses real numbers, and since 0 is a real number, it's logically possible for a constant to be at 0.

Also, see vacuum decay for how a constant may change to a new number.

M. Y. Zuo (3y)

And this is for a reason: Physics uses real numbers, and since 0 is a real number, it's logically possible for a constant to be at 0.

What? This doesn't make any sense.

Physicists definitely use more than just real numbers, and all well-known physics journals have papers that contain them. You can verify this for yourself.

And even if for some reason that were not the case, what can be considered a constant has more than one requirement.

Logical possibility can also entail multiple prerequisites.

Noosphere89 (1y)

In retrospect, I still believe you can get a 0 constant to be logically possible under the real, rational, and integer number systems, and under one definition of the natural numbers; I was just stating this too quickly here. Those are just the number systems I know, and physics almost certainly uses rational and integer numbers constantly, as well as natural numbers.

TAG (3y)

In regards to simulation arguments, they all have the problem of infinite regress, i.e. every layer of reality could be contained within another as a simulation

Well, no...you keep having to lose size or resolution or something.

M. Y. Zuo (3y)

Well, no...you keep having to lose size or resolution or something.

Can you elaborate?


(Cross-posted from my website.)

After many years of focusing on other stuff, I recently completed my doctorate in philosophy from the University of Oxford. My dissertation ("A Stranger Priority? Topics at the Outer Reaches of Effective Altruism") was three of my essays -- on anthropic reasoning, simulation arguments, and infinite ethics -- revised, stapled together, and unified under the theme of the "crazy train" as a possible objection to longtermism.

The full text is here. I've also broken the main chapters up into individual PDFs:

  • Chapter 1: SIA vs. SSA
  • Chapter 2: Simulation arguments
  • Chapter 3: Infinite ethics and the utilitarian dream

Chapter 1 and Chapter 3 are pretty similar to the original essays (here and here). Chapter 2, however, has been re-thought and almost entirely re-written -- and I think it's now substantially clearer about the issues at stake.

Since submitting the thesis in fall of 2022, I've thought more about various "crazy train" issues, and my current view is that there's quite a bit more to say in defense of longtermism than the thesis has explored. In particular, I want to highlight a distinction I discuss in the conclusion of the thesis, between what I call "welfare longtermism," which focuses on our impact on the welfare of future people, and what I call "wisdom longtermism," which focuses on reaching a wise and empowered future more broadly. The case for the latter seems to me more robust to various "crazy train" considerations than the case for the former.