Suppose you're a human, and you think we're onto something with modern physics. A human brain fits within a sphere no more than a meter in radius and weighs less than 100 kg. By the Bekenstein bound, the average human brain could hold at most about 2.6×10^42 bits before collapsing into a black hole; for the generous upper bounds I just gave, that number is larger, but note, it is still finite.
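
For concreteness, here is the back-of-the-envelope arithmetic behind that figure, assuming (as the standard estimate does) a brain mass of about 1.5 kg and an effective spherical radius of about 6.7 cm:

$$I \le \frac{2\pi R E}{\hbar c \ln 2} = \frac{2\pi R m c}{\hbar \ln 2} \approx \frac{2\pi \,(0.067\,\mathrm{m})(1.5\,\mathrm{kg})(3\times 10^{8}\,\mathrm{m/s})}{(1.05\times 10^{-34}\,\mathrm{J\,s})\,\ln 2} \approx 2.6\times 10^{42}\ \text{bits}.$$

A larger radius and mass only scale the bound linearly, so it stays finite.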

Now let's say you want to make decisions. What does it mean to make a decision? That's tricky, but we can say it has something to do with performing actions based on sense-perceptions. Assume that your sense-perception is full awareness of your entire brain and its information contents, and that your action space is the ability to completely modify those contents. We now have a finite perception space and a finite action space, and therefore a finite decision space, wherein we define a function that assigns to every element of perception space a best choice in action space. Any such function can be given by a finite-length program.
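
As a toy illustration (the spaces and labels below are invented, and absurdly small, purely for the example), a decision function over finite spaces is literally a finite table, and there are only finitely many such tables:

```python
# Toy finite perception and action spaces (made-up labels for illustration).
perceptions = ["hungry", "not_hungry"]
actions = ["eat", "wait", "nap"]

# A decision function is just a finite map: perception -> action.
policy = {"hungry": "eat", "not_hungry": "nap"}

def decide(perception: str) -> str:
    """Return the chosen action for a given perception."""
    return policy[perception]

# The whole decision space is finite: |actions| ** |perceptions| possible
# policies, each expressible as a finite-length program (or table).
num_policies = len(actions) ** len(perceptions)
print(decide("hungry"), num_policies)  # eat 9
```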

All a useful decision theory needs to do is tell you one of the best actions for a given perception. The best action may not be unique, but you don't even need to assign utility functions to say what "best" means. All you need is a partial order for your preferences over the collection of actions, so let's talk only about preferences (which could be given by utility functions, but don't have to be).
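
A minimal sketch of that idea, with an invented preference relation: to act, you only need some action that nothing else is strictly preferred to, and a partial order gives you that without any utility function.

```python
actions = ["eat", "wait", "nap"]

# (a, b) in strictly_preferred means a is strictly preferred to b.
# The order is partial: "wait" and "nap" are simply never compared.
strictly_preferred = {("eat", "wait"), ("eat", "nap")}

def maximal_actions(actions, strictly_preferred):
    """Actions that no other action is strictly preferred to."""
    return [a for a in actions
            if not any((b, a) in strictly_preferred for b in actions)]

print(maximal_actions(actions, strictly_preferred))  # ['eat']
```

Any of the returned maximal actions counts as "one of the best"; uniqueness was never required.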

"Now, now!" You say. "What about potentially larger actors, such as actors that I could modify myself into?" Well, first of all, you still have a finite decision space and you would have to be choosing between what actors to become. Just because you can think about an infinite path of actors that you could transform into, step after step, and perhaps achieve something increasingly large doesn't mean that there is a useful corresponding decision theory for you now.

We can also use Bremermann's Limit to show that even if you somehow managed to survive for the entire expected lifetime of the universe (as given by our very useful thermodynamic models), you could still only perform a finite amount of computation before taking any action. This means you can at best make a finite-accuracy prediction of possible futures. At some fidelity your ability to predict infinite processes accurately fails, and you weigh the possible futures you can predict, from finite time and finite sense-perceptions, into a choice in a finite action space.
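
Sketching the arithmetic: Bremermann's Limit caps computation at roughly c²/h ≈ 1.36×10^50 bits per second per kilogram, so for any finite mass m and lifetime T the total is finite. Plugging in an arbitrary, purely illustrative 1.5 kg and 10^100 seconds:

$$N \le \frac{c^2}{h}\, m\, T \approx 1.36\times 10^{50}\,\tfrac{\text{bits}}{\text{kg}\cdot\text{s}} \times 1.5\,\text{kg} \times 10^{100}\,\text{s} \approx 2\times 10^{150}\ \text{bits},$$

absurdly large, but still finite.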

9 comments

If your argument is that "finite and bounded humans cannot coherently think about infinities", then most of math is a counterexample. We can "predict" what an infinitely long convergent series will converge to, for example. Or infer that the position/momentum uncertainty relation implies an infinite-dimensional Hilbert space. A lot of reasoning about infinities does not require infinite computation. If you are saying that "infinite ethics" is not in this class, you have to motivate your argument rather precisely.
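
For example, finitely many algebraic steps determine the value of an infinite sum exactly:

$$\sum_{n=0}^{\infty} \frac{1}{2^{n}} = \frac{1}{1-\tfrac{1}{2}} = 2.$$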

I don't accept "math" as a proper counterexample. Humans doing math aren't always correct, how do you reason about when math is correct?

My argument is less "finite humans cannot think about infinities perfectly accurately" and more "your belief that humans can think about infinities at all is predicated on the assumption (which can only be taken on faith) that the symbols you manipulate relate to reality and its infinities at all."

"how do you reason about when math is correct?"

Not sure what you mean by this...

"your belief that humans can think about infinities at all is predicated upon the assumption (which can only be taken on faith) that the symbol you manipulate relates to reality and its infinities at all."

Seems like one of those fully general counterarguments about the limits of knowledge: "your belief that humans can think about [X] at all is predicated on the assumption (which can only be taken on faith) that the symbols you manipulate relate to reality and its [X] at all."

It is almost a fully general counterargument. It argues against all knowledge, but to different degrees: you can at least compare the referents of symbols to finite calculations you have already done within your own head, and then use Occam's Razor.

I don't think this makes infinite ethics incoherent. Sure, you have to reason finitely and make a finite decision, but that reasoning can still be about infinite quantities, just like it can be about finite systems larger than your brain.

By what means are you coming to your reasoning about infinite quantities? How do you know the quantities you are operating on are infinite at all?

I could come to believe that the simplest explanation for my sensory experiences is a world-model which implies that my actions can have consequences affecting an infinitely large future, analogous to how, in the real world, I have come to believe that my actions can have consequences affecting an extremely large future volume of space-time. In neither case do I have certain knowledge that my actions will actually have the purported effects, but that can be the favored hypothesis given my (finite) sensory data.

How would you come to distinguish between an infinitely large future and a merely very, very large future, given that all your experiences are finite (and pretty limited at that)?

The model implying an infinite future could be favored by the evidence. This is how we currently make predictions about cosmological events that are very far in the future: we can't directly observe those events (or fit them all in our brains even if we could observe them), but we can get evidence for models that imply things about them.