## LessWrong

[anonymous] · 7y · 17

I, for one, have "terminal value" for traveling back in time and riding a dinosaur, in the sense that worlds consistent with that event are ranked above most others. Now, of course, the realization of that particular goal is impossible, but possibility is orthogonal to preference.

The fact is, most things are impossible, but there's nothing wrong with having a general preference ordering over a superset of the set of physically possible worlds. Likewise, my probability distributions are over a superset of the actually physically possible outcomes....

When all the impossible things get eliminated and we move on like good rationalists, there are still choices to be made, and some things are still better than others. If I have to choose between a universe containing a billion paperclips and a universe containing a single frozen dinosaur, my preference for ice cream over dirt is irrelevant, but I can still make a choice, and can still have a preference for the dinosaur (or the paperclips, whatever I happen to think is best).

In contrast to this comment's sister comment, I don't think this addresses the qu...

1 · [anonymous] · 7y: I agree and think that this part sums up a good response to the above question.


Let's say Bob's terminal value is to travel back in time and ride a dinosaur.

It is instrumentally rational for Bob to study physics so he can learn how to build a time machine. As he learns more physics, Bob realizes that his terminal value is not only utterly impossible but also meaningless. By definition, someone in Bob's past riding a dinosaur is not a future evolution of the present Bob.

There are a number of ways to create the subjective experience of having gone into the past and ridden a dinosaur. But to Bob, it's not the same because he wanted both the subjective experience and the knowledge that it corresponded to objective fact. Without the latter, he might as well have just watched a movie or played a video game.

So if we took the original, innocent-of-physics Bob and somehow calculated his coherent extrapolated volition, we would end up with a Bob who has given up on time travel. The original Bob would not want to be this Bob.

But, how do we know that _anything_ we value won't similarly dissolve under sufficiently thorough deconstruction? Let's suppose for a minute that all "human values" are dangling units; that everything we want is as possible and makes as much sense as wanting to hear the sound of blue or taste the flavor of a prime number. What is the rational course of action in such a situation?

PS: If your response resembles "keep attempting to XXX anyway", please explain what privileges XXX over any number of other alternatives other than your current preference. Are you using some kind of pre-commitment strategy to a subset of your current goals? Do you now wish you had used the same strategy to precommit to goals you had when you were a toddler?