entirelyuseless2
entirelyuseless2 has not written any posts yet.

Part of the problem with the usual LW position on this is that it is based on two mistakes:
1) Eliezer's mistaken idea that good and evil are arbitrary in themselves, and therefore to be judged by human preference alone.
2) Eliezer's excessive personal preference for life (e.g. his claim that he expects that his extrapolated preference would accept the lifespan dilemma deal, even though such acceptance guarantees instant death).
These two things lead him to judge the matter by his excessive personal preference for life, and therefore to draw the erroneous conclusion that living forever is important.
Good and evil are not arbitrary, and have something to do with what is and what can be.... (read more)
This theory is mostly true, but rather than being cynical about people caring about poor people, we should be cynical about the more general concept of people caring about stuff.
This is all basically right.
However, as I said in a recent comment, people do not actually have utility functions. So in that sense, they have neither a bounded nor an unbounded utility function. They can only try to make their preferences less inconsistent. And you have two options: you can pick some crazy consistency very different from normal, or you can try to increase normality at the same time as increasing consistency. The second choice is better. And in this case, the second choice means picking a bounded utility function, while the first choice means choosing an unbounded one and going insane (because agreeing to be mugged is insane).
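To make the contrast concrete, here is a minimal sketch of how a bounded and an unbounded utility function evaluate a mugger-style offer. The payoff, probability, and the particular saturating function below are illustrative assumptions of mine, not anything from the comment itself.

```python
# A minimal sketch (illustrative numbers and functions, not from the comment above):
# how a bounded vs. an unbounded utility function evaluates a mugger-style offer
# of an astronomical payoff at a tiny probability.
import math

COST = 5.0                 # sure utility given up by paying the mugger
CLAIMED_PAYOFF = 3 ** 100  # astronomically large payoff the mugger promises
PROBABILITY = 1e-20        # tiny credence that the mugger will actually deliver

def unbounded_utility(x):
    """Utility keeps growing without limit as the payoff grows."""
    return x

def bounded_utility(x, bound=100.0):
    """Utility saturates at `bound`, no matter how large the payoff gets."""
    return bound * (1 - math.exp(-x / bound))

def expected_gain(utility):
    """Expected utility of paying the mugger, net of the sure cost."""
    return PROBABILITY * utility(CLAIMED_PAYOFF) - COST

print("unbounded:", expected_gain(unbounded_utility))  # enormous positive -> pay
print("bounded:  ", expected_gain(bounded_utility))    # about -5.0        -> refuse
```

Under the unbounded function the tiny probability is swamped by the enormous claimed payoff, so expected value says to pay; under the bounded function the payoff can never be worth more than the bound, so the expected gain stays negative and the offer is refused.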
You don't have any such basic list.
I don't think you understood the argument. Let's agree that an electron prefers what it is going to do, over what it is not going to do. But does an electron in China prefer that I write this comment, or a different one?
Obviously, it has no preference at all about that. So even if it has some local preferences, it does not have a coherent preference over all possible things. The same thing is true for human beings, for exactly the same reasons.
I don't know why you think I am assuming this. Regardless of the causes of your opinions, one thing which is not the cause is a coherent set of probabilities. In the same way, regardless of the causes of your actions, one thing which is not the cause is a coherent set of preferences.
This is necessarily true since you are built out of physical things which do not have sets of preferences about the world, and you follow physical laws which do not have sets of preferences about the world. They have something similar to this, e.g. you could metaphorically speak as if gravity had a preference for things being lower down... (read more)
" It's a list of all our desires and preferences, in order of importance, for every situation ."
This is basically an assertion that we actually have a utility function. This is false. There might be a list of pairings between "situations you might be in" and "things you would do," but it does not correspond to any coherent set of preferences. It corresponds to someone sometimes preferring A to B, and sometimes B to A, without a coherent reason for this.
Asserting that there is such a coherent list would be like asserting that you have a list of probabilities for all statements that are based on a coherent prior and were coherently... (read more)
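As an illustration of the incoherence being described, here is a small sketch with a hypothetical agent (my own example, not one from the comment) whose pairwise choices form a cycle, so that no ranking of outcomes, and hence no utility function, can reproduce them.

```python
# A minimal sketch (hypothetical agent, not from the comment above): pairwise
# choices that form a cycle A > B > C > A, which no single ranking of outcomes
# (and hence no utility function) can reproduce.
from itertools import permutations

# prefers[(x, y)] == True means "when offered x or y, the agent picks x".
prefers = {("A", "B"): True, ("B", "C"): True, ("C", "A"): True}

def fits_some_ranking(prefers, outcomes=("A", "B", "C")):
    """True if some strict ranking of the outcomes reproduces every observed choice."""
    for ranking in permutations(outcomes):
        rank = {outcome: i for i, outcome in enumerate(ranking)}  # lower index = better
        if all((rank[x] < rank[y]) == chose_x for (x, y), chose_x in prefers.items()):
            return True
    return False

print(fits_some_ranking(prefers))  # False: cyclic choices fit no coherent preference order
```

Any acyclic set of strict pairwise choices over three outcomes can be matched by one of the six rankings; a cyclic one cannot, which is what it means for the choices not to correspond to any coherent set of preferences.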
I predict the video was faked (i.e. that everyone in it knows what is happening and that in fact there was not even a test like this).
Most people, most of the time, state their beliefs as binary propositions, not as probability statements. Furthermore, this is not just leaving out an actually existing detail; it is a detail missing from reality. If I say, "That man is about 6 feet tall," you can argue that he has an objectively precise height of 6 feet 2 inches or whatever. But if I say "the sky is blue," it is false that there is an objectively precise probability that I have for that statement. If you push me, I might come up with a number. But I am basically making the number up: it is not something that exists like... (read more)
I think this post is basically correct. You don't, however, give an argument that most minds would behave this way, so here is a brief intuitive argument for it. A "utility function" does not mean something that is maximized in the ordinary sense of maximize; it just means "what the thing does in all situations." Look at computers: what do they do? In most situations, they sit there and compute things, and do not attempt to do anything in particular in the world. If you scale up their intelligence, that will not necessarily change their utility function much. In other words, it will lead to computers that mostly sit there and compute,... (read more)