Comments

Uni

I'm a Swedish reader. A meetup in Stockholm would be great!

Uni

The probability that the universe only has finite space is not exactly 1, is it? Much more might exist than our particular Hubble volume, no? What probability do, say, the world's top 100 physicists assign, on average, to the possibility that infinitely much matter exists? And on what grounds?

To my understanding, the universe might be so large that everything that could be described with infinitely many characters actually exists. That kind of "TOE" (theory of everything) actually passes the Ockham's razor test excellently: if the universe is that large, then it could (in principle) be exhaustively described by a very simple and short computer program, namely one that produces a string consisting of all the integers in order of size, 110111001011101111000... ad infinitum, translated into any widespread language using practically any arbitrarily chosen system of translation. Name anything that could exist in any universe of countably infinite size, and it would be fully described, even at infinitely many places, in the string of characters that such a simple computer program would produce.
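As a minimal sketch of the kind of "very simple and short computer program" meant here (the function name is my own, hypothetical choice, not anything from the original comment), the following writes out the binary representations of the positive integers in order, reproducing the string above:

```python
# Sketch: emit the binary representations of the positive integers in order,
# i.e. 1, 10, 11, 100, 101, ... concatenated into 110111001011101111000...
from itertools import count

def integer_string():
    """Yield the characters of the concatenated binary integers, one at a time."""
    for n in count(1):                 # n = 1, 2, 3, ...
        yield from format(n, "b")      # binary digits of n, without a prefix

# Print the first 21 characters: 110111001011101111000
gen = integer_string()
print("".join(next(gen) for _ in range(21)))
```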

Why not assign a pretty large probability to the possibility that the universe is that large, since all other known theories about the size of the universe seem to have a harder time with Ockham's razor?

Uni

> So which form of good should altruists, governments, FAIs, and other agencies in the helping people business respect?

Governments should give people what people say they want, rather than what the governments think will make them happier, whenever they can't do both. But this is not because it's intrinsically better for people to get what they want than to get what makes them happier (it isn't); it's because people will resent what they perceive as paternalism in governments, and because they won't pay taxes and obey laws in general if they resent their governments. Without taxes and law-abiding citizens, there will not be much happiness in the long run. So, simply for the sake of happiness maximizing, governments should (except, possibly, in some very, very extreme situations) just do what people want.

It's understandable that people want others to respect what they want, rather than wanting others to try to make them happier: even if we are not all experts ourselves on what will make us happier (not all people know about happiness research), we may need to make our own mistakes in order to really come to trust that what people say works actually works and that what people say doesn't work actually doesn't. Also, some of governments' alleged benevolent paternalism "for people's own good" (for example, Orwellian surveillance in the name of the "war on terror") may even be part of a plan to enslave or otherwise exploit the people. We may know these things subconsciously, and that may explain why some of us are so reluctant to conclude that what we want has no intrinsic value and that pleasure is the only thing that has intrinsic value. The instrumental value of letting people have what they want (rather than paternalistically giving them what some government thinks they need) is so huge that saying it has "mere" instrumental value feels like neglecting how huge a value it has. However, it doesn't really have intrinsic value; it just feels that way, because we are not accustomed to thinking that something with only instrumental value can have such a huge instrumental value.

For example, freedom of speech is of huge importance, not primarily because people want it, but because it provides happiness and prevents a great deal of suffering. If freedom of speech didn't provide any happiness and didn't prevent any suffering, but people still eagerly wanted it, there would be no point in letting anybody have it. However, that would imply either that being denied freedom of speech in no way caused any form of suffering in people, or that, if it did cause suffering, getting freedom of speech wouldn't relieve any of that suffering. That hypothetical scenario is so hard to imagine that, I think, this very difficulty is the reason why people have trouble accepting that freedom of speech has merely instrumental value.

Uni

> Nothing which you have written appears to show that it's impossible or even unlikely that people try to get things they want (which sometimes include pleasure, and which sometimes include saving the world), and that successful planning just feels good.

I'm not trying to show that. I agree that people try to get things they want, as long as by "things they want" we mean "things they are tempted to go for because the thought of going for them is so pleasurable".

> (something about X) --> (I like the thought of X) --> (I take action X), where (I like the thought of X) would seem to be an unnecessary step where the same result would be obtained by eliminating it,

Why would you want to eliminate the pleasure involved in decision processes? Don't you feel pleasure has intrinsic value? If you eliminate pleasure from decision processes, why not eliminate it altogether from life, for the same reasons that made you consider pleasure "unnecessary" in decision processes?

This, I think, is one thing that makes many people so reluctant to accept the idea of human-level and super-human AI: they notice that many advocates of the AI revolution seem to want to ignore the subjective part of being human and seem interested merely in giving machines the objective abilities of humans (i.e. abilities to manipulate the outer environment, rather than "intangibles" like love and happiness). This seems as backward as spending your whole life earning millions of dollars, having no fun doing it, and never doing anything fun or good with the money. For most people, at first at least, the purpose of earning money is to increase pleasure. So should the purpose of building human-level or super-human AI be. If you start to think that step two (the pleasure) is an unnecessary part of our decision processes and can be omitted, you are thinking like the money-hunter who has lost track of why money is important; by thinking that pleasure may as well be omitted from decision processes, you throw away the whole reason for having any decision processes at all.

It's the second step (of your three steps above), the step which is always "I like the thought of...", i.e. our striving to maximize pleasure, that determines our values and choices about whatever is in the first step ("X" or "something about X", the thing we happen to like the thought of). So, to the extent that the first step ("something about X") is incompatible with pleasure-maximizing (the decisive second step), what happens in step two seems to be a misinterpretation of what is there in step one. It seems reasonable to get rid of any misinterpretation. For example: fast food tastes good and produces short-term pleasure, but that pleasure is a misinterpretation in that it makes our organism take fast food for something more nutritious and better for us in the long run than it actually is. We should go for pleasure, but not necessarily by eating fast food. We should let ourselves be motivated by the phenomenon in "step two" ("I like the thought of..."), but we should be careful about which "step one"s ("X" or "something about X") we let "step two" lead us to decisions about. The pleasure derived from eating fast food is, in and of itself, intrinsically good (all other things equal), but its source, fast food, is not. Step two is always a good thing as long as step one is a good thing, but step one is sometimes not a good thing even when step two, in and of itself, is a good thing. Whether the goal is to get to step three or just to enjoy the happiness in step two, step one is dispensable and replaceable, whereas step two is always necessary. So it seems reasonable to found all ethics exclusively on what happens in step two.

> Or you can suggest that people are just mistaken about how pleasurable the results will be of any action they take that doesn't maximise pleasure.

Even if they are not too mistaken about that, they may still be shortsighted enough that, when trying to choose between decision A and decision B, they'll prefer the brief but immediate pleasure of making decision A (regardless of its expected later consequences) to the much larger amount of pleasure that they know would eventually follow after the less immediately pleasurable decision B. Many of us are this shortsighted. Our reward mechanism needs fixing.

Uni

In trusting your own judgment that building an AI based on how humans currently are would be a bad thing, you implicitly trust human nature, because you are a human and so presumably driven by human nature. This undermines your claim that a super-human AI that is "merely more of everything that it is to be human" would be a worse thing than a human.

Sure, humans with power often use their power to make other humans suffer, but power imbalances would not, by themselves, cause humans to suffer were not human brains such that they can very easily (even by pure mistake) be made to suffer. The main reason why humans suffer today is how the human brain is hardwired, and the fact that there is not yet enough knowledge of how to hardwire it so that it becomes unable to suffer (and with no severe side effects).

Suppose we build an AI that is "merely more of everything that it is to be human". Suppose this AI then takes total control over all humans, "simply because it can and because it has a human psyche and therefore is power-greedy". What would you do after that, if you were that AI? You would continue to develop, just as humans always have. Every step of your development from un-augmented human to super-human AI would be recorded and stored in your memory, so you could go through your own personal history and see what needs to be fixed in you to get rid of your serious flaws. And when you had achieved enough knowledge about yourself to do it, you would fix those flaws, since you would still regard them as flaws (since you would still be "merely more of everything that it is to be human" than you are now). You might never get rid of all of your flaws, for nobody can know everything about himself, but that's not necessary for a predominantly happy future for humanity.

Humans strive to get happier, rather than specifically to get happier by making others suffer. The fact that many humans are, so far, easily made to suffer as a consequence of (other) humans' striving for happiness is always primarily due to lack of knowledge. This is true even of purely evil, sadistic acts; those too are primarily due to lack of knowledge. Sadism and evilness are simply not the most efficient ways to be happy; they take up an unnecessary amount of computing power. Super-human AI will realize this, just as most humans today realize that eating far too many calories every day does not maximize your happiness in the long run, even if it seems to in the short run.

Most humans certainly don't strive to make others suffer for suffering's own sake. Behaviours that make others suffer are primarily intended to achieve something else: happiness (or something like it) for oneself. Humans strive to get happier, rather than less happy. This, coupled with the fact that humans also develop better and better technology and psychology that can help them achieve more and more of their goal (to get happier), must inevitably make humans happier and happier in the long run (although temporary setbacks can be expected every once in a while). This is why it should be enough to just make AIs "more and more of everything that it is to be human".

Uni

No, I didn't just try to say that "people like the thought of getting what they want". The title of the article says "not for the sake of pleasure alone". I tried to show that that is false. Everything we do, we do for pleasure alone, or to avoid or decrease suffering. We never make a decision based on a want that is not in turn based on a like/dislike. All "wants" are servile consequences of "likes"/"dislikes", so I think "wants" should be treated as mere transitional steps, not as initial causes of our decisions.

Uni

The pleasure machine argument is flawed for a number of reasons:

1) It assumes that, despite having never been inside the pleasure machine, but having lots of experience of the world outside of it, you could make an unbiased decision about whether to enter the pleasure machine or not. It's like asking someone if he would move all his money from a bank he knows a lot about to a bank he knows basically nothing about and that is merely claimed to make him richer than his current bank. I'm sure that if someone built a machine that, after I stepped into it, actually made me continually very, very much happier than I've ever been, it would have the same effect on me as very heavy paradise drugs have on people: I would absolutely want to stay inside the machine for as long as I could. For eternity, if possible. I'm not saying it would be a wise decision to step into the pleasure machine (see points 2, 3 and 4 below), but after having stepped into it, I would probably want to stay there for as long as I could. Just as this choice might be considered biased because my experience of the pleasure machine could be said to have made me "unhealthily addicted" to it, you are just as biased in the other direction if you have never been inside it. It seems most people have only a very vague idea of how wonderful it would actually feel to be continually super happy, and this makes them draw unfair conclusions when faced with the pleasure machine argument.

2) We know that "pleasure machines" either don't yet exist at all, or, if they exist, have so far always seemed to come at too high a price in the long run (for example, we are told that drugs tend to create more pain than pleasure in the long run). This makes us spontaneously tend to feel skeptical about the whole idea that the pleasure machine suggested in the thought experiment would actually give its user a net pleasure increase in the long run. This skepticism may never reach our conscious mind, it may stay in our subconscious, but it nevertheless affects our attitude toward the concept of a "pleasure machine". The concept of a pleasure machine that actually increases pleasure in the long run never gets a fair chance to convince us on its own merits before we subconsciously dismiss it, because we know that if someone claimed to have built such a machine in the real world, it would most likely be a false claim.

3) Extreme happiness tends to make us lose control of our actions. Giving up control of our actions usually decreases our chances to maximize our pleasure in the long run, so this further contributes to make the pleasure machine argument unfair.

4) If all human beings stepped into pleasure machines and never got out of them, there would be no more development (by humans). If instead some or all humans continue to further technological development and expand into the universe, it will be possible to build even better pleasure machines later on than the pleasure machine in the thought experiment. There will always be a trade-off between "cashing in" (by using some time and other resources to build and stay in "pleasure machines") and postponing pleasure for the sake of technological development and expansion, in order to make even greater future pleasure possible. The most pleasure-maximizing such trade-off may very well be one that doesn't include any long stays in pleasure machines for the nearest 100 years or so. (At some point, we should "cash in" and enjoy huge amounts of pleasure at the expense of further technological development and expansion in the universe, but that point may be in a very distant future.)

Uni

Going for what you "want" is merely going for what you like the thought of. To like the thought of something is to like something (in this case the "something" that you like is the thought of something; a thought is also something). This means that wanting cannot happen unless there is liking that creates the wanting. So, of wanting and liking, liking is the only thing that can ever independently make us make any choice we make. Wanting which is not entirely contingent on liking never makes us make any decisions, because there is no such thing as wanting which is not entirely contingent on liking.

Suppose you can save mankind, but only by taking a drug that makes you forget that you have saved mankind, and that also makes you suffer horribly for two minutes and then kills you. The fact that you can reasonably choose to take such a drug may seem to suggest that you can make a choice which you know will lead to a situation that you know you will not like being in. But there too, you actually just go for what you like: you like the thought of saving mankind, so you do whatever action seems associated with that thought. You may intellectually understand that you will suffer and feel no pleasure from the very moment after your decision is made, but this is hard for your subconscious to fully believe if, at the thought of that future, you actually feel pleasure (or at least less pain than you feel at the thought of the alternative), so your subconscious continues assuming that what you like thinking about is what will create situations that you will like. And the subconscious may be the one making the decision for you, even if it feels like you are making a conscious decision. So your decision may be a function exclusively of what you like, not of what you "want but don't like".

Merely liking the thought of doing something can be motivating enough, and this is what makes so many people overeat, smoke, drink, take drugs, skip physical exercise, et cetera. After the point when you know you have already eaten enough, you couldn't want to eat more unless you in some sense liked the thought of eating more. Our wanting something always implies an expectation of a future which we at least like thinking of. Wanting may sometimes appear to point in a different direction than liking does, but wanting is always merely liking the thought of something (more than one likes the thought of the alternatives).

Going for what you "want" (that is, going for what you merely like the thought of having) may be a very dumb and extremely short-sighted way of going for what you like, but it's still a way of going for what you like.

Uni

Wrong compared to what? Compared to no sympathies at all? If that's what you mean, doesn't that imply that humans must be expected to make the world worse rather than better, whatever they try to do? Isn't that a rather counterproductive belief (assuming that you'd prefer that the world became a better place rather than not)?

An AI with human sympathies would at least be based on something that has been tested and found to work throughout the ages, namely the human being as a whole, with all its flaws and merits. If you try to build the same thing but without those traits that now seem to be "flaws", these "flaws" may later turn out to have been vital for the whole to work, in ways we may not now see. It may become possible, in the future, to fully and successfully replace them with things that are not flaws, but that may require more knowledge about the human being than we currently have, and we may not now have enough knowledge to be justified in even trying to do it.

Suppose I have a nervous disease that makes me kick uncontrollably with my right leg every once in a while, sometimes hurting people a bit. What's the best solution to that problem? To cut off my right leg? Not if my right leg is clearly more useful than harmful on average. But what if I'm also so dumb that I cannot see that my leg is actually more useful than harmful; what if I can mainly see the harm it does? That's what we are like if we think we should try to build a (superhuman) AI by equipping it with only the clearly "good" human traits and not those human traits that now appear to be (only) "flaws", prematurely thinking we know enough about how these "flaws" affect the overall survival chances of the being/species. If it is possible to safely get rid of the "flaws" of humans, future superhuman AI will know how to do that far more safely than we do, and so we should not be too eager to do it already. There is very much to lose and very little to gain by impatiently trying to get everything perfect at once (which is impossible anyway). It's enough, and therefore safer and better, to make the first superhuman AI "merely more of everything that it is to be human".

[Edited, removed some unnecessary text]

Uni

> I recommend reading this sequence.

Thanks for recommending.

> Suffice it to say that you are wrong, and power does not bring with it morality.

I have never assumed that "power brings with it morality" if by power we mean limited power. Some superhuman AI might very well be more immoral than humans are. I do think unlimited power would bring with it morality. If you have access to every single particle in the universe and can put it wherever you want, and thus create whatever is theoretically possible for an almighty being to create, you will know how to fill all of spacetime with the largest possible amount of happiness. And you will do that, since you will be intelligent enough to understand that that's what gives you the most happiness. (And, needless to say, you will also find a way to be the one to experience all that happiness.) Given hedonistic utilitarianism, this is the best thing that could happen, no matter who got the unlimited power and what that person's moral standards initially were. If you don't think hedonistic utilitarianism (or hedonism) is moral, it's understandable that you think a world filled with the maximum amount of happiness might not be a moral outcome, especially if achieving that goal took killing lots of people against their will, for example. But that alone doesn't prove I'm wrong. Much of what humans think to be very wrong is not in all circumstances wrong. To prove me wrong, you have to either prove hedonism and hedonistic utilitarianism wrong first, or prove that a being with unlimited power wouldn't understand that it would be best for him to fill the universe with as much happiness as possible and experience all that happiness.

> > a happy person doesn't hate.

> What is your support for this claim?

Observation.
