Sailor Vulcan

I am Sailor Vulcan--champion of justice and reason! In the name of the Moon--uh, I mean...

Hi, I'm Harry. I'm a reader, writer and gamer with a passion for rationality and existential risk prevention. I sometimes jokingly compare my life to an intelligence explosion.

Also, I have a communication learning disability, so if I ever say or do anything that makes you feel upset or uncomfortable, feel free to let me know (although you don't have to).


Comments

1. On the deontology/virtue ethics vs. consequentialism thing, you're right; I don't know how I missed that. Thanks!

1a. I'll have to think about that a bit more.

2. Well, if we were just going off of the four moralities I described, then I already named two examples where two of those moralities are unable to converge: a pure flourishing maximizer wouldn't want to mercy kill the human species, but a pure suffering minimizer would. A pure flourishing maximizer would be willing to have one person tortured forever if that were a necessary prerequisite for uplifting the rest of the human species into a transhumanist utopia. A suffering minimizer would not. Even if the four moralities I described cover only a small fraction of moral behaviors, wouldn't that still be a hard counterexample to the idea that there is convergence?

3. I think when you said "within the normal range of generally-respected human values", I took that literally, meaning I thought it excluded values which are not in the normal range and not generally respected, even if they are things like "reading Adult My Little Pony fanfiction". Not every value which isn't well respected or in the normal range would make the world a better place through its removal. I thought that would be self-evident to everyone here, so I didn't explain it. And then it looked to me like you were trying to justify the removal of all values which aren't generally respected or within the normal range as being "okay". So when you said "Right now, there are no agents around (that we know of) whose values are entirely outside the range of human values, and we're getting on OK," I thought it was intended to support the removal of all values which aren't well respected or in the normal range. But if you're trying to support the removal of niche values in particular, then pointing out that current humans are getting along fine with their current whole range of values, which one would presume must include the niche values, does not make sense as support.

About to fall asleep, I'll write more of my response later.

1a. Deontology/virtue ethics is a special case of consequentialism. The reason to follow deontological rules is that the consequences of following them almost always tend to be better than the consequences of not following them. The exceptions, where it is wiser not to follow deontological rules, are generally rare.

1b. Those are social mores, not morals. If a human is brainwashed into shutting down the forces of empathy and caring within themselves, then they can be argued into treating any social more as a moral rule.

2. Sorry, I should have started that paragraph by repeating what you said, just to make it clear what I was responding to. I don't think the four moralities converge when everyone has more information because....

I will also note that while Ivan might adopt Maximize Flourishing and/or Minimize Suffering on pragmatic (i.e. instrumental) grounds, Ivan is a human, and humans don't really have terminal values. If instead Ivan were an AI programmed with Eye-for-an-Eye, it might temporarily adopt Maximize Flourishing and/or Minimize Suffering as an instrumental goal, and then go back to Eye-for-an-Eye later.

3a. "Suppose that hope turns out to be illusory and there's no such thing as a single set of values that can reasonably claim to be in any sense the natural extrapolation of everyone's values." Those were your exact words. Now, if there is no such thing as a single set of values that are the natural extrapolation of everyone's values, then choosing a subset of everyone's values which are in the normal range of respected human values for the AI to optimize for would mean that all the human values that are not in the normal range would be eliminated. If the AI doesn't have a term for something in its utility function, it has no reason to let that thing waste resources and space it can use for things that are actually in its utility function. And that's assuming that a value like "freedom and self-determination for humans" is something that can actually be correctly programmed into an AI, which I'm pretty sure it can't because it would mean that the AI would have to value the act of doing nothing most of the time and only activating when things are about to go drastically wrong. And that wouldn't be an optimization process.

3b. "Either way, note that "that range" was the _normal range of respected human values_. Right now, there are no agents around (that we know of) whose values are entirely outside the range of human values, and we're getting on OK."

You just switched from "outside the normal range of respected human values" to "entirely outside the range of human values". Those are not at all the same thing. Furthermore, the scenario you described as "pretty good" was one where it still turns out possible to make a superintelligence whose values are, and remain, within the normal range of generally-respected human values.

Within the normal range of generally-respected human values. NOT within the entire range of human values. If we were instead talking about a superintelligence that was programmed with the entire range of human values, rather than only a subset of them, then that would be a totally different scenario and would require an entirely different argument to support it than the one you were making.

1. Ask yourself, what sorts of things do we humans typically refer to as "morality" and what things do we NOT refer to as "morality"? There are clearly things that do not go in the morality bucket, like your favorite flavor of ice cream. But okay, what other things do you think go in the morality bucket and why?

2. Because a) the same sorts of arguments can be made in reverse. Just as Minnie or Maxie might come to accept Eye for an Eye on pragmatic grounds because it makes society as a whole better/less bad, Goldie might accept Maximize Flourishing and/or Minimize Suffering on the grounds that it helps create the conditions that make cooperative exchanges possible, and Ivan might come to accept Maximize Flourishing and/or Minimize Suffering because lots of people are being forced to endure consequences that are way out of proportion to any wrongs they might have committed, and that isn't a fair system in the sense that Eye for an Eye entails.

Also because b) there are cases where the four types of morality do not overlap. For instance, Pure Maximize Flourishing would say to uplift the human species, no matter what and no matter how long it takes. Pure Minimize Suffering says you should do this too except with the caveat that if you find a way to end humanity's suffering sooner than that, in a way that is fast and painless, you should do that instead. In other words, a pure Suffering Minimizer might try to mercy kill the human species, while a pure Flourishing Maximizer would not.

Furthermore, if you had a situation where you could either uplift all of the rest of the human species at the cost of one person being tortured forever or have the entire human species go extinct, a pure Flourishing Maximizer would choose the former, while a pure Suffering Minimizer would choose the latter.

3. And then all values outside of that range are eliminated from existence because they weren't included in the AI's utility function.

Except that for humans, life is a journey, not a destination. If you make a maximize-flourishing optimizer, you would need to rigorously define what you mean by flourishing, which requires a rigorous definition of a general human utility function, which doesn't and cannot exist. Human values are instrumental all the way down; some values are just more instrumental than others. That is the mechanism which allows human values to be over 4D experiences rather than 3D states. I mean, what other mechanism could produce that in a human mind? This is a natural implication of "adaptation executors, not fitness maximizers".

And I will note that humans tend to care a lot about their own freedom and self-determination. Basically the only way for an intelligence to be "friendly" is for it to first solve scarcity and then be inactive most of the time, only waking up to prevent atrocities like murder, torture, or rape, or to deal with the latest existential threat. In other words, not an optimization process at all, because it would have an arbitrary stopping point where it does not itself advance human values any further.

Good point, I missed that; I will fix it later. More likely that effect would result from programming the AI with the overlap between those utility functions, but I'm not totally sure, so I'll have to think about it. I don't think that point is actually necessary for the crux of my argument, though. Like I said, I'll have to think about it. Right now it's almost 4 am and I'm really sick.

In other words, people who win at offline life spend less time on the internet because they're devoting more time to life offline. And since rationalists are largely an online community rather than an offline one, at least outside of the Bay Area, this results in rationalists dropping out of the conversation when they start winning. That's a surprisingly plausible alternative explanation. I'll have to think about this.

So everything we do in life is problem solving, and therefore storytelling was originally a form of problem solving, and this explains the origin of storytelling how? This seems like saying "the sky is made of quarks; all matter is made of quarks; therefore this explains the origins of the sky." But just saying "quarks!" doesn't tell you where the quarks are, where they're going, or how far away they all are from each other and in what directions. And the positions of all the many quarks involved are far too many for a human-level intelligence to keep track of individually in any reasonable time frame. Sure, in theory you could successfully predict the stock market by measuring the movements and positions of fundamental particles, but by the time you've actually finished all those measurements and made your prediction, the stock market has already changed a billion times over and the universe has grown cold.

In short, your explanation of "it's all just problem solving!" doesn't really explain anything about the origin of stories in particular because it explains every other thing that a mind could ever do equally well.

Still, now I'm curious. What is the real origin of storytelling? I wonder if anyone has actually investigated this already. I'd be surprised if they haven't.

Has anyone else here read any studies on the subject?

In some societies it might not be considered socially acceptable to want to punish someone merely because what they are doing will raise their social status. That sort of thing is dishonest because social status is reputational and meant to be earned. If someone tries to punish you for doing something to earn status, they probably did not come by their social status by honest means.

In societies where people think like that, I imagine no one would want to say "this act of altruism will increase their status and so should be punished", because that is a low status motive and expressing it out loud will lower their own status. So instead they have to spin things to make their own motive appear higher status. They would need to frame things to make the altruist look as if they're the ones being dishonest and freeriding to get more social status than they've earned.

Hence "this act of altruism is only intended as a status move", meaning "this person is not genuinely altruistic, you should not trust them more or think any better of them as a result of this altruism because that's exactly what they want. They're manipulating you into giving them more social status with purely selfish motives, and therefore they will not hesitate to stop being altruistic if it becomes advantageous for them later."

A person making this claim might believe that they believe it, and believe that it is their real motive for punishing an altruist, whether or not it actually is, because to admit that they're trying to damage another's reputation merely for the crime of doing something which improves that reputation would be to admit guilt of unvirtuous conduct oneself.

This. If Less Wrong had been introduced to an audience of self-improvement health buffs and business people instead of nerdy, booksmart Harry Potter fans, things would have been drastically different. It is possible to become more effective at optimizing for goals other than just truth. People here seem to naively assume that as long as they have enough sufficiently accurate information, everything else will simply fall into place and they'll do everything else right automatically, without needing to really practice or develop any other skills. I will be speaking more on this later.

Except that you're using "useful to believe" as a criterion for determining whether something is true or not. Also, if you had developed the skills, qualities, attitudes, and habits necessary to handle the truth in a sane and healthy manner, you wouldn't need to believe in a God, because you would know how to live with the knowledge that there is no God and not be broken by it. If you truly had developed the ability to handle the truth safely, it wouldn't matter what the truth was; you'd be able to handle it regardless. That is to say, if a God does not exist, you would be able to handle that just as well as if a God does exist.

Also, it's not very polite to deliberately take someone else's words out of context. I think you probably knew on some level what I actually meant by "skills, qualities, attitudes, and habits necessary to handle the truth in a sane and healthy manner," and you also probably know what I meant by "true". I'm not sure how someone could frequent this site without ever hearing about the map-territory distinction. Correct me if I'm wrong, but the map-territory distinction is mentioned right on the front page of the site.

If you want others' cooperation in avoiding breaking through your cognitive dissonance about religion so that you don't get overwhelmed by grief or something, then just say you don't want to talk about it and no one will question you. Not everyone needs a belief in God to deal with their grief. Furthermore, trying to persuade grieving people to join your particular religion while they're in a vulnerable state of mourning would likely be seen as predatory in certain ways. You'd be taking advantage of someone's pain to trick them into believing and doing things they wouldn't normally believe or do if they weren't in a vulnerable state.

And those on this site who aren't religious and aren't currently grieving won't be convinced. They will see the flaws in your arguments and argue with you, which puts your precious belief at risk of falsification.

So really, trying to proselytize here is a lose-lose situation.
