When people talk about the singularity, BMIs, etc., I haven't really seen much thought about what our moral decisions will be like. Listening to e.g. Kurzweil, he seems to assume that extreme intelligence will make us all completely rational and want the best for everyone and ourselves: that psychopaths will stop being psychopaths and sadists will stop being sadists. I'm not saying there's no ground for this, but I still worry, because I think there will be a time window of an unstable society in which some people have extremely enhanced cognitive capabilities but no laws exist to prevent any of them from going rogue or misusing their power. Will everyone being a super-Einstein prevent a sadist from making a simulation where beings are tortured? Or from trying to take over the world and torture us all for eternity? Has there been any good research on this? It seems like the most crucial aspect of whether the singularity goes well or not. I still think society should advance slowly in the early stages of BMIs and the bioengineering of brains, so that no one gets too far ahead and the good-hearted smart people have time to adapt the laws, add more control, and work out methods to prevent bad scenarios from happening.


This is known as the orthogonality thesis: intelligence and rationality don't dictate your values. I don't have time right now to explain the whole thing, but it's discussed extensively in the Sequences if you want to read more. I think it's pretty widely accepted around here as well.

My "intuition pump" is to imagine a superintelligent gigantic spider. Not some alien with human values in a spider body, but an actual spider that was 'magically' enlarged and given an IQ of 500.

The Orthogonality Thesis tag is a good place to start.

I don't think that's what I meant. Isn't the orthogonality thesis about AI only? As in: if we have a superintelligent AI, there's no reason its morals will be good unless we instruct it to be that way. I'm talking about you and me. If our brains got a huge boost and we all became twice as smart as Einstein was, would the people who are psychopaths now stop being psychopaths then?

I think it scales, and applies to any type of intelligence. It doesn't seem that more intelligent humans are particularly more altruistic (though they tend to be richer, so their motivations are less obvious). There's no reason (that I see) to think that even further intelligence would make humans more likely to care about less-intelligent groups more (or less) than they do now.

Well, I think it really does, and that's what the study I link says. Say a sadist enjoys torturing others. If (through boosted intelligence) he truly understood the potential that a successful singularity and AI revolution could bring, why would he risk derailing the singularity, or risk incriminating himself, when he also knows he can get 100 times the pleasure through means other than torture? Isn't this a very obvious reason why altruism and intelligence are linked? He might still be evil, but with sufficient intelligence (a super-Einstein like I described), the intelligence surely overrides your sadistic, sexual, etc. impulses, simply because you know you can later get 1000x the pleasure through other means.

The orthogonality thesis is usually used with AI, because that topic is where it actually matters, but the overarching idea applies to any mind. Making something smarter does not give it morals.

And no, I bet that the psychopaths would use their newfound powers to blend in and manipulate people better. Overt crime would drop, and subtler harm would go up. That's what happens in the real world across the real intelligence gradient.

I'm not a sociopath, but I was a sociopath-lite before transitioning (minimal emotion, sadistic streak, almost no empathy). I once sat and listened to my girlfriend pour her heart out in extreme emotional pain and I just did not care. I wanted her to shut up and let me get back to my game. She was annoying.

Telling 2016!raven to reason her way into morals is like if I told you to reason your way into seeing gamma rays. It's just not gonna happen. Sure, you can approximate it, but that's not the same.

A psychopath can restrain themselves if there's a reason (like a threat of jail), but making them smarter reduces the need to hide. If you want them to do good, you need to fix their mind -- in my case, that meant correcting my fucked up hormone system. I have no idea where to even start for a real psychopath, but there's no reason to think that mere intelligence would help.

A psychopath can restrain themselves if there's a reason

That's exactly what I mean. The reason in this case is a comprehension of sustained reward. A monkey doesn't understand sustained reward: if you gave it a peanut but told it that waiting five minutes would get it ten peanuts, it wouldn't understand and would just eat the peanut. With intelligence comes a greater understanding of potential reward, and we give in less easily to sexual, sadistic, and other impulses. This can be seen in biology, like I mentioned: we're more in control of our biological drives than any other creature. Now, what I obviously didn't say is that intelligence magically makes someone non-evil or non-psychopathic. It's just: why would a psychopath in a singularity scenario, who is like a super-Einstein, risk getting incriminated if he has a true understanding (way better than any of us) of what a successful singularity could bring (Kurzweilian scenarios)? I mean, the psychopath or sadist would just realise through enhanced reason (as they probably already do) that their behaviour is inherently wrong even though they enjoy it, and edit their brain so they don't want it anymore. I think the flaw in the 'orthogonality thesis' is that it assumes people (or AI) don't become non-evil through extra intelligence; I think they at least become more altruistic.
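The peanut argument above is basically temporal discounting: an impulsive agent discounts future reward so steeply that one peanut now beats ten peanuts in five minutes, while a more patient (here standing in for "smarter") agent waits. A minimal sketch, with made-up discount rates (the function name `discounted_value` and all numbers are my own illustration, not from any study):

```python
import math

def discounted_value(reward, delay_minutes, discount_rate):
    """Exponentially discounted present value of a reward received after a delay."""
    return reward * math.exp(-discount_rate * delay_minutes)

# One peanut now vs. ten peanuts in five minutes.
now = discounted_value(1, 0, 0.5)               # reward now is worth its face value
later_impulsive = discounted_value(10, 5, 0.5)  # steep discounting: waiting looks worthless
later_patient = discounted_value(10, 5, 0.05)   # shallow discounting: waiting looks worthwhile

print(now > later_impulsive)   # the impulsive agent eats the peanut now
print(later_patient > now)     # the patient agent waits for the ten
```

The point the comment makes maps onto the `discount_rate` parameter: the claim is that enhanced intelligence effectively lowers it.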

One concept in my moral system relies on the question of how you would respond to permanent retaliation if you went rogue. Could you withstand an endless attack on your wellbeing because you do things that other people hate? In a world with many extremely intelligent beings this could be very difficult, and even in a world where you are the only bad super-Einstein it would at least be tiresome (or resource-inefficient), so a superintelligent individual would plausibly prefer a situation where they do not need to defend themselves indefinitely. This is similar to the outcome of Wait But Why's concept of the cudgel (browser search for "cudgel"). Ultimately, this concept relies heavily on there being at least some possibility of giving a super-Einstein a small but ineradicable pain. So in my opinion it is not really applicable to a singularity event, but it could be useful for slower developments.
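The retaliation logic above can be sketched as a toy payoff comparison: a one-off gain from going rogue followed by a small but permanent retaliation cost each round, versus a modest steady payoff from cooperating. All numbers here are made up for illustration; the only point is that any nonzero ineradicable cost eventually dominates over enough rounds:

```python
def cumulative_payoff(rounds, rogue):
    """Total payoff over a number of rounds for a rogue or cooperative agent."""
    total = 0.0
    for t in range(rounds):
        if rogue:
            # Big grab in the first round, then endless small retaliation.
            total += 50.0 if t == 0 else -1.0
        else:
            # Modest cooperative payoff every round.
            total += 2.0
    return total

print(cumulative_payoff(100, rogue=True))   # the grab is swamped by retaliation
print(cumulative_payoff(100, rogue=False))
```

Over a long enough horizon, cooperation wins whenever the retaliation cost per round is truly ineradicable, which is exactly the condition the comment says fails in a fast singularity.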

And I think the orthogonality thesis just doesn't work for biological subjects. As long as you feel pain and joy, you know pain is bad and joy is good; that's encoded in our biology. Now you might say, 'hey, that's not a fact the way one in physics is.' But a valid response is: let anyone who denies pain is bad be whipped for as long as they deny it. I do think that as long as you still experience it yourself, you know pain is bad and pleasure is good. I think a lot of psychopaths and sadists also know their behaviour is bad, because they wouldn't want it done to themselves (although they might have a hard time admitting it). The 'orthogonality thesis' works for a hypothetical AGI since it doesn't feel pain or joy, but we do; it's encoded in our biology and we experience both every minute. Do people arguing against the orthogonality thesis have to defend a stance like 'pain is an inherently bad universal thing, and this is encoded in the nature of reality'? Because that's just an unwinnable argument, and neither do I think this. But does any subject/person that experiences it think it's inherently bad? I do, and the proof is that no biological creature could ever deny it.

Pain can also be defined for non-biological beings. For me it is just a word indicating something undesirable hardwired into your being, and maybe there is something undesirable for everything in the universe. One rather metaphysical candidate could be a kind of inertia (described as the resistance of any physical object to any change in its velocity). You could argue that if you understand the movement of an entity (more concretely, its goals), you could find a way to harm it (with another movement), which would result in "pain" for the entity. This concept is still very anthropocentric, so I am not sure whether the change in the movement could lead to, or already be understood as, a positive outcome for humanity. Or maybe it is not registered at all.

Right. While I actually agree with this, it's not (yet) provable by the scientific method, so you're just talking about preferences. Either way, my point was very much about a concrete scenario. Say the super-rich or powerful get hold of an intelligence twice as high as ours (a flawed way of thinking and very unlikely to happen for all sorts of reasons; I think the rollout of cognitive improvements, beginning in maybe 15-30 years, will happen gradually and en masse). Would there be an incentive or motivation to become dictators? Or would they want to share the wealth? I think they would choose sharing the wealth and the brain improvements every time.

So I guess I can rephrase my title to: a link between altruism and intelligence, as long as there is a chance higher than 0% that the mistreatment of others could backfire.
