Raven


The Archetypal Rational and Post-Rational

There's often a complication in defining post-rationality where someone says "post-rationality involves X" and then the other person says "X is compatible with rationality too" and then the cycle repeats.

Thanks for writing this post. My first couple of encounters with postrationality went exactly along these lines (with me playing the part of saying "but X is part of rationality").

Unfortunately, I'm still confused. Both descriptions (of postrationality and archetypal rationality) line up about equally well with my conception of rationality, and yet the postrationalists I've met seem very insistent that there's a big difference. The only major differences I see are spirituality and philosophy. Maybe I'm just confused about what people think of as rationality? Or maybe philosophy and spirituality are the big, important differences, and they only seem minor to me because I don't see either as valuable?

Still, it seems like I'm missing something.

Why do you need the story?

I don't have anything to add, but this phenomenon was discussed in greater detail in Explain/Worship/Ignore. https://www.lesswrong.com/posts/yxvi9RitzZDpqn6Yh/explain-worship-ignore

The Bat and Ball Problem Revisited

The first time I saw the bat and ball question, it was like there were two parts of my S1. The first said "the answer is 0.1" and the second said "this is a math problem, I'm invoking S2". S2 sees the math problem and searches for a formula, at which point she comes up with the algebraic solution. Then S2 pops open a notepad and executes it, even though 0.1 seems plausible.

No real thought went into any step of this. I suspect the split reaction in the first part was due to my extensive practice at doing math problems: after enough failures, I learned to stop using intuition to do math, and "invoke S2" became an automatic response.
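For reference, a minimal sketch of the algebraic solution the comment alludes to, assuming the standard $1.10 phrasing of the puzzle (exact fractions are used here to sidestep float rounding):

```python
from fractions import Fraction

# Standard phrasing (an assumption here): bat + ball = $1.10,
# and the bat costs $1.00 more than the ball.
# Let x be the ball's price: x + (x + 1.00) = 1.10, so x = 0.05.
total = Fraction(110, 100)  # $1.10
diff = Fraction(100, 100)   # the bat costs $1.00 more
ball = (total - diff) / 2
bat = ball + diff
print(f"ball = ${float(ball):.2f}, bat = ${float(bat):.2f}")
# → ball = $0.05, bat = $1.05
```

The intuitive answer of $0.10 fails the check: it would make the bat $1.10 and the total $1.20.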

Why do you believe AI alignment is possible?

Humans aren't aligned once you break the abstraction of "humans" down. There's nobody I would trust to be a singleton with absolute power over me (though if I had to take my chances, I'd rather have a human than a random AI).

Has LessWrong Been Mind-Killed on the Topic of God and Religion?

I found the post long and difficult to read, and the bit I did read didn't seem interesting. I also strongly dislike religion (yes, I know it has good parts, but it also has a shit ton of bad). In particular, I have no desire to see more content glorifying it here, and from the cursory inspection I gave it, your post appeared to be doing that. Thus, the downvote.

I didn't strong downvote because I hadn't read the entire post or given it a proper chance. A weak downvote is minor dislike, a desire to see less of that sort of thing. A strong downvote for me is an expression of utter contempt, or a belief that the writing is irredeemably bad in some way. I didn't think you fell into the latter category.

In other words, it was mostly #1 but #3 was the nail in the coffin.

What is the link between altruism and intelligence?

The orthogonality thesis is usually used with AI, because that topic is where it actually matters, but the overarching idea applies to any mind. Making something smarter does not give it morals.

And no, I bet the psychopaths would use their newfound powers to blend in and manipulate people better. Overt crime would drop, and subtler harm would go up. That's what happens in the real world across the real intelligence gradient.

I'm not a sociopath, but I was a sociopath-lite before transitioning (minimal emotion, sadistic streak, almost no empathy). I once sat and listened to my girlfriend pour her heart out in extreme emotional pain and I just did not care. I wanted her to shut up and let me get back to my game. She was annoying.

Telling 2016!raven to reason her way into morals is like if I told you to reason your way into seeing gamma rays. It's just not gonna happen. Sure, you can approximate it, but that's not the same.

A psychopath can restrain themselves if there's a reason (like a threat of jail) but making them smarter reduces the need to hide. If you want them to do good, you need to fix their mind -- in my case, that meant correcting my fucked up hormone system. I have no idea where to even start for a real psychopath, but there's no reason to think that mere intelligence would help.

What is the link between altruism and intelligence?

This is known as the orthogonality thesis, that intelligence and rationality don't dictate your values. I don't have time right now to explain the whole thing but it's talked about extensively in the sequences if you want to read more. I think it's pretty widely accepted around here as well.

Depositions and Rationality

I'm not sure about this. Arguing with myself can get really hostile as it is, and a lot of the OP seems to encourage an adversarial mindset.

On the other hand, I think there's definitely potential here. Generating ranges with upper and lower bound questions seems super useful, for instance.

The Opt-Out Clause

I did it, nothing happened.

The Opt-Out Clause

Well I tried it, and it didn't work... so I guess the answer is yes?
