Working with others in a shared environment with scientific ground rules helps ensure that your biases and their biases form a non-intersecting set.
I liked your first point but come on here.
Lack of curiosity made people lose money to Madoff. This you already know: people did not do their due diligence.
Here's what Bienes, a partner of Madoff's who passed clients to him, said to the PBS interviewer for The Madoff Affair (before the 10-minute mark) when asked how he thought Madoff could promise 20% returns:
Bienes: "How do I know? How do you split an atom? I know that you can split them; I don't know how you do it. How does an airplane fly? I don't ask."
Interviewer: "Did you ask him?"
Bienes: "Never! Why would I ask him? I wouldn't understand it if he explained it!"
And a minute later:
Interviewer: "Did you ever think to yourself, this is just too easy, this is too good?"
Bienes: "I said, 'I'm a little too lucky. Why am I so fortunate?' And then I came up with the answer, my wife and I came up with the answer: 'God wanted us to have this. God gave us this.'"
Are you kidding me? I'm looking right now at a textbook chapter beside me filled with catalogs of human values, with a list of ten that seem universal, with theories on how to classify values, all citing dozens of studies: Chapter 7, "Values," of Chris Peterson's A Primer in Positive Psychology.
LessWrong is so insular sometimes. Take lionhearted's post Flashes of Nondecisionmaking yesterday---as if neither he nor most of the commenters had heard that we are, indeed, largely driven by habit (e.g. The Power of Habit; Self-Directed Behavior; The Procrastination Equation; ALL GOOD SELF HELP EVER), or that the folk conception of free will might be wrong (a long-established position, argued e.g. in Sam Harris's Free Will).
If this comment had been written by a bot built to maximize the ratio of pleasant-dopamine-buzz in-group LessWrong language to non-in-group language, it would have produced something like this.
I say this even though I really appreciate the comment and think it has genuine insight.
If this gets upvoted highly, I will update in favor of LessWrong continuing to become more in-group-y, more cutesy, and less attached-to-actual-change-y. It's becoming so much delicious candy!
More like an exception handling routine that's just checking for out-of-bounds errors.
Oh God. I love this place.
And this is why I love LessWrong, folks--sometimes. In other rationality communities--ones that conceive of rationality as something other than "accomplishing goals well"--this kind of post would be hurrah'd.
“I'd probably suggest writing a novel first.”
It blows my mind that nobody (?) has written a sci-fi novel on alignment yet.