All of righteousreason's Comments + Replies

"Walking on the moon is power! Being a great wizard is power! There are kinds of power that don't require me to spend the rest of my life pandering to morons!"

If you are solving an equation, debugging a software system, designing an algorithm, or performing any number of other cognitive tasks, understanding the methods of rationality involved in interacting with other people will be of no use to you (unless some of the material happens to apply across domains). These are tasks you have to carry out entirely within your own head.

If you solve the equation, but don't get your results published in a top paper, do you win? If you debug the software, but sell it for half its worth, do you win? If you fail to get recognition for your work and your boss takes all the credit, do you win? Humans are social animals, and we live in a social world. In almost any task, a set of interpersonal skills lets you accomplish more, acquire more rewards, and be better set up for the next task. Life is not discrete little pieces. Most people (maybe even everyone) who consider applying to this program are much better at seeing the true state of a program than the true state of a party, much better at manipulating the program than the party, and, thanks to the law of the instrument, see one set of skills as more valuable than the other. In reality both are needed.
That makes sense, yes. It still strikes me as a bit of an artificial compartmentalisation. As cousin_it has just noted, the world is frequently not so obliging. Being good at a technical task is nice, but not caring to be good at its complementary tasks involving humans is a good way to lose.

It appears that the majority of the activities and the primary focus of this boot camp is rationality when interacting with others... social rationality training. While some of this may apply across domains, my interest is strictly in "selfish" rationality... the kind of rationality one uses internally and entirely on one's own. So I don't really know if this would be worth the considerable expense of 10 "all-consuming" weeks. Maybe it would help if I had more information on the exact curriculum you are proposing.

What goals do you have in mind that do not involve interaction of any sort with other humans?

where the hell would you find a group like that!?

It's not hard since I live in the bay area.

well in that case, can you explain that emoticon (:3)? I have yet to hear any explanation that makes sense :)

Sure. The cat face emoticon is a reference to an anime trope. When a character is being deliberately mischievous, or slightly bad in some way, they're often shown with the "cat face" (if you want to see an example, go to the Banned Wiki and search "cat smile"; I daren't link there). It was adopted as an emoticon since the "mouth" of the cat face is essentially a sideways 3. In the West it is usually used to indicate that one is joking lightheartedly, using a bad pun, or, alternately, that one isn't really trying to troll.

Is this really relevant ...

Hey, we're trying to get less wrong here.... :3

Does anyone know if Blink: The Power of Thinking Without Thinking is a good book?

Blink is about the first two seconds of looking--the decisive glance that knows in an instant. Gladwell, the best-selling author of The Tipping Point, campaigns for snap judgments and mind reading with a gift for translating research into splendid storytelling. Building his case with scenes from a marriage, heart attack triage, speed dating, choking on the golf course, selling cars, and military m...

The Harding hate is sadly predictable. Harding is so abused by people who know nothing about the man. Historians hate him because they have a bias toward hyperactive presidents like TR and FDR. Yes, Harding was prone to verbal gaffes, and had a few scandals, but he was basically a solid leader, ahead of his time in many ways, such as in civil rights.
I liked it. The promotional material and summaries of it don't do justice to the content, I think, though. The book has many examples of how people who are experts at things can make good snap judgments in their domains of expertise, but it is not about how any normal person can make great decisions without thinking about them. Also, Malcolm Gladwell could write a cookbook and make it the most entertaining thing you'll read all year.
I haven't read it, so I can't comment directly on it. But you should probably know that Gladwell has been criticized a lot for un-scientific methodology and for turning interesting anecdotes and "just-so" stories into generalizations and supposed "laws" (without much evidence). The most recent high-profile criticism of Gladwell is probably this review by Steven Pinker: Malcolm Gladwell, Eclectic Detective. I don't know whether this criticism applies to Blink, but if you read it, your BS detector should probably be turned up a notch.
I've got an audio copy and have listened to it several times. It's definitely worth a look. I enjoyed it more than The Tipping Point, though I did read Blink first.
I enjoyed Blink. You can read some essays by the author here - if you get a lot out of them, you'll probably react similarly to the book.

If you actually look a little deeper into cryonics, you can find some more useful reference classes than "things promising eternal (or very long) life":

  1. Cells and organisms need not operate continuously to remain alive. Many living things, including human embryos, can be successfully cryopreserved and revived. Adult humans can survive cardiac arrest and cessation of brain activity during hypothermia for up to an hour without lasting harm. Other large animals have survived three hours of cardiac arrest...


As a question for everyone (and as a counter argument to CEV),

Is it okay to take an individual human's rights to life and property by force, as opposed to volitionally through a signed contract?

And the use of force does include imposing on them, without their signed volitional consent, such optimizations as the coherent extrapolated volition of humanity, though maybe(?) it does not include their individual extrapolated volition.

A) Yes B) No

I would tentatively categorize this as one possible empirical test for Friendly AI. If the AI chooses A, this could point to an Unfriendly AI which stomps on human rights, which would be Really, Really Bad.

Whatever happened to Nick Hay, wasn't he doing some kind of FAI related research?

He is at Berkeley working under Stuart Russell (of AI: A Modern Approach, among other things).

Sure, but it's also reasonable for him to think that contributing something much harder would be that much more of a contribution to his goal (whatever those selfish or non-selfish goals are); after all, something hard for him would be much harder, or impossible, for someone less capable.

I don't see how this reveals his motive at all. He could easily be a person motivated to make the best contributions to science as he can, for entirely altruistic reasons. His reasoning was that he could make better contributions elsewhere, and it's entirely plausible for him to have left the field for ultimately altruistic, purely non-selfish reasons.

And what is it about selfishness exactly that is so bad?

"And what is it about selfishness exactly that is so bad?"

It's fine and dandy in me, but I tend to discourage it in other people. I find that I get what I want faster that way.

Now give me some cash.

He may, for his own reasons, not have been happy with the ease with which he achieved something great. His selfishness at this point lies not in the fact that he may still be able to contribute to the field and yet chooses not to, but in the fact that he will be happier if he has to work harder on something before achieving greatness. That is his value system. I think his choice is justifiable.

If making a major contribution seemed so easy, and would be harder in some other field, it sure would suggest that his comparative advantage lies in the easy field; wouldn't that suggest he ought to devote his efforts there, since other people have proven relatively capable in the harder fields?

"the quality of being selfish, the condition of habitually putting one's own interests before those of others" - wiktionary I can imagine a super giant mega list of situations where that would be bad, even if selfishness is often a good thing. There's a reason 'selfishness' has negative connotations.

And this is a great follow up:

"Very recently - in just the last few decades - the human species has acquired a great deal of new knowledge about human rationality. The most salient example would be the heuristics and biases program in experimental psychology. There is also the Bayesian systematization of probability theory and statistics; evolutionary psychology; social psychology. Experimental investigations of empirical human psychology; and theoretical probability theory to interpret what our experiments tell us; and evolutionary theory to exp


"But goodness alone is never enough. A hard, cold wisdom is required for goodness to accomplish good. Goodness without wisdom always accomplishes evil." - Robert Heinlein (SISL)

Eliezer Yudkowsky: Never? Always? Hogwash. Aside from that, yes.

That reminds me of "counting doubles" from Ender's Game: 2, 4, 8, 16 ... etc until you lose track.

The problem is that with a systematic enough approach, verifying at each step that you've memorized the working data before moving on, it's possible to keep going to 2^50 and beyond, losing all day on the activity :-)
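For the systematically inclined, the doubling game is trivial to mechanize. Here's a toy Python sketch (my own illustration, not from the original exchange) of what "counting doubles" amounts to:

```python
# Toy model of the "counting doubles" game from Ender's Game:
# start at 1 and keep doubling, tracking the sequence as you go.
def count_doubles(steps):
    value = 1
    sequence = []
    for _ in range(steps):
        value *= 2
        sequence.append(value)
    return sequence

print(count_doubles(4))  # [2, 4, 8, 16]
print(2 ** 50)           # 1125899906842624 - where a determined counter might stop
```

Of course, the point of the game is doing it in your head; the code just shows how quickly the numbers outrun short-term memory.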

==Re comments on "Singularity Paper"== Re comments, I had been given to understand that the point of the page was to summarize and cite Eliezer's arguments for the audience of ''Minds and Machines''. Do you think this was just a bad idea from the start? (That's a serious question; it might very well be.) Or do you think the endeavor is a good one, but the writing on the page is just lame? --User:Zack M. Davis 20:19, 21 November 2009 (UTC)

(this is about my opinion on the writing in the wiki page)

No, just use his writing as much as possible; direct...

Eliezer is arguing for one view of the Singularity, though there are others. This is one reason I thought to include it on the wiki. If leaders/proponents of the other two schools could acknowledge the model Eliezer has described of there being three schools of the Singularity, I think that might lend it more authority, as you are describing.

Actually, I might prefer not to use the term 'Singularity' at all, precisely because it has picked up so many different meanings. If a name is needed for the event we're describing and we can't avoid that, use 'intelligence explosion'.

I found the two SIAI introductory pages very compelling the first time I read them. This was back before I knew what SIAI or the Singularity really was, as soon as I read through those I just had to find out more.

I thought similarly about LOGI part 3 (Seed AI). I actually thought of that immediately and put a link up to that on the wiki page.

"Oh, dear. Now I feel obliged to say something, but all the original reasons against discussing the AI-Box experiment are still in force...

All right, this much of a hint:

There's no super-clever special trick to it. I just did it the hard way.

Something of an entrepreneurial lesson there, I guess."

I know that part. I was hoping for a bit more...


I mean, come on, that's a cheap, weak analogy. I haven't finished yet, but I'm compiling all of the good quotes from Atlas Shrugged. The book is full of awesome quotes and truths that are portable to many other subjects of rationality.

It is far more real and relevant than you are giving it credit for.

what the hell?

What does the cultish behavior of followers have to do with the actual content? Affective death spirals can characterize virtually any group. Idiots and crazies are everywhere.

Why is this so downvoted??

I realize that you didn't vote it down, but using this logic to vote it down would be something like a reverse affective death spiral: you let the visibly obvious ADS cast a negative halo on the entire philosophy, and thus become irrationally biased against the legitimate value at the center of the ADS that got blown up by the over-zealous crazies and idiots.

I love Atlas Shrugged; it's a beautiful novel that's definitely worth reading, but I downvoted your original comment because it fails to recognize that Rand really was terrible at rationality, compared to what we know now. (CronoDAS's analogy is perfect.) I agree that Atlas Shrugged is full of inspiring prose praising the ideals of reason and recognition of objective reality---but actually recognizing objective reality requires a certain, well, empiricism that Rand just utterly fails at. I mean---laissez-faire capitalism follows deductively from the law of identity and the choice to live? What? Sorry. "Nobody stays here by faking reality in any manner whatever."
Sorry, I should clarify. Calling Atlas Shrugged "the greatest book yet written" made it seem like you were promoting the closed-system Objectivism that is so easy to despise. The first two paragraphs of my comment were about this. The last paragraph was about the content of the book itself. Indeed, I am saying "I can't downvote this book because it can help a new rationalist (which is the point of this thread), even though its proponents have been crashingly wrong and annoying (myself included)." The actual downvoters may just not like the book, or think that Atlas Shrugged is not that useful for rationalists.
Worthiness of the Cause does not mean you can spend any less effort in resisting the cult attractor. You clearly went overboard with praise, which is a valid warning signal. The book itself is not that great, even if the cultish behavior of some of its followers is even worse.
Well, I didn't think that Atlas Shrugged was very good (way, way, way too long), let alone the "greatest book yet written". But she certainly did do a good job of cataloging the crazy anti-Enlightenment, anti-reason philosophies that seemed to be sweeping the globe at the time of her writing.

Reading Ayn Rand to learn about rationality is like reading Aristotle to learn about physics.

I suppose this is a parody? The affective death spiral that characterizes most of Objectivist thought is well-documented and discussed, so there's no need to go through all that again... However, I will say that reading Atlas Shrugged at age 15 did lead me, eventually, to bigger and better topics in rationality. So, I can't in good faith downvote the recommendation.