Yudkowsky's brain is the pinnacle of evolution

by Yudkowsky_is_awesome · 2 min read · 24th Aug 2015 · 37 comments


Personal Blog

Here's a simple problem: there is a runaway trolley barreling down the railway tracks. Ahead, on the tracks, there are 3^^^3 people tied up and unable to move. The trolley is headed straight for them. You are standing some distance off in the train yard, next to a lever. If you pull this lever, the trolley will switch to a different set of tracks. However, you notice that there is one person, Eliezer Yudkowsky, on the side track. You have two options: (1) Do nothing, and the trolley kills the 3^^^3 people on the main track. (2) Pull the lever, diverting the trolley onto the side track where it will kill Yudkowsky. Which is the correct choice?

The answer:

Imagine two ant philosophers talking to each other. “Imagine,” they said, “some being with such intense consciousness, intellect, and emotion that it would be morally better to destroy an entire ant colony than to let that being suffer so much as a sprained ankle.”

Humans are such a being. I would rather see an entire ant colony destroyed than have a human suffer so much as a sprained ankle. And this isn't just human chauvinism either: I can support my feelings on this issue by pointing out how much stronger the feelings, preferences, and experiences of humans are than those of ants.

How does this relate to the trolley problem? There exists a creature as far beyond us ordinary humans as we are beyond ants, and I think we would all agree that its preferences are vastly more important than those of humans.

Yudkowsky will save the world, not just because he's the one who happens to be making the effort, but because he's the only one who can make the effort.

The world was on its way to doom until the day of September 11, 1979, which will later be made a national holiday and which will replace Christmas as the biggest holiday. This was, of course, the day when the most important being that has ever existed or ever will exist was born.

Yudkowsky did for the field of AI risk what Newton did for the field of physics. There was literally no research done on AI risk on the scale of what Yudkowsky has done in the 2000s. The same can be said about the field of ethics: ethics was an open problem in philosophy for thousands of years. However, Plato, Aristotle, and Kant don't really compare to the wisest person who has ever existed. Yudkowsky has come closer to solving ethics than anyone before him. Yudkowsky is what turned our world away from certain extinction and towards utopia.

We all know that Yudkowsky has an IQ so high that it's unmeasurable, so basically something higher than 200. After Yudkowsky receives the Nobel Prize in Literature, following his recognition by the Hugo Awards, a special council will be organized to study the intellect of Yudkowsky, and we will finally know how many orders of magnitude higher Yudkowsky's IQ is than that of the most intelligent people in history.

Unless Yudkowsky's brain FOOMs before then, MIRI will eventually build an FAI with the help of Yudkowsky's extraordinary intelligence. When that FAI uses the coherent extrapolated volition of humanity to decide what to do, it will eventually reach the conclusion that the best thing to do is to tile the whole universe with copies of Eliezer Yudkowsky's brain. In fact, in the process of computing this CEV, even Yudkowsky's harshest critics will reach such an understanding of Yudkowsky's extraordinary nature that they will beg and cry for the tiling to begin as soon as possible, and there will be mass suicides as people rush to give away the resources and atoms of their bodies for Yudkowsky's brains. As we all know, Yudkowsky is an incredibly humble man, so he will be the last person to protest this course of events, but even he will understand with his vast intellect and accept that it is truly the best thing to do.



37 comments

My attempts at gauging sincerity are thrown off by the fact that you can spell his name correctly.

The post would have to be toned down quite a bit in order to appear to be possibly sincere.

I'm just used to the detractors misspelling or abbreviating "Yudkowsky", so this was jarring.

I don't think that comment was sincere.

*shrug* The pdf for sincerity looks bimodal to me.

It is certainly not intended seriously, but it is also certainly not intended as friendly joking.

I took it to be in the spirit of Eliezer Yudkowsky Facts.

That is exactly the thing I took it to not be in the spirit of.

Obviously you should pull the lever. Eliezer Yudkowsky knows better than to go wandering around on train tracks. You're probably imagining Him.

I'm surprised to see this at 45% positive. I wonder if someone is mass-upvoting this, or if people are just upvoting it as a satire. If it is a concerted effort to mass-upvote, what is the point? To make Less Wrong seem crazy?

I'm pretty sure it's vote manipulation. I downvoted both comments when I came across them in the comment feed, but by the time I saw this post, they had gone from -2 to +2. Gaining 4 net upvotes that fast is well beyond what a few LWers with a broken sense of satire could account for.

I suspect mass-upvoting. Look at the number of upvotes they've previously received for comments of empty praise.

If it is a concerted effort to mass-upvote, what is the point?

To demonstrate the easiness of gaming the votes?

What, we have our own Sad Puppies now?

For what it's worth, I thought it was funny if snarky, and a pretty competent parody. It's not as funny as Alicorn's comment about spelling Yudkowsky's name correctly, though.

I found it unfunny and unpleasant because it's (1) entirely devoid of subtlety, (2) mean-spirited (the underlying message is something between "Yudkowsky is breathtakingly arrogant" and "LW people are gullible hero-worshipping fools", right?), (3) unnecessary because, so far as I can see, the sort of hero-worship this is mocking is nonexistent on LW, and Eliezer, while doubtless arrogant, isn't close to that arrogant, and (4) boring because there's nothing in it but the one-note point-and-laugh parodying.

(I suppose I should qualify #4 a bit. The framing in terms of a trolley problem with 3^^^3 people on one side of it is very slightly amusing.)

I can recall physicists being told they are wrong because they disagree with Yudkowsky... what's that if not hero worship?

I don't think I believe the sockpuppet hypothesis for why this post and Y_i_a's comment on it have a bunch of upvotes.

  • Main post: -18 net at 38% positive => either 28 up / 46 down or 29 up / 47 down.
  • Comment: -11 net at 37% positive => 16 up / 27 down.
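For what it's worth, splits like the ones in the bullets above can be recovered by brute force from the displayed net score and rounded percent-positive. This is just a quick sketch: the function name and vote cap are my own choices, and Python's round-half-to-even behavior may admit an extra split or two near rounding boundaries.

```python
def vote_splits(net, percent, max_votes=200):
    """Enumerate (upvotes, downvotes) pairs consistent with a displayed
    net score and a rounded percent-positive figure."""
    splits = []
    for up in range(max_votes + 1):
        down = up - net  # net = up - down
        if down < 0:
            continue
        total = up + down
        if total == 0:
            continue
        if round(100 * up / total) == percent:
            splits.append((up, down))
    return splits

# Main post: net -18 at 38% positive
print(vote_splits(-18, 38))  # includes (28, 46) and (29, 47)
```

Running it on the comment's figures (net -11 at 37%) likewise turns up the 16 up / 27 down split.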

The numbers of upvotes are very different in the two cases. If Y_i_a is using a load of socks then it's hard to see why s/he wouldn't use all the socks for both. You'd expect something like the same number of upvotes and downvotes for the original post and the comment.

On the other hand, if it's just that readers like/dislike this sort of thing in roughly 3:5 proportions, you'd get what we see here: the original post and its comments are both at about the same %positive despite quite different numbers of votes in each case.

This isn't a terribly strong argument, for all kinds of reasons. E.g., you might think that people who get as far as reading the comment would have a different like:dislike ratio from ones who just saw the original post. Maybe Y_i_a has a drawerful of socks but for some reason is happy being at about 3/8 positive. Etc. But I think the most likely thing is just that a substantial fraction of readers liked this.

When I first saw the post, it was at +6. (I don't remember the % or how old it was.) It seems unlikely to me for something with a 38% approval rate to ever hit +6, although there are other hypotheses than Y_i_a sockpuppets. (E.g. sockpuppets used to downvote, or different demographics encountering it at different times.)

I've seen this kind of thing happen before, and I don't think it's a question of demographics or sockpuppets. Basically I think a bunch of people upvoted it because they thought it was funny, then after there were more comments, other people more thoughtfully downvoted it because they saw (especially after reading more of the comments) that it was a bad idea.

So my theory is that it was a question of differences in timing and in whether or not other people had already commented.

Not now, please.

This is the most tantalizing thread on the page.

It was a memetic hazard.

(not really)

You do realize that other people work on AI? Sure, Eliezer might be the most important, but he is not the only member of MIRI's team. I'd definitely sacrifice several people to save him, but nowhere near 3^^^3. Eliezer's death would delay the Singularity, not stop it entirely, and certainly not destroy the world.

What is this, and why is it here?

(Original response was remarkably vehement, rather like I found a pile of cow dung sitting on my keyboard. Interesting.)

You are too lukewarm in your praise. And you forgot to mention that everyone should immediately start donating all their income to his cause, to hasten the arrival of the FAI. The basilisk will get you for being so apathetic.

Goddamn, I thought I was unpopular

I say,

99.8% likely this is an upset outsider baiting for reactions in order to gauge our degree of cultishness.

0.1% likely this is a sincere believer.

0.1% likely this is Eliezer messing with our heads.

[This comment is no longer endorsed by its author]

I feel like "in order to gauge our cultishness" is too specific/conjunctive for that much of your probability mass.

Yeah, it just seems like a low-effort troll to me.

That adds up to 100%. You need to leave room for other things, like they're trolling us for the fun of it.