rank-biserial (350 karma)

To be fair, you have to have a very high IQ to understand HPMOR. The humour is extremely subtle, and without a solid grasp of theoretical physics most of the jokes will go over a typical reader's head. There's also Harry's rationalistic outlook, which is deftly woven into his characterisation: his personal philosophy draws heavily from 80s sci-fi literature, for instance. The fans understand this stuff; they have the intellectual capacity to truly appreciate the depths of these jokes, to realise that they're not just funny: they say something deep about LIFE. As a consequence, people who dislike Harry Potter and the Methods of Rationality truly ARE idiots. Of course they wouldn't appreciate, for instance, the humour in Harry's rationalistic action of snapping his fingers, which itself is a cryptic reference to Ernest Cline's Ready Player One. I'm smirking right now just imagining one of those addlepated simpletons scratching their heads in confusion as Eliezer Yudkowsky's genius wit unfolds itself on their television screens. What fools... how I pity them :)

And yes, by the way, I DO have a CFAR membership card. And no, you cannot see it. It's for the ladies' eyes only, and even then they have to demonstrate that they're within 5 IQ points of my own (preferably lower) beforehand. Nothin personnel kid

Posts

Nick Attains Aligntenment · 3y · 19 karma · 0 comments
Have You Tried Hiring People? · 4y · 185 karma · 117 comments
Comments

My Assessment of the Chinese AI Safety Community
rank-biserial · 2y · 207 karma

"That sounds really interesting. If a company working on the problem was paying a lot, I would consider jumping ship."

The Chinese stated preferences here closely track Western revealed preferences. Americans are more likely to dismiss AI risk post-hoc in order to justify making more money, whereas it seems that Chinese people are less likely to sacrifice their epistemic integrity in order to feel like a Good Guy. Hire people, and pay them money!

Writing That Provokes Comments
rank-biserial · 3y · 10 karma

So that's how you draft scissor statements >:)

The horror of what must, yet cannot, be true
rank-biserial · 3y · 30 karma

Yes. Too many cached decisions.

[Closed] Hiring a mathematician to work on the learning-theoretic AI alignment agenda
rank-biserial · 3y · 100 karma

How did you choose the salary range?

My least favorite thing
rank-biserial · 3y · 20 karma

Exactly!

My least favorite thing
rank-biserial · 3y · 50 karma

"My understanding here is that while this is true, it will discourage the 5%, who will just go work for FAANG and donate money to someone worse (or someone overwhelmed with work), simultaneously losing any chance at a meaningful job. The point being that yes, it's good to donate, but if everyone donates (since that is the default rat race route), no one will do the important work."

No! If everyone donates, there will be enough money to pay direct workers high salaries. I know this runs contrary to the image of the selfless, noble Effective Altruist, but if you want shit to get done, you should pay people lots of money to do it.

Lies Told To Children
rank-biserial · 3y · −90 karma (comment collapsed)
My least favorite thing
rank-biserial · 3y · 20 karma

Ok, sick. I largely agree with you btw (about the hamster wheel being corrosive). If I came off as aggressive, fyi: I liked the spirit of your post a lot, and I strong-upvoted it.

My least favorite thing
rank-biserial · 3y · 80 karma

Yes, selfish agents want to not get turned into paperclips. But they have other goals too. You can prefer alignment be solved, while not wanting to dedicate your mind, body, and soul to waging a jihad against it. Where can Charlie effectively donate, say, 10% of his salary to best mitigate x-risk? Not MIRI (according to MIRI).

Wikitag Contributions

List of Blogs · 4 years ago · (+12/-17)