Rationality: Abridged

by Quaerendo
6th Jan 2018
1 min read

This was originally planned for release around Christmas, but our old friend Mr. Planning Fallacy said no. The best time to plant an oak tree is twenty years ago; the second-best time is today.

I present to you: Rationality Abridged -- a 120-page, nearly 50,000-word summary of "Rationality: From AI to Zombies". Yes, it's almost a short book. But it is also less than 1/10th the length of the original. That should give you some perspective on how massively long R:AZ actually is.

As I note in the Preface, part of what motivated me to write this was the fact that the existing summaries out there (like the ones on the LW Wiki, or the Whirlwind Tour) are too short, are incomplete (e.g. not summarizing the "interludes"), and lack illustrations or a glossary. As such, they are mainly useful for those who have already read the articles and want to glance quickly at what each was about to refresh their memory. My aim was to serve that same purpose while being somewhat more detailed/extensive and including more examples from the articles, so that the summaries can also be used by newcomers to the rationality community to understand the key points. Thus, it is essentially a heavily abridged version of R:AZ.

Here is the link to the document. It is a PDF file (size 2.80MB); if anyone wants to convert it to .epub or .mobi format and share it here, they're welcome to.

There is also a text copy at my brand new blog: perpetualcanon.blogspot.com/p/rationality.html

I hope you enjoy it.

(By the way, this is my first post. I've been lurking around for a while.)

25 comments, sorted by top scoring

[-] habryka · 8y · 17

Nice!

I haven't looked at it in detail yet, but it seems like this should also be available as a sequence on the new LessWrong (we are still finalizing the sequences features, but you can see a bunch of examples in The Library).

We could just take the HTML from your website without much hassle and import it as a series of LW posts.

[-] AABoyles · 8y · 16

I have converted Rationality Abridged to EPUB and MOBI formats. The code to accomplish this is stored in this repository.
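
If you'd rather not dig through the repo: the heart of a conversion like this is Calibre's ebook-convert command-line tool. The sketch below is illustrative only; the input filename is hypothetical, and it is not the exact script from the repository.

    # Illustrative sketch, not the repo's actual script: convert a source
    # document to EPUB and MOBI with Calibre's ebook-convert CLI.
    # Requires Calibre to be installed and ebook-convert on the PATH.
    import subprocess
    from pathlib import Path

    SOURCE = Path("rationality-abridged.html")  # hypothetical input file

    for fmt in ("epub", "mobi"):
        output = SOURCE.with_suffix("." + fmt)
        # ebook-convert infers input/output formats from file extensions:
        #   ebook-convert input_file output_file
        subprocess.run(["ebook-convert", str(SOURCE), str(output)], check=True)
        print("Wrote", output)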

[-] Quaerendo · 8y · 4

Thanks a lot for doing this!

[-] AABoyles · 8y · 3

My pleasure!

[-] sone3d · 8y · 2

The EPUB version doesn't work. The file contains errors.

[-] AABoyles · 8y · 1

Thanks for letting me know. I use Calibre (https://calibre-ebook.com/about) to test the files, and it opens the file without complaint. What are you using (and on what platform) to read it?

[-] sone3d · 8y · 1

iBooks and Marvin apps (iOS).

[-] AABoyles · 8y · 1

Thank you! I don't have a good way to test Apple products (so the fix won't be quick), but I'll look into it.

[-] Raemon · 8y · 7

Massive props. (For your first post, no less?)

I see some things I think could be tweaked a bit - mostly in the form of breaking paragraphs down into somewhat more digestible chunks (each summary feels slightly wall-of-text-y to me). However, overall my main takeaway is that this is great. :)

[-] Quaerendo · 8y · 4

Thanks for the kind words :) I agree with what you're saying about the 'wall-of-text-iness', especially on the web version, so I'm going to add some white space.

[-] Adam Zerner · 8y · 2

(For your first post, no less?)

Yeah, seriously!!! You've got my vote for the First Post Of The Year Award.

[-] query · 8y · 6

This is completely awesome, thanks for doing this. This is something I can imagine actually sending to semi-interested friends.

Direct messaging seems to be wonky at the moment, so I'll put a suggested correction here: for 2.4, Aumann's Agreement Theorem does not show that if two people disagree, at least one of them is doing something wrong. From Wikipedia: "if two people are genuine Bayesian rationalists with common priors, and if they each have common knowledge of their individual posterior probabilities, then their posteriors must be equal." This could fail at multiple steps; off the top of my head:

  1. The humans might not be (mathematically pure) Bayesian rationalists (and in fact they're not.)
  2. The humans might not have common priors (even if they satisfied 1.)
  3. The humans might not have common knowledge of their posterior probabilities; a human saying words is a signal, not direct knowledge, so them telling you their posterior probabilities may not do the trick (and they might not know them.)

You could say failing to satisfy 1-3 means that at least one of them is "doing something wrong", but I think it's a misleading stretch -- failing to be normatively matched up to an arbitrary unobtainable mathematical structure is not what we usually call wrong. It stuck out to me as something that would put off readers with a bullshit detector, so I think it'd be worth fixing.
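
For concreteness, here is a rough formal statement of the theorem (my paraphrase of the standard setup, not a quote from any particular source):

    Two agents share a common prior P over a state space and receive
    private information I_1 and I_2. If both posterior probabilities
    for an event E,

        q_i = P(E | I_i),   i = 1, 2,

    are common knowledge between the agents, then q_1 = q_2.

Items 1-3 above are just the ways those hypotheses can fail.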

[-] Quaerendo · 8y · 4

Thanks for the feedback.

Here's the quote from the original article:

I said, "So if I make an Artificial Intelligence that, without being deliberately preprogrammed with any sort of script, starts talking about an emotional life that sounds like ours, that means your religion is wrong."
He said, "Well, um, I guess we may have to agree to disagree on this."
I said: "No, we can't, actually. There's a theorem of rationality called Aumann's Agreement Theorem which shows that no two rationalists can agree to disagree. If two people disagree with each other, at least one of them must be doing something wrong."

One could discuss whether Eliezer was right to appeal to AAT in a conversation like this, given that neither he nor his conversational partner is a perfect Bayesian. I don't think it's entirely unfair to say that humans are flawed to the extent that we fail to live up to the ideal Bayesian standard (even if such a standard is unobtainable), so it's not clear to me why it would be misleading to say that if two people have common knowledge of a disagreement, at least one of them (or both) is "doing something wrong".

Nonetheless, I agree that it would be an improvement to at least be more clear about what Aumann's Agreement Theorem actually says. So I will amend that part of the text.

[-] query · 8y · 5

Yeah; it's not open/shut. I guess I'd say that, in the current phrasing, "but Aumann's Agreement Theorem shows that if two people disagree, at least one is doing something wrong" suggests implications without actually saying anything interesting -- at least one of them is doing something wrong by this standard whether or not they agree. I think adding some more context to make people less suspicious they're getting Eulered (http://slatestarcodex.com/2014/08/10/getting-eulered/) would be good.

I think this flaw is basically in the original article as well, though, so it's also a struggle between accurately representing the source and adding editorial correction.

Nitpicks aside, want to say again that this is really great; thank you!

[-] Said Achmiz · 8y · 6

A worthy project! Very nice.

It seems like this could benefit from webification, à la https://www.readthesequences.com (including hyperlinking of glossary terms, navigation between sections, perhaps linking to the full versions, etc.—all the amenities of web-based hypertext). If this idea interests you, let me know.

[-] Yoav Ravid · 6y · 4

Just discovered this through the archive feature; this is awesome!

I think it should be linked in more places, it's a really useful resource.

Two years late, but thank you for making this!

[-] sone3d · 8y · 4

Could someone convert it to epub, please?

Nice work, Quaerendo!

[-] AABoyles · 8y · 3

I'm on it!

[-] AABoyles · 8y · 4

Done!

[-] Aorou · 8y · 3

Ideal format for beginning rationalists, thank you so much for that. I am reading it every day, going to the full articles when I want more depth. It's also helped me "recruit" new rationalists among my friends. I think this work may have wide and long-lasting effects.

It would be extra nice to have the links go to this LW 2.0, though I don't have the skills to do it myself. Maybe you have reasons against it that I haven't considered?

[-] Quaerendo · 8y · 2

Thanks, I'm glad you found it useful!

The reason I didn't link to LW 2.0 is that it's still officially in beta, and I assumed that the URL (lesserwrong.com) will eventually change back to lesswrong.com (but perhaps I'm mistaken about this; I'm not entirely sure what the plan is). Besides, the old LW site links to LW 2.0 on the frontpage.

[-] gjm · 8y · 2

Entirely irrelevantly, given your blog's domain name I take it the missing half of your username is "invenietis"? :-)

[-] Quaerendo · 8y · 1

Indeed ;)

[-] waveman · 8y · 1

I just finished reading it. I find it a very useful summary; that is a hard thing to do, I know, and it takes a lot of work. Thank you.

I noticed a typo:

"The exact same gamble, framed differently, causes circular preferences.

People prefer certainty, and they refuse to trade off scared values (e.g. life) for unsacred ones.

But our moral preferences shouldn’t be circular."

scared => sacred

[-] Quaerendo · 8y · 1

Thanks for pointing it out. I've fixed it and updated the link.

