Eliezer Yudkowsky's original Sequences have been edited, reordered, and converted into an ebook!

Rationality: From AI to Zombies is now available in PDF, EPUB, and MOBI versions on intelligence.org (link). You can choose your own price to pay for it (minimum $0.00), or buy it for $4.99 from Amazon (link). The contents are:

  • 333 essays from Eliezer's 2006-2009 writings on Overcoming Bias and Less Wrong, including 58 posts that were not originally included in a named sequence.
  • 5 supplemental essays from yudkowsky.net, written between 2003 and 2008.
  • 6 new introductions by me, spaced throughout the book, plus a short preface by Eliezer.

The ebook's release has been timed to coincide with the end of Eliezer's other well-known introduction to rationality, Harry Potter and the Methods of Rationality. The two share many similar themes, and although Rationality: From AI to Zombies is (mostly) nonfiction, it is decidedly unconventional nonfiction, freely drifting in style from cryptic allegory to personal vignette to impassioned manifesto.

The 333 posts have been reorganized into twenty-six sequences, lettered A through Z. In order, these are titled:

  • A — Predictably Wrong
  • B — Fake Beliefs
  • C — Noticing Confusion
  • D — Mysterious Answers
  • E — Overly Convenient Excuses
  • F — Politics and Rationality
  • G — Against Rationalization
  • H — Against Doublethink
  • I — Seeing with Fresh Eyes
  • J — Death Spirals
  • K — Letting Go
  • L — The Simple Math of Evolution
  • M — Fragile Purposes
  • N — A Human's Guide to Words
  • O — Lawful Truth
  • P — Reductionism 101
  • Q — Joy in the Merely Real
  • R — Physicalism 201
  • S — Quantum Physics and Many Worlds
  • T — Science and Rationality
  • U — Fake Preferences
  • V — Value Theory
  • W — Quantified Humanism
  • X — Yudkowsky's Coming of Age
  • Y — Challenging the Difficult
  • Z — The Craft and the Community

Several sequences and posts have been renamed, so you'll need to consult the ebook's table of contents to spot all the correspondences. Four of these sequences (marked in bold) are almost completely new. They were written at the same time as Eliezer's other Overcoming Bias posts, but were never ordered or grouped together. Some of the others (A, C, L, S, V, Y, Z) have been substantially expanded, shrunk, or rearranged, but are still based largely on old content from the Sequences.

One of the most common complaints about the old Sequences was that there was no canonical default order, especially for people who didn't want to read the entire blog archive chronologically. Despite being called "sequences," their structure looked more like a complicated, looping web than like a line. With Rationality: From AI to Zombies, it will still be possible to hop back and forth between different parts of the book, but this will no longer be required for basic comprehension. The contents have been reviewed for consistency and in-context continuity, so that they can genuinely be read in sequence. You can simply read the book as a book.

I have also created a community-edited Glossary for Rationality: From AI to Zombies. You're invited to improve on the definitions and explanations there, and add new ones if you think of any while reading. When we release print versions of the ebook (as a six-volume set), a future version of the Glossary will probably be included.

104 comments

The cover is incorrect :(

EDIT: If you do not understand this post, read essay 268 from the book!

The code of the shepherds is terrible and stern. One sheep, one pebble, hang the consequences. They have been known to commit fifteen, and twenty-one, and even even, rather than break it.

Error (6y, +5): I just bust out laughing in the office at this... and can't share the joke with anybody. Now I want to know if the incorrectness is intentional and if so, what message it's supposed to carry.
lmm (6y, +4): It's a bluff to make us think Yudkowsky cares about things like human happiness rather than what's right. Don't be fooled!
Paul Crowley (6y, +4): I had the same thought [http://lesswrong.com/lw/lu8/open_thread_mar_9_mar_15_2015/c4wp]
Quill_McGee (6y, +1): There might be one more stone not visible?
[anonymous] (6y, +1): 10 would still be incorrect.
Quill_McGee (6y, +2): Darn it, and I counted like five times to make sure there really were 10 visible before I said anything. I didn't realize that the stone the middle-top stone was on top of was one stone, not two.
Transfuturist (6y, +1): I see nine stones, not ten.
Paul Crowley (6y, +1): Three at the back, three at the front, one to one side, one standing up... the question is whether it's standing on one stone or two.

Perhaps this is already discussed elsewhere and I'm failing at search. I'd be amazed if the below wasn't already pointed out.

On rereading this material, it strikes me that this text is effectively inaccessible to large portions of the population. When I binged on these posts several years ago, I was just focused on the content for myself. This time, I had the thought to purchase it for some others who would benefit from this material. I realized relatively quickly that the purchase of this book would likely fail to accomplish anything for these people, and may make a future attempt more difficult.

I think many of my specific concerns apply to a large percentage of the population.

  • The preface and introductions appear aimed at return readers. The preface is largely a description of 'oops', which means little to a new reader and is likely to trigger a negative halo effect in people who don't yet know what that means. - "I don't know what he's talking about, and he seems to make lots of writing mistakes."
  • There isn't a 'hook'. Talking about balls in urns in the intro seems too abstract for people. The rest of the sequences have more accessible examples, which most people would ne
...

Thanks for all the comments! This is helpful. I agree 'Biases: An Introduction' needs to function better as a hook. The balls-in-an-urn example was chosen because it's an example Eliezer re-uses a few times later in the Sequences, but I'd love to hear ideas for better examples, or in general a more interesting way to start the book.

'Religion is an obvious example of a false set of doctrines' is so thoroughly baked into the Sequences that I think getting rid of it would require creating an entirely new book. R:AZ won't be as effective for theists, just as it won't be as effective for people who find math, philosophy, or science aversive.

I agree with you about 'boiling the frog', though: it would be nice if the book eased its way into anti-religious examples. I ended up deciding it was more important to quickly reach accessible interesting examples (like the ones in 'Fake Beliefs') than to optimize for broad appeal to theists and agnostics. One idea I've been tossing around, though, is to edit Book I ('Map and Territory') and Book II ('How to Actually Change Your Mind') for future release in such a way that it's possible to read II before I. It will still probably be better for most ...

I've had similar concerns and I agree with a lot of this.

Get this closer to a 7th grade reading level. This sets a low bar at potential readers who can understand 'blockbuster' books in English. (This might be accomplished purely with the terminology concern/change above)

If we really want to approach a 7th grade reading level, then we had better aim for kindergartners. I remember reading through the book trying to imagine how to bring it down several levels and thinking about just how many words I was taking for granted as a high-IQ adult who has had plenty of time to just passively soak up vocabulary and overviews of highly complex fields. I just don't think we're there yet; I think that's why there are things like SPARC where we're trying it out on highly intelligent high school students who are unusually well-educated for their age.
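For what it's worth, the "7th grade reading level" being discussed here is usually operationalized by a readability formula such as Flesch-Kincaid. The sketch below shows how one could measure it; the regex-based syllable counter is a crude stand-in for a real syllable algorithm, and the formula constants are the standard Flesch-Kincaid ones:

```python
import re

def syllables(word):
    # Crude heuristic: count vowel groups; every word gets at least one syllable.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def fk_grade(text):
    # Flesch-Kincaid grade level:
    #   0.39 * (words / sentences) + 11.8 * (syllables / words) - 15.59
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    syls = sum(syllables(w) for w in words)
    return 0.39 * (len(words) / sentences) + 11.8 * (syls / len(words)) - 15.59

print(round(fk_grade("I reached into the urn and plucked out ten mystery balls."), 1))
```

Running something like this over each essay would at least make "bring it down several levels" measurable, even if the heuristic over- or under-counts syllables on individual words.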

Change all hyperlinks to footnotes.

To my knowledge this is already a priority.

Is there any ongoing attempt or desire to do a group edit of this into an 'Accessible Rationality'?

I find that there's a wide disparity between LW users in intelligence and education, and I don't know if I see a wiki-like approach converging on anything particula...

[anonymous] (6y, +4): The point wasn't to aim for 7th graders, but a 7th grade level which would make it generally accessible to busy adults.
Persol (6y, +2): See Mark's post regarding 7th grade; my intention was aimed at adults, who (for whatever reason) seem to like the 7th grade reading level. I'm not sure how to effectively crowdsource this without getting volunteers for specific (non-overlapping) tasks and sections. I share your concern with the wiki method, unless each section has a lead. At work we regularly get 20 people to collaborate on ~100 page proposals, but the same incentives aren't available in this case. Copyediting is time consuming and unexciting; does anyone know of similar crowdsourced efforts? I found a few but most still had paid writers.
Jayson_Virissimo (6y, +4): 'Accessible Rationality' already exists... in the form of a wildly popular Harry Potter fanfiction.
ESRogs (6y, +3): What does 1n or 2n-gramming mean? I'm looking at Google Ngrams [https://books.google.com/ngrams/graph?content=bayesian&year_start=1800&year_end=2015&corpus=15&smoothing=1&share=&direct_url=t1%3B%2Cbayesian%3B%2Cc0], and it's not obvious to me.

1 gramming is checking single words; should identify unfamiliar vocabulary. (Ex: quantifiable)

2 gramming would check pairs of words; should identify uncommon phrases made of common words (ex: probability mass - better examples probably exist)

The 1/2 gram terminology may be made up, but I think I've heard it used before.
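To make the terminology concrete, here's one way the 1-gram/2-gram check described above could be sketched. The sample text is invented for illustration; actually flagging "unfamiliar" items would require comparing these counts against a general word-frequency list, which is not shown:

```python
from collections import Counter

def ngrams(words, n):
    # Slide a window of length n over the token list.
    return [" ".join(words[i:i + n]) for i in range(len(words) - n + 1)]

text = ("the reader assigns probability mass to each hypothesis "
        "and the reader updates that probability mass on new evidence")
tokens = text.split()

unigrams = Counter(ngrams(tokens, 1))   # "1-gramming": single words
bigrams = Counter(ngrams(tokens, 2))    # "2-gramming": adjacent word pairs

print(unigrams.most_common(3))
print(bigrams.most_common(3))
```

The 1-gram pass would surface rare vocabulary like "quantifiable"; the 2-gram pass would surface phrases like "probability mass" whose component words are individually common.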

ESRogs (6y, +2): Thanks!
Kenny (6y, +2): What's the payoff of changing hyperlinks to footnotes? Given all of the other, substantive, issues you raised, that seems unlikely to make any significant difference.
Persol (6y, +7): Two reasons:
  • Frequently having multiple words as hyperlinks in ebooks means that 'turning the page' may instead change chapters. Maybe it is just a problem with iPhone Kindle.
  • For links that reference forward chapters, what is a new reader to do? They can ignore it and not understand the reference, or they can click, read, and then try to go back... but it's not a very smooth reading experience.
Granted, I probably wouldn't have noticed the second issue, if not for the first issue.
ChristianKl (6y, +1): I don't think the point of the sequences or the book is to be accessible to everyone. If you want to write 'Accessible Rationality' it likely makes more sense to start from scratch.
Persol (6y, +5): Agreed that it may not be the point, but other than what I think are fixable issues, the book contents work well. I don't think starting from scratch would be a large enough improvement to justify the extra time and increased chance of failure. I think the big work is in making the examples accessible, and Eliezer already did this for the -other- negative trigger.

Just a reminder that mistakes/problems/errors can be sent to errata@intelligence.org and we'll try fix them!

I can't mail that address, I get a failure message from Google:

We're writing to let you know that the group you tried to contact (errata) may not exist, or you may not have permission to post messages to the group.

I'll post my feedback here:

Hello,

I got the book "Rationality: From AI to Zombies" via intelligence.org/e-junkie for my Kindle (5th gen, not the paperwhite/touch/fire). So far I've read a dozen pages, but since it will take me a while to get to the end of the book I'll give some feedback right away:

  • The book looks great! Some other ebooks I have don't use page-breaks at the end of a chapter, don't have a Table of Contents, have inconsistent font types/sizes, etc. The PDF version is very pretty as well.
  • The filename "Rationality.mobi" (AI-Zombie) is the same as "rationality.mobi" (HPMOR)
  • A bunch of inter-book links such as "The Twelve Virtues of Rationality"/"Predictably Wrong"/"Fake Beliefs"/"Noticing Confusion" (all from Biases: An introduction) don't work: On my Kindle I have the option to "Follow link", but when I choose it the page refreshes and I'm still at the same spot.

    Inspectin

...
lukeprog (6y, +8): Oops. Should be fixed now.
alexvermeer (6y, +3): Thanks! D'oh. It's all good in the epub, but something broke (for very dumb reasons) converting the mobi. It's fixed now. If you've already bought the book through Amazon or e-junkie, you'll have to re-download the file to get the fixed one (in a few hours, while Amazon approves the new book). Sorry about that. Not much we can do about this. Amazon is very restrictive in how you can modify the styling of links. It works fine for displays with color, but people with e-ink displays are out of luck. :-( Thanks.
adamzerner (6y, +0): Same.

we'll try fix them

I think you meant "try to fix them" :)

You should send that to errata@intelligence.org.

Yay! Now I'm sending this to all of my friends!

My first reaction as well.

But that is easy. What I haven't figured out yet is how to get them to read it.

Ben Pace (6y, +5): I've found that the people most interested in reading it are the ones I've already gotten addicted to HPMOR.

One of the most common complaints about the old Sequences was that there was no canonical default order, especially for people who didn't want to read the entire blog archive chronologically.

I was tricked into doing this. Years ago someone posted an ebook claiming to be the Sequences, which was actually just every single Yudkowsky blog post from 2006 to 2010 -_-

It wasn't until I noticed that only Yudkowsky's side of the FOOM debate was in there that I realized what had happened.

Paul Crowley (6y, +9): It wasn't meant as a trick! Organising them would have been very hard.
Vulture (6y, +7): Just as a little bit of a counterpoint, I loved the 2006-2010 ebook and was never particularly bothered by the length. I read the whole thing at least twice through, I think, and have occasionally used it to look up posts and so on. The format just worked really well for me. This may be because I am an unusually fast reader, or because I was young and had nothing else to do. But it certainly isn't totally useless :P
bramflakes (6y, +7): Oh, I didn't mean to imply I didn't like it! It was a welcome companion for hundreds of long school bus journeys.

Good work guys!

This might be the excuse I need to finally go through the complete sequences as opposed to relying on cherry-picking posts whenever I encounter a reference I don't already know.

I am impressed. The production quality on this is excellent, and the new introduction by Rob Bensinger is approachable for new readers. I will definitely be recommending this over the version on this site.

[anonymous] (6y, +9):

I paid $0 because I'd rather not pay transaction fees on a donation to charity. You can donate to MIRI directly here:

https://intelligence.org/donate/

And CFAR here:

http://rationality.org/donate/

Malo (6y, +4): See my comment here [http://lesswrong.com/lw/lvb/rationality_from_ai_to_zombies/c58c] about this.
[anonymous] (6y, +2): I used and prefer Bitcoin, which wasn't an option for the eBook and which carries smaller fees.

Excellent, thank you! Any update on when the real book will be available for purchase for those of us who don't do ebooks?

Ivan_Tishchenko (5y, +1): I second this question! I want to have this book in the flesh, sitting on my bookshelf.

The zip file has some extra Apple metadata files included. Nothing too revealing, just dropbox bits.

Can I ask where the money for the book goes, and to whom?

From Amazon, 30% goes to Amazon and 70% goes to MIRI.

From e-junkie (the pay-what-you-want option): 100% goes to MIRI, minus PayPal transaction fees (a few %).
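As a rough worked example of what those splits mean for a single copy: the 70% Amazon royalty is quoted above, but the PayPal fee schedule used here (2.9% + $0.30) is an assumption for illustration; the comment only says "a few %":

```python
# Net proceeds to MIRI per copy under each channel's split (illustrative).
amazon_price = 4.99
to_miri_via_amazon = round(0.70 * amazon_price, 2)      # 70% royalty, 30% to Amazon

ejunkie_price = 4.99                                    # pay-what-you-want example
paypal_fee = round(0.029 * ejunkie_price + 0.30, 2)     # assumed fee schedule
to_miri_via_ejunkie = round(ejunkie_price - paypal_fee, 2)

print(to_miri_via_amazon, to_miri_via_ejunkie)
```

Under these assumptions the pay-what-you-want channel nets MIRI about a dollar more per $4.99 copy, which is consistent with the comments below about preferring direct donations to avoid fees.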

DanielLC (6y, +9): Couldn't you pay $0.00, send the money to MIRI, and avoid transaction fees?

Yeah. Main reason to do it this way is fear of trivial inconveniences.

Malo (6y, +5): Depending on how you sent money to MIRI, we'd incur transaction fees anyway (donating through PayPal using a PayPal account or CC). ACH donations have lower fees, and checks don't have any, but both of those take staff time to process, so unless the donation was say $50 or more, it probably wouldn't be worth it.
[anonymous] (6y, +1): What about Bitcoin?
Malo (6y, +8): No fees, but also takes some extra staff time (additional bookkeeping/accounting work is involved), so there is some cost to it. If we got more BTC donations it would reduce the time cost per donation, due to effects of batching, but as it stands now, they are usually processed (record added to our donor database and accounting software) on an individual basis. One thing that takes a significant amount of time is when someone mis-pays a Coinbase invoice (sends a different amount of BTC than they indicated on the Coinbase form on our site). Coinbase treats these payments in a different way that ends up requiring more time to process on our end. All that being said, we like having the BTC donation option, and it always makes me happy to see one come in. So if making contributions via BTC is your preference, I'm all for it :)
ike (6y, +1): They use Coinbase, so according to this [https://support.coinbase.com/customer/portal/articles/1277919-what-fees-does-coinbase-charge-for-merchant-processing-] it's free up to $1 million.
[anonymous] (6y, +1): It should be free, period. Coinbase doesn't charge fees for registered not-for-profits.
alexvermeer (6y, +2): Yup, but those are convenient distribution platforms.
adamzerner (6y, +3): Perhaps this should be noted in the main article. I was thinking about buying it through Amazon until I saw this!

For reasons, I suggest that Bayesian Judo doesn't make EY look good to people who aren't already cheering for his team, and maybe it wasn't wise to include it.

More generally, the book feels a bit... neutered. Things like, for example, changing "if you go ahead and mess around with Wulky's teenage daughter" to "if you go ahead and insult Wulky". The first is concrete, evocative, and therefore strong, while the latter is fuzzy and weak. Though my impression may be skewed just because I remember the original examples so well.

Mirzhan_Irkegulov (5y, +3): My opinion: remove Bayesian Judo and add Whining-Based Communities [http://lesswrong.com/lw/8t/whiningbased_communities/]. Seriously, Whining-Based Communities is the most powerful article I've ever read on LW, it symbolizes what rationality is about most of all. The point of rationality is achieving your goals despite cognitive biases, signaling, self-delusion, mysterious answers etc. It's very easy to brainwash yourself into thinking that you are “doing a good job”. It's very hard to put extra effort into actually doing what is the most effective, because it might go against your habits, self-image, intuitions, convictions etc.

I am thinking of recommending this to people, all of whom are unlikely to pay. Is having people acquire this for $0 who would otherwise not have read it beneficial or harmful to MIRI? (If the answer is "harmful because of paying for people to download it", I can email it to my friends with a payment link instead of directing them to your website.)

Malo (6y, +9): Definitely beneficial; there is no cost worth considering when it comes to the next marginal person getting the book through our site, even if their selection is $0. So don't worry about directing them there.

Congratulations, well done!

Side note: the "Glossary" link seems to be broken.

Rob Bensinger (6y, +5): Should be working now. I accidentally made it an internal link.

With SumatraPDF 3.0 on Windows 8.1 x64, the links in the PDF version do not show up. With Adobe Reader 11 on Windows 7 x86, they look fine. On the other hand, SumatraPDF can also handle the MOBI and EPUB versions.

bramflakes (6y, +8): I'm getting problems too. The contents pages look like this [http://i.imgur.com/qxLRC4U.png], for example.
jrincayc (3y, +2): I have used Lulu to print the book; instructions are at: https://github.com/jrincayc/rationality-ai-zombies. Or you could print it somewhere else that allows you to print a 650 page 8.5 by 11 inch book. (If you try it with a different place, let me know.) I have read through the entire printed version and fixed all the formatting issues that I found in the beta7 release in the new beta8 release.
jrincayc (4y, +0): I have relinked the footnotes. It is now reasonably editable. I've put up pdfs at https://github.com/jrincayc/rationality-ai-zombies/releases
jrincayc (5y, +0): There is still a lot of work to do before I consider it done, but it is more or less useable for some purposes. I printed off a copy for myself from Lulu for about $12. Here is the two column version that can be printed out as a single volume: http://jjc.freeshell.org/rationality-ai-zombies/rationality_from_ai_to_zombies_two_column_beta2.pdf

Hi, and thanks for the awesome job! Will you keep a public record of changes you make to the book? I'm coordinating a translation effort, and that would be important to keep it in sync if you change the actual text, not just fix spelling and hyperlinking errors.

Edit: Our translation effort is for Portuguese only, and can be found at http://racionalidade.com.br/wiki .

Rob Bensinger (6y, +4): Yes, we'll keep a public record of content changes, or at least a private record that we'd be happy to share with people doing things like translation projects.
hydkyll (6y, +1): How is that translation coming along? I could help with German.
Gust (6y, +0): We're translating to Brazilian Portuguese only, since that's our native language.

I liked Robby's introduction to the book overall, but I find it somewhat ironic that right after the prologue where Eliezer mentions that one of his biggest mistakes in writing the Sequences was focusing on abstract philosophical problems that are removed from people's daily problems, the introduction begins with

Imagine reaching into an urn that contains seventy white balls and thirty red ones, and plucking out ten mystery balls.

The first (though not necessarily best) example of how to rewrite this in less abstract form that comes to mind would be some...

Rob Bensinger (6y, +5): Part of the idea behind the introduction is to replace an early series of posts: "Statistical Bias" [http://lesswrong.com/lw/ha/statistical_bias], "Inductive Bias" [http://lesswrong.com/lw/hg/inductive_bias], and "Priors as Mathematical Objects" [http://lesswrong.com/lw/hk/priors_as_mathematical_objects]. These get alluded to various times later in the sequences, and the posts 'An Especially Elegant Evolutionary Psychology Project', 'Where Recursive Justification Hits Bottom', and 'No Universally Compelling Arguments' all call back to the urn example. That said, I do think a more interesting example (whether or not it's more 'ordinary' and everyday) would be a better note to start the book on. Do feel free to send stylistic or substantive change ideas to errata@intelligence.org, not just spelling errors.
Gram_Stone (6y, +4): This came to mind for me as well. This, from Burdensome Details, popped out at me: "Moreover, they would need to add absurdities—where the absurdity is the log probability, so you can add it—rather than averaging them." All this does for me is pattern-match to a Wikipedia article I once read about the concept of entropy in information theory; I don't really know what it means in any precise sense or why it might be true. And the essay even seems to stand on its own without that part. I've come to ignore my fear of not understanding things unless I don't understand pretty much everything I'm reading, but I think a lot of people would get scared that they didn't know enough to read the book and just stop reading.
Kaj_Sotala (6y, +6): Come to think of it, we could collect proposed rewrites / deletions on some wiki page: this seems suitable for a communal effort. The "deletions" wouldn't actually need to be literal deletions; they could just be moved into a footnote. E.g. in the Burdensome Details article, a footnote saying something like "technically, you can measure probabilities by logarithms and..."
Rob Bensinger (6y, +5): I like the idea of turning a lot of these jargony asides, especially early in the book, into footnotes. We'll be needing to make heavier use of footnotes anyway in order to explicitly direct people to other parts of the series in places where there will no longer be a clickable link. (Though we won't do this for most clickable links, just for the especially interesting / important ones.) You're welcome to use a wiki page to list suggested changes, or a Google Doc; or just send a bunch of e-mails to errata@intelligence.org with ideas.

Awesome! How large is it altogether (in words)?

alexvermeer (6y, +9): Approximately 600,000 words!

Which is roughly the length of War and Peace or Atlas Shrugged.

Transfuturist (6y, +7): Ah, so about as large as it takes for a fanfic to be good. :P
[anonymous] (6y, +5):

I don't have PayPal or a credit card or bitcoins or similar, so $0 price for now; I will look into donating from my Maestro debit card, or maybe a direct transfer, although international transfer rates may make that not worth the while. That and cash are the only methods I use - I rarely need anything I cannot buy with them. (I use gift cards purchased in shops for Steam and Google Play.) I am thinking about purchasing some bitcoins for € for such donation purposes; can anyone recommend a safe and debit card (or sofort.com) compatible service?

[anonymous] (6y, +0): If you set the price to $0.00 then you don't need to give any payment information.

That's awesome!

MrMind (6y, +7): Both links don't work, though: you have lesswrong.com/ prefixing every correct address.
Rob Bensinger (6y, +3): Fixed! Thanks.

Do people think there is value in making an audio book from this?

I was thinking it would be possible to do in a similar process to the HPMOR audiobook with people contributing different chapters. If there is interest in doing this and if it is permitted to be done then I will happily volunteer to coordinate the effort. If this idea does have support then given the discussion below about how the book could be improved, would it make more sense to postpone an audiobook to allow for sensible changes, or is that an unnecessary delay in search of unreachable perfection?

Vaniver (6y, +5): Yes; one is being made [http://castify.co/channels/53-rationality-from-ai-to-zombies-volume-1-beta] by Castify.
[anonymous] (5y, +2):

I just finished listening to the audiobook version of Rationality: From AI to Zombies. Lots of thanks to Yudkowsky and everyone else that was involved in making this book and the audiobook. I do not know who the reader of the audiobook is, but thanks all the same.

I am writing this comment as my way of praising this book. I will try to summarize what I have personally learned from it, in the hope that someone who was involved will read this post and feel some pride in having helped me in my self-improvement. But I am also writing this comment because I ju...

Linda Linsefors (8mo, +1): I'm leaving this comment so that I can find my way back here in the future.
[anonymous] (6y, +2):

Does the book (especially the printed version) have practice problems after sections? (I don't have it; sorry if the question is redundant.)

[anonymous] (6y, +1): It does not.
[anonymous] (6y, +0): Maybe it should, for people who won't discuss things online for some reason.

Is a printed six-volume set still being worked on?

Raemon (3y, +3): There are printed versions of book 2 that are given out sometimes at CFAR.
jrincayc (3y, +1): How to Actually Change Your Mind (book 2) is definitely a great section of Rationality: From AI to Zombies.
Elo (3y, +2): Not that I know of.

Might be worth including the Amazon.co.uk and other store links.

[anonymous] (5y, +0):

A friend of mine is interested in reading this book, but would prefer a printed copy. Is there any chance that a print edition will be published any time soon?

jrincayc (4y, +0): I have used the two column version https://github.com/jrincayc/rationality-ai-zombies/releases/download/beta3/rationality_from_ai_to_zombies_2c.pdf with https://www.lulu.com/ to make a printed version for myself. (Update: beta3 has quite a few problems that have been fixed in newer versions, so grab a new release if you are printing it: https://github.com/jrincayc/rationality-ai-zombies/releases ) Note that there are problems with that pdf, so it isn't perfect, but it might work. The regular PDF is too long to print as a single book.

Is it possible to make the book available on Google Play Books? What might be reasons not to include the book there?

[anonymous] (6y, +0):

Is there anything on procrastination? I'm tempted to buy this book instead, 'cause the dude has an alright podcast too. I don't listen to it anymore, 'cause it's boring and not consistently novel information, but yeah.

When I feel like this I don't want to read complex-sounding chapters like Rationality and Politics and Death Spirals that, without having read the Sequences, don't mean shit to me and could equally appear in some random Trotskyist propaganda from the weird organisation down the road.

When are these pop-rationality books gonna be replaced by...

[This comment is no longer endorsed by its author]

Sorry for my problem. I tried downloading 15 times; only once did it start, and it stopped at 1.5M/30.6M. The other attempts couldn't even get going. I wish to use another source, or could some kind friend send the PDF to 513493106@qq.com? I deeply bow for your help!

[This comment is no longer endorsed by its author]