

I’ve been reading the hardcover SSC collection in the mornings, as a way of avoiding getting caught up in internet distractions first thing when I get up. I’d read many of Scott Alexander’s posts before, but nowhere near everything posted; and I hadn’t before made any attempt to dive into the archives to “catch up” to the seeming majority of rationalists who have read everything Scott Alexander has ever written.

Just a note that these are based on the SlateStarCodexAbridged edition of SSC:

I still think that this problem is intractable so long as people refuse to define 'rationality' beyond 'winning'.

I, in general, try to avoid using the frame of 'rationality' as much as possible precisely because of this intractability. If you talk about things like existential risk, it's clearer what you should know to work on that.

This talk is required reading for designing a tag system:

I can also recommend the book The Intellectual Foundation Of Information Organization by Elaine Svenonius.

I don’t see how the law of “people are obligated to respond to all requests for clarifications”, or even “people always have to define their terms in a way that is understood by everyone participating”, is somehow an iron law of communication. If anything, it is not an attribute that any existing successful engine of historical intellectual progress has had. Science has no such norms, and if anything strongly pushes in the opposite direction, with inquiries being completely non-public, and requests for clarification being practically impossible in public venues like journals and textbooks. Really very few venues have a norm of that type (and I would argue neither did historical LessWrong), even many that strike me as having produced large volumes of valuable writing and conceptual clarification.

Some thoughts.

I don’t see how the law of “people are obligated to respond to all requests for clarifications”

I feel like Said is either expressing himself poorly here, or being unreasonable. After all, the logical conclusion of this would be that people can DDoS an author by spamming them with bad faith requests for clarification.

However I do think there is a law in this vein, something more subtle, more nuanced, a lot harder to define. And its statement is something like:

In order for a space to have good epistemics, here defined as something like "keep out woo, charlatans, cranks, etc", that space must have certain norms around discourse. These norms can be formulated many different ways, but at their core they insist that authors have an obligation to respond to questions which have standing and warrant.

Standing means that:

  • The speaker can reasonably be assumed not to be acting in bad faith
  • The speaker is an abstract "member of the community"
  • It is generally agreed on by the audience that this person's input is in some way valuable

There are multiple ways to establish standing. The most obvious is to be well respected, so that when you say something people have the prior that it is important. Another way to establish standing is to write your comment or question excellently, as a costly signal that this is not low-effort critique or Paul Graham's infamous "middlebrow dismissal".

Warrant means that:

  • There are either commonly assumed or clearly articulated reasons for asking this question. We are not privileging the hypothesis without justification.

  • These reasons are more or less accepted by the audience.

Questions & comments lacking either standing or warrant can be dismissed; in fact, the author does not even have to respond to them. In practice the determination of standing and warrant is made by the author, unless something seems worthy enough that their ignoring it would be conspicuous.

I think you would be hard-pressed to argue to me in seriousness that academics do not claim to have norms that people's beliefs are open to challenge from anyone who has standing and warrant. I would argue that the historical LessWrong absolutely had implicit norms of this type. Moreover, EY himself has written about insufficient obligation to respond as a major bug in how we do intellectual communication.

I have this intuitive notion that:

I do think the relevant question is whether your comments are being perceived as demanding in a similar way. From what I can tell, the answer is yes, in a somewhat lesser magnitude, but still a quite high level, enough for many people to independently complain to me about your comments, and express explicit frustration towards me, and tell me that your comments are one of the major reasons they are not contributing to LessWrong.

I agree that you are not as bizarrely demanding as curi was, but you do usually demand quite a lot.

When people talk about "demanding" in this sense, they are actually making a very low-level reasoning mistake that EY describes in his post on Security Mindset:

AMBER: That sounds a little extreme.

CORAL: History shows that reality has not cared what you consider “extreme” in this regard, and that is why your Wi-Fi-enabled lightbulb is part of a Russian botnet.

AMBER: Look, I understand that you want to get all the fiddly tiny bits of the system exactly right. I like tidy neat things too. But let’s be reasonable; we can’t always get everything we want in life.

CORAL: You think you’re negotiating with me, but you’re really negotiating with Murphy’s Law. I’m afraid that Mr. Murphy has historically been quite unreasonable in his demands, and rather unforgiving of those who refuse to meet them. I’m not advocating a policy to you, just telling you what happens if you don’t follow that policy. Maybe you think it’s not particularly bad if your lightbulb is doing denial-of-service attacks on a mattress store in Estonia. But if you do want a system to be secure, you need to do certain things, and that part is more of a law of nature than a negotiable demand.

There is a certain level of detail and effort that simply has to go into describing concepts if you want to do so clearly and reliably. There are inviolable, non-negotiable laws of communication. We may not be able to precisely define them, but that doesn't mean they don't exist. We certainly know some of their theoretical aspects thanks to scholars like Shannon.
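The Shannon point can be made concrete. His noisy-channel coding theorem is exactly the kind of non-negotiable law being invoked here (this is the standard textbook statement, not anything specific to this post):

```latex
% Channel capacity: the maximum, over input distributions, of the
% mutual information between channel input X and output Y.
C = \max_{p(x)} I(X;Y)

% Noisy-channel coding theorem: for any rate R < C there exist codes
% achieving arbitrarily small error probability; for any rate R > C
% the error probability is bounded away from zero, no matter how
% clever the encoding scheme. The law is not negotiable.
```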

I think a lot of what Said does is insist that people put in that effort, that The Law be followed, so to speak. Unfortunately there is no intrinsic punishment for breaking the law besides being misunderstood (which isn't really so costly to the speaker, and is hard for them to detect in a blog format). That means they commit a map/territory error analogous to the Rust programmer who insists Rust makes things much harder than C does. There's probably some truth to this, but much of it is just that Rust forces the programmer to write code at the level of difficulty it would have if C didn't let you get away with things being broken.
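The Rust/C analogy can be sketched in a few lines (the function and names here are illustrative, not from the original). In C, a fallible lookup typically returns a pointer you are free to dereference without checking; in Rust, the same lookup returns an `Option`, and the compiler refuses to let you skip the failure case:

```rust
// A fallible lookup. In C this would likely return a nullable pointer,
// and nothing would force the caller to check it before dereferencing.
// Rust returns Option<u32>, so the missing case cannot be ignored.
fn find_user(users: &[(&str, u32)], name: &str) -> Option<u32> {
    users.iter().find(|(n, _)| *n == name).map(|(_, age)| *age)
}

fn main() {
    let users = [("alice", 34), ("bob", 28)];

    // The compiler forces us to say what happens when the user is missing.
    match find_user(&users, "carol") {
        Some(age) => println!("carol is {}", age),
        None => println!("no such user"), // the case C would let us forget
    }
}
```

The difficulty of the missing-user case was always there; C merely let you leave it unhandled, the same way vague writing lets an author leave the hard parts of an explanation unhandled.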

Before and After

At the start of the decade I was 13, I'm now 23.


Before: I was a recovering conspiracy theorist. I'd figured out on my own that my beliefs should be able to predict the future, and started insisting they do. I wrote down things I expected to happen by a certain time in a giant list, and went back to record the outcome. I wanted to be a video game developer, but didn't know how to start.

A 13 year old boy sits on a swingset in his backyard, listening to Owl City[0] and Lemon Demon[1] as frosty dew melts off green grass in the morning sun. He's daydreaming about the end of the world and his impending death. There is no god and nobody is coming to save him.

After: The oldest copy of Harry Potter and the Methods of Rationality I can find on my computer is dated January 1st of 2011 at 4:13AM. Now in 2019 I have read many books about phreakers, hackers, makers, computer wizards, rationalists, stats nerds, and the subjects that interest them. My enthusiastic anarchism has given way to a grim realpolitik that still values freedom but understands there are no easy solutions and everything runs on incentives. I call myself an extropian because 'singularitan' sounds too awkward.

A young man is washing the main board of an original Xbox with vinegar. His work bench has an overhead light, it's the brightest thing in the room and everything else looks dim by comparison. The intent of the Xbox was that its data be confined to its aging hardware. He remembers taking Adderall that day, he has perfect focus as he washes away the corrosion left behind by the clock capacitor. During this task he reflects on the decay inherent in all things. The data in his brain is also confined to its aging hardware, and as it ages it corrodes. In his reflections he is no different from this Xbox, peering into his magnifying glass at an eroded trace on the board he sees the infinite void ahead of him. He imagines himself to be washing the body of an embryonic god.


Before: I was probably most skilled at playing Halo, and only so good at that. I found the idea of writing a 2-3 page essay an imposition. It was around this time that I first installed Linux; I could not program.

After: I am now probably most skilled at writing, but only so good at that. ;) I can write a 12 page lab report in a weekend. I'm skilled enough at programming to write a compiler.

Career & Lifestyle

Career is just starting, though I did make a point of trying to do Real Things during school. Lifestyle is more or less unchanged, a lot of time spent indoors on nerdy things.


Oops and Duh

  • The curse of dimensionality makes it easy to get confused about people's ability relative to each other. It is however a map/territory error to believe that your confusion means there is no sense in which some people are massively more competent than others. Duh.
  • People are only a little altruistic, and only value 'purity' in products a little for its own sake. Distributed systems will generally lose to centralized systems which are more convenient, because they more or less compete on the same metrics. If you want people to use them then, you need to work a lot harder. Oops.
  • The reason why you got diagnosed with ADD as a kid isn't because it was a fad, it's because you had every symptom including the emotional regulation issues[2] which are part of the disorder but not in the DSM. Incidentally, you have to fight so hard to do schoolwork because you have untreated ADD. Oops.
  • Instead of trying to write your own programs while you learn to program, you'd be better off trying to clone other programs that already exist. This frees you from having to do any of the design while you struggle with programming, gives you an objective measure of progress, ensures you are capable of doing useful work, and has other benefits as well. I wasted lots of time by not knowing this. Duh.


Probably the biggest habit I broke was playing video games. I rarely play video games these days, and go out of my way to avoid television and fiction stories as well. Life is too short to waste it on transient hallucinations, the real world is much more interesting.

I think the biggest habit I started was talking to people, a lot. With the Internet and smartphones you can basically always be in a conversation if you want to. I started making a point of always talking to people about my ideas, getting feedback, practicing persuasion, etc.


I spent 7 of the last 10 years in school, and I hate school. Realistically then if I'm being honest with myself, this was not a fun decade for me. I probably had more bad experiences than good, but the good experiences were good enough to balance it out.

Maybe I'll come back to this section later and edit in more, maybe I won't. :)

Worth Noting

  • I'm overall satisfied with this decade. I could have done more if I was playing perfectly, but I feel pretty good about where I am right now.

  • My past self should really get his ADD treated before he spends 4 years of high school struggling against it. He should also stop focusing so much on notions of program 'correctness' that he's not even qualified to understand, and just focus on replicating the computer programs he interacts with. It's okay to use a web framework. The reason he's not intellectually satisfied with the web is that all the knowledge he wants is on Google Scholar, buried in academic PDFs and print books. I think my past self would probably be pretty skeptical of a lot of this, and then figure out it's true as he fails to make progress fast enough.

  • I'll probably remember the 2010s for: Anonymous, Wikileaks, Machine Learning, frivolous smartphone driven social media apps, memes, the Lain-ification of the Internet with the alt-right & Trump (etc), economic anxiety and rent seeking, the death of journalism.

[0]: This Is The Future by Owl City

[1]: Sundial by Lemon Demon

[2]: I grew up and no longer have emotional regulation issues.

The CFAR branch of rationality is heavily inspired by General Semantics, with its focus on training your intuitive reactions, evaluation, the ways in which we're biased by language, etc. Eliezer Yudkowsky mentions that he was influenced by The World of Null-A, a science fiction novel about a world where General Semantics has taken over as the dominant philosophy of society.

Question: Considering the similarity of what Alfred Korzybski was trying to do with General Semantics to the workshop and consulting model of CFAR, are you aware of a good analysis of how General Semantics failed? If so, has this informed your strategic approach with CFAR at all?

Does CFAR have a research agenda? If so, is it published anywhere?

By looking in-depth at individual case studies, advances in cogsci research, and the data and insights from our thousand-plus workshop alumni, we’re slowly building a robust set of tools for truth-seeking, introspection, self-improvement, and navigating intellectual disagreement—and we’re turning that toolkit on itself with each iteration, to try to catch our own flawed assumptions and uncover our own blindspots and mistakes.

This is taken from the about page on your website (emphasis mine). I also took a look at this list of resources and notice I'm still curious:

Question: What literature (academic or otherwise) do you draw on the most often for putting together CFAR's curriculum? For example, I remember being told that the concept of TAPs (trigger-action plans) was taken from some psychology literature, but searching Google Scholar didn't yield anything interesting.
