My composition teacher in college told me that in some pottery schools, the teacher holds up your pot, examines it, comments on it, and then smashes it on the floor. They do this for your first 100 pots.

In that spirit, this post's epistemic status is SMASH THIS POT.

Eliezer’s fundamental genius is that he is a profoundly original storyteller. From fables to fanfic, childhood recollections to water-cooler anecdotes, he has a keen eye for the firings and misfirings of the brain and how they operate in groups, conversations, and within himself. He gestures at scientific literature, references game theory, but narrative is the motor of his argument. His posts are replete with his ideals, visions, fears, and sheer force of personality, imbuing the world with his indelible stamp.

Stories are the motor of Eliezer's rationality, and they are at the same time the root cause of our insanity. You wouldn't cut off your right arm to cure your clumsiness, and so you shouldn't try to put your own storytelling capacity in a straitjacket. The only way to learn its virtues and its capacity to lead you astray is to practice it.

I'm no master storyteller, but I motivate some of my music students by telling them tales in which they can go on adventures and earn rewards in exchange for doing what I ask of them. The locations, the events, the moral of the story: those are all up to me. Through practice, I learn how to shape the story in ways they will respond to. It comes more easily than you might think, but you need feedback and a willingness to take risks.

Somewhere in the effort to systematize rationality, I think the deeply personal nature of this effort got lost. We know self-help is idiosyncratic, and yet...

Maybe your rationality has a different motor than Eliezer's. And perhaps we as a community have moved on. So much of LessWrong's early writings are steeped in scientific findings that died in replication. I read Thinking, Fast and Slow on a family vacation in Italy, in between visits to cathedrals in the capitals of ancient superpowers. I was already full of the mix of frustration and idealism that predisposes one to Effective Altruism long before I ever heard the term. Exposure to these ideas can be a catalyst when the reactants are already present, but what keeps the reaction self-sustaining?

To be honest, I never liked Eliezer's writings. The Biblical language, the cutesy fables, the self-aggrandizement. Then I started asking what made him able to play such a huge role in catalyzing a movement for rationality, a movement I think is both personally and globally important.

I started to see the bravery required to not only expose your life, but to draw general lessons from it and put them forth so compellingly that others find they speak to them. That's a form of identity politics, sure, but I'm a big believer in identity politics. Maybe half our insanity isn't from a failure to look out, but a failure to look within.

My fear about systematized rationality is that it supplies us with methods and expected conclusions, and is thus vulnerable to Goodhart's Law. I'm still a believer in the kind of art that undermines your confidence in the answers it provides. Self-defeating propaganda. The propaganda of doubt.

Still, there are artists who escape the systematization of the academy, and artists who transcend it. Debussy was the former, Beethoven the latter. Or, if you like musical theater, Lin-Manuel Miranda and Stephen Sondheim. Systematized rationality might yet produce its own genius practitioner.

Eliezer's drive for rationality manifests as much in his palpable distaste for, even fear of, his own irrationality as in a desire or need for clearer thought. It's the primordial disgust reaction, the childhood instinct that this seems bad, and I'd better avoid it.

Stories can have that effect. They can become impregnated with a tremendous amount of real-world experience, like a painting that only becomes more beautiful as the viewer applies their own brushstrokes. You can only really add to a story when you learn to tell your own. Don't shy away from it. Lean into it head-on. Storytelling is a political and potentially irrational act, but that's only toxic when there's just one story and it is mandatory. Eliezer put forth new stories, ones we hadn't heard before, and interpreted them in a new light. One way to fight poison is by dilution.

When I was in middle school, my social studies teacher had us simulate ancient Athens. She imposed a rule that the women couldn't speak in our class debates. Only the boys could vote. What would we do?

I stood up immediately and gave a speech, boldly speaking out against this unjust system and calling on my classmates to vote for women's enfranchisement. The class was unanimously in favor, and that was the end of that.

My teacher found a way to draw us into a new story, to make it real enough for us. And I had an experience of my words having power. The project could have gone another way, after all. Maybe another boy would have stood and spoken in favor of humiliating the girls in the class. Perhaps they eventually would have rebelled against us.

There were more staid ways to present that lesson: a page out of the textbook, a written assignment. Those would have had superficially the same content, but I'd have forgotten them; I never forgot this. A different teacher, back in 5th grade, used to threaten us with textbook reading as a punishment when we got distracted from his genuinely much more interesting presentations. The medium is the message.

If you wanted to get across the message of rationality through story, what would you disclose? Don't be shy. Just start telling the story that springs to mind. You'll find the reason for it as you go along if you practice telling it enough.

Comments

Nitpick: you write "Eliezar" throughout where it should be "Eliezer".

Fixed, thanks!

Continued nitpick: the first word of the post is now "Eleizer's", when it should be "Eliezer's" ;-)

Aaaaargh fixed 

So much of LessWrong's early writings are steeped in scientific findings that died in replication.

Uh-oh, I didn't know about this. Does anyone know which ones?

My fear about systematized rationality is that it supplies us with methods and expected conclusions, [...]. I'm still a believer in the kind of art that undermines your confidence in the answers it provides. 

What? What are the "expected conclusions" of rationality? My understanding was that rationality is supposed to be *exactly* the kind of art you describe in the second sentence here.

Disclaimer: I sort of skimmed this post, maybe I'm missing something.

Here's a quote from one of Eliezer's early blog posts (2006):

Very recently—in just the last few decades—the human species has acquired a great deal of new knowledge about human rationality. The most salient example would be the heuristics and biases program in experimental psychology. There is also the Bayesian systematization of probability theory and statistics; evolutionary psychology; social psychology. Experimental investigations of empirical human psychology; and theoretical probability theory to interpret what our experiments tell us; and evolutionary theory to explain the conclusions. These fields give us new focusing lenses through which to view the landscape of our own minds. With their aid, we may be able to see more clearly the muscles of our brains, the fingers of thought as they move. We have a shared vocabulary in which to describe problems and solutions. Humanity may finally be ready to synthesize the martial art of mind: to refine, share, systematize, and pass on techniques of personal rationality.

Social psychology was one of Eliezer's top three sources for his program to overcome bias. It took a body blow in the replication crisis.

Read through Eliezer's posts or CFAR's handbook, and it will be transparently clear that rationality, for them, is not an objective procedure. It's a thoroughly human act, and it's also a lifestyle and an attitude. Eliezer spills many pixels talking in Biblical tones about how we should feel. Much of systematized rationality consists of techniques for investigating and producing feelings that are believed, through introspection, to be associated with more rational thought. As one example, look at the technique called "Boggling" in the CFAR handbook.

It is the systematization of these intuitive, introspection-based techniques that I'm worried about. Now that some self-appointed experts with a nonprofit have produced this (genuinely valuable) material, it makes it easier for people to use the techniques with the expectation of the results the creators tell them they'll receive, rather than doing their own introspection and coming up with original insights and contradictory findings.

Now, where else have I heard of that sort of thing before?

As a quick sidenote on this (though I really wish someone would finally write the "this is how the replication crisis affected all the rationality stuff from the early 2010s" sequence), most of the stuff in the sequences wasn't that badly hit by the replication crisis. In particular Kahneman's work generally survived the replication crisis much better than the average work in social psychology, though it still took some hits. 

I have lots of thoughts on the details here that I sadly don't have time to write up. Overall my sense is that there are definitely quite a few posts in the sequences that are now highly epistemically dubious, but that most of the sequences actually held up quite well, and that the worst offenders have been removed from the recent editions of R:A-Z (like the post referencing Robbers Cave, which sure turned out to be primarily just scientific malpractice).

One obstacle to discovering how the sequences were affected is that some of the dependencies on psychology/sociology/etc might not be explicitly called out, or might not even have been explicit in Eliezer's own mind as he wrote. But I would just say that means we'll have to work harder at sussing out the truth.

I too would like to see that sequence. As a start, is there a list of posts that have been removed from R:A-Z?

I want to begin my response by noting that I'm in the stage of learning about rationality where I feel that there are still things I don't yet know that, when I learn them, will flush some old conclusions completely down the toilet. (I think this is what Nick Bostrom calls a crucial consideration). So, if there's evidence and/or reasoning motivating your position beyond that which you've shared already, you should make sure to identify it and let me know what it is, and it might genuinely change my position.

That said, I think the arguments I see in this comment are flawed. Before I say why, let me first say exactly what I think the points of disagreement are. First, the replication crisis. I think the following statement (written by me, but taken partly from your post) is one you would agree with and I am rather skeptical of:

Many of the conclusions found in LessWrong's early writings have been cast into doubt, on account of having relied on social psychology results that have been cast into doubt.

I read the first few books of the sequences about a year ago, and then I read all of the sequences a couple of months ago. From what I recall, the heuristics & biases program and Bayesian statistics played a dominant role in generating his conclusions, with some evolutionary theory serving to exemplify shortcomings in human reasoning by contrasting what evolutionary theorists used to believe with what we now know (see the Simple Math of Evolution sequence). I don't recall much reliance on social psychology, though I also don't have a very good grasp on what that field studies, so I might not recognize its findings when I see them. Are there specific examples of posts you can give whose conclusions you think (a) rely on results that failed replication and (b) are dubious because of it?

I'd like to note that, although I haven't checked his examples myself, I suspect Eliezer knew to be careful about this kind of thing. In How They Nail It Down, he explains that a handful of scientific studies aren't enough to believe a phenomenon is real, but that a suite of hundreds of studies, each pitting the orthodox formulation against some alternate interpretation and finding the orthodox interpretation superior, is. He uses the Conjunction Fallacy, one of his go-to examples of human bias, as an example of a phenomenon that passes the test. Perhaps Eliezer managed to identify the phenomena that had not yet been nailed down (and would go on to fail replication) and managed not to rely on them?

Now the second disagreement. I think you would say, and I would not, that:

Rationality has expected conclusions, such as "AI is a serious problem" or "the many-worlds interpretation of quantum physics is the correct one", that you are supposed to come to. Furthermore, you are not supposed to doubt these conclusions -- you're just supposed to believe them.

I admit that Eliezer's position on doubt is more nuanced than I remembered as I wrote everything above this sentence. But have a look at The Proper Use of Doubt, from the sequence Letting Go. In this essay, he warns against having doubts that are too ineffectual; in other words, he advises his audience to make sure they act on their doubts, and that, if appropriate, the process of acting on their doubts actually results in "tearing a cherished belief to shreds" (emphasis mine).

[...] rationality, for them, is not an objective procedure. It's a thoroughly human act, and it's also a lifestyle and an attitude.

I'm not entirely sure what you're getting at with the "objective procedure / human act" distinction. Based only on the labels, I would tentatively agree that rationality is very much a human act. Overcoming biases specific to the human brain is one of its pillars, after all. But I'm not sure what this has to do with either of the points I raised in my comment. Maybe you could put it another way?

It is the systematization of these intuitive, introspection-based techniques that I'm worried about. Now that some self-appointed experts with a nonprofit have produced this (genuinely valuable) material, it makes it easier for people to use the techniques with the expectation of the results the creators tell them they'll receive, rather than doing their own introspection and coming up with original insights and contradictory findings.

Now, where else have I heard of that sort of thing before?

You've probably seen something like it at the heart of every knowledge-gathering endeavor that lasted more than one generation. Everything I know about particle physics was taught to me; none of it derives from original thought on my part. This includes the general attitude that the universe is made of tiny bits whose behavior can be characterized very accurately by mathematical equations. If I wanted to derive knowledge myself, I would have to go out to my back yard and start doing experiments with rocks -- unaware not only of facts like the mass of a proton, not only of the existence of protons, but also of the existence of knowledge such as "protons exist". I would never cross that gap in a single lifetime.

It seems to me that there is a trade-off between original thought, which is good, and speed of development of a collaborative effort, which is also good. Telling your students more results in faster development, but less original thought and therefore less potential to catch mistakes. Telling them less results in more original thought, but also more wheel-reinvention. I admit that there will be some tendency for people to read about techniques of rationality and then immediately fall victim to the placebo effect. But I think there is also some tendency for Eliezer and CFAR to be smart, say true & useful things, and then pass them on to others who go on to get good use out of them.

Would you agree with that last statement? Do you think my "trade-off" analysis is appropriate? If so, is it just that you think the rationalist community leans too far towards teaching-much and too far away from teaching-little? Or have I completely mis-characterized the problem you see in rationalist teachings (exemplified by Boggling)?

In a normal scientific field, you build a theory, push it to the limit with experimental evidence, and then replace it with something better when it breaks down.

LW-style rationality is not a normal scientific field. It's a community dialog centered around a shared set of wisdom-stories. These wisdom-stories are based on the author's own lives, and are interpreted through the lens of psychology, economics, statistics, and game theory.

I posit that we are likely to be an average example of such a community, with an average amount of wisdom and an average set of foibles. One of those foibles will be the perception that the leaders know what they're doing and that we can trust them as guides.

Another will be a redirection of focus toward building an outside-view model of ourselves as we are and aligning it with a model of our ideal, and a consequent downplaying of the idea that we're messy human beings whose internal experiences are hard to pin down.

And a third will be a perception that we cannot access the same wisdom that the original writers drew from their own lives and generalized into concepts and rules for living. Instead, we read their concepts and rules, internalize them, and try to fit them to our lived experience. They had an experience, a reaction to it, built on it, and when the time was right, built a conceptual tool out of it. We take the tool and look for nails to pound in our own lives. That can be a powerful strategy, if you know precisely what your hammers and nails are, can truly focus in on one problem, and know when you've solved it.

My anxiety is that I/we are getting off-track, alienated from ourselves, and obsessed with proxy metrics for rationality. Anyone familiar with the dilemma of AI alignment should see that this is a fundamental problem for any intelligence.

One way of dealing with this is to create more proxy metrics. Rationalist writings and techniques are so voluminous and accessible that I suspect we over-weight them, and that our community has succumbed to some extent to Goodhart's law and the streetlight effect. We focus on what life changes fit into the framework or what will be interesting to others in this community, rather than what we actually need to do. I'd like to see more storytelling and attempts to draw original wisdom from them, and more contrarian takes/heresy.

However, as I wrote this response, I also realized that maybe it would be interesting to just pick one of the systematized "hammers" of rationality technique, and just start pounding as many nails as possible with it to see what happens. So I changed my own mind to some extent. Maybe I haven't taken these ideas seriously enough.

Sorry that this is all horrible horrible punditry, darkly hinting and with no verifiable claims, but I don't have the time to make it sharper. 

LessWrong itself seems to me a fairly broad forum where a lot of different ideas are discussed. As far as the broader community goes, it seems to me that instead of Goodharting, the community often pursues different goals, maybe even goals different enough that some older ideas aren't pursued anymore because people are less interested in certain framings.

In a normal scientific field, you build a theory, push it to the limit with experimental evidence, and then replace it with something better when it breaks down.

LW-style rationality is not a normal scientific field.

I was under the impression that CFAR was doing something like this, using evidence to figure out which techniques actually do what they seem like they're doing. If not... uh-oh! (Uh-oh in the sense that I believed something for no reason, not in the sense that CFAR would therefore be badwrong in my eyes.)

It's a community dialog centered around a shared set of wisdom-stories. [...] I posit that we are likely to be an average example of such a community, with an average amount of wisdom and an average set of foibles.

I'm not sure I know what kind of community you're talking about. Are there other readily-available examples?

One of those foibles will be [...] Another will be [...] And a third will be [...]

How do you know?

More charitably, I do think these are real risks. Especially the first, which I think I may fall victim to, at least with Eliezer's writings.

My anxiety is that I/we are getting off-track, alienated from ourselves, and obsessed with proxy metrics for rationality. [...]  We focus on what life changes fit into the framework or what will be interesting to others in this community, rather than what we actually need to do. I'd like to see more storytelling and attempts to draw original wisdom from them, and more contrarian takes/heresy.

My current belief (and strong hope) is that the attitude of this community is exactly such that if you are right about that, you will be able to convince people of it. "You're not making improvements, you're just roleplaying making improvements" seems like the kind of advice a typical LessWronger would be open to hearing.

By the way, I saw your two recent posts (criticism of popular LW posts, praise of popular LW posts) and I think they're good stuff. The more I think on this, the more I wonder if the need for "contrarian takes" of LW content has been a blind spot for me in my first year of rationality. It's an especially insidious one if so, because I normally spit out contrarian takes as naturally as I breathe.

Sorry that this is all horrible horrible punditry, darkly hinting and with no verifiable claims, but I don't have the time to make it sharper.

I've been there! ^^

I was under the impression that CFAR was doing something like this, using evidence to figure out which techniques actually do what they seem like they're doing.

They sort of are, in that they grade the epistemic status of the techniques they teach by anecdotal reports of their users, or any scientific evidence or empirically-backed theory that seems related. To my knowledge, they're not running RCTs.

The Wikipedia page on normal science states:

Kuhn stressed that historically, the route to normal science could be a difficult one. Prior to the formation of a shared paradigm or research consensus, would-be scientists were reduced to the accumulation of random facts and unverified observations, in the manner recorded by Pliny the Elder or Francis Bacon, while simultaneously beginning the foundations of their field from scratch through a plethora of competing theories.

Arguably at least the social sciences remain at such a pre-paradigmatic level today.

If the whole field of social science is pre-normal-science, then undoubtedly the work of CFAR is as well. They don't have even an appreciable fraction of the scientific resources that are thrown at social science. Of course, this might just be some STEM-lord Wikipedia editor's snark, though it's got a reference to some article in a Philosophy of Science book.

My understanding of the reason it gets characterized in this way is that in the social sciences, we don't see this shift from one uniform consensus about how reality works to a different uniform consensus. We do see that in the physical sciences and in math. This is by definition what differentiates normal from pre-paradigmatic science. That doesn't have to be a marker for what's true, useful, or legitimate, but it's something to be aware of.

I'm not sure I know what kind of community you're talking about. Are there other readily-available examples?

Any religious, professional, support-group, or tribal community would be an example.

How do you know?

Anecdotal evidence based on common sense and life experience.

"You're not making improvements, you're just roleplaying making improvements" seems like the kind of advice a typical LessWronger would be open to hearing.

Hearing, but listening? Part of my fear is that we're practicing the performance of skepticism and open-mindedness, rather than the genuine article. That's not necessarily a bad thing - fake it 'til you make it, and all that. And if that's the best we have, then I'm glad we have it.

By the way, I saw your two recent posts (criticism of popular LW posts, praise of popular LW posts) and I think they're good stuff. The more I think on this, the more I wonder if the need for "contrarian takes" of LW content has been a blind spot for me in my first year of rationality.

Thank you! I don't think that contrarianism is the most important aspect of my posts. Others in the comments of the original articles posted similar objections to my own, and they're not canon by any means. And heck, one of them was an article of praise.

Instead, I think it's the idea of generativity. In mainstream scientific scholarship, there's a sense that articles not only reference each other, but test, apply, and respond to each other. We have lots of referencing here on LW, but not much of the testing, applying, and responding. This can make it hard to separate ideas from the individuals who convey them, or from the original language they get presented in. It also means that writers on this blog anticipate only short-term scrutiny in the form of comments. Long-term scrutiny, in the form of posts that might not just reference but be exclusively focused on confronting a piece from a year or a decade ago, is much less common.

I actually wish we had this in journalism and especially in punditry.

The result of that dynamic is that the hottest takes produce long-lasting memes that take on truthiness merely because everybody uses them and references the article that produced them. To actually confront the writer would be to confront a whole community that's used to those ideas, damn it! And the incentive remains to optimize writing for the hot take, not the deeply considered argument that can stand up to direct, careful scrutiny that goes beyond one comment buried amongst 10 or 100 others.

Figuring out how to shift this culture productively will take some experimentation. You don't want to create too many bad feelings that will result in a chilling effect for the whole forum. Nor do you want to raise the bar for posting so high that people feel like it's too much effort. Nobody's getting paid to post here, and lots of people already feel intimidated by this forum.

I think that a couple ideas have the most general promise. One is praise articles, especially of lesser-known articles or writers. Another is criticism of weak articles by well-known authors, since they have enough social capital that you don't look like you're punching down. But I have no evidence to back that up. This is emphatically not normal science!

I don't recognize the connection you're drawing between good storytelling and identity politics. I would've said good storytelling touches people regardless of their identity group.

If Eliezer's stories only matter to people who are Jewish, millennial, and male, then I'd say he's not telling very good stories (or at least not very useful ones).

Those aren't really the identities he speaks to. I'd say he's more autodidact, atheist, STEM-oriented Silicon Valley futurist.

My take is that original storytelling arises out of the particular circumstances and personality - the identity - that shapes the author's perceptions of the world. Those give them their particular voice and bias. Grappling with that special identity, often but not always through many layers of artifice, gives rise to a special story.

Unoriginal storytelling attracts interest based on its direct appeal to more universal/instinctual fascinations. Sex! Violence! Status!

I see what you're saying now. There is a Motte & Bailey of identity politics where:

Motte = identity is just your particular circumstances and personality

Bailey = The most important ways of understanding people and their ideas are along the lines of race/sex/orientation/(other unalterable traits).

Sounds like you didn't actually mean to use that bailey. I've seen it used a lot elsewhere, so that's what I read here (I even wrote about it here).

Yeah, I'd like to see the term "identity politics" used as a more encompassing term, in which we are freely synthesizing the many aspects of our inner being and social context in order to better analyze the particular ways in which irrationality manifests in our lives. When that's done well, it's compelling. I fear that the culture wars have degraded our ability to do this. People are too afraid to do original and intuitive thinking about the circumstances of their own individual lives. It's safer to accept your role as a living statistic, but you're not one. You're a human being with a story to tell.

I'd like to see the term "identity politics" used as a more encompassing term

Why? That muddles the term and makes it harder to speak about important things that are happening currently. 

People are too afraid to do original and intuitive thinking about the circumstances of their own individual lives. 

How people think about their own lives is not political thinking and thus not identity politics. Political thinking is about how you interact with other people. 

How people think about their own lives is not political thinking and thus not identity politics. Political thinking is about how you interact with other people. 

This statement seems both wrong and deeply implausible to me. Are there a few forms of thought that don't really concern other people? Sure. That's things like playing Sudoku, or getting a song stuck in your head, or maybe working on pure mathematical research. Almost every other form of deliberate thought cashes out to interacting with other people and influencing the polis.

Why? That muddles the term and makes it harder to speak about important things that are happening currently.

Because thinking about our lives, which I deem a political act, is based on private information that heavily correlates with the various forms of identity that are important to us. This makes perfect sense. The problems I face, the solutions I find, and the overall attitude toward life that I develop will be heavily shaped by the mutable and permanent factors that make up my identity. They'll seem most sensible and be most useful to others who share those characteristics. The people I know will tend to be like me, because we associate with those who have the most targeted, useful information to share with us.

It should be no surprise when the attitudes, priorities, and practices of other groups don't align with our own, and when they find us strange in turn. We have different identities and therefore interact with the polis in different ways. The output of our communities in speech and action is a form of identity politics.

The drive in this community to downplay conventional identity characteristics, such as race, gender, orientation, and so on, in favor of either a universal identity (as rational humans) or an adoptable identity (futurists, utilitarians, STEM-oriented people, atheists, etc), speaks to a particular need that people with this constellation of identities have. To someone who doesn't share these identities, the amount of weight we put onto this wouldn't make sense. To others, our insistence on downplaying conventional identities and denying that we're engaging in identity politics might seem disingenuous or even sinister. They'll have reasons for this that will make sense to them based on the particular problems they face because of the shared patterns of their own lives - their shared identity.

I find it comforting and helpful to understand that we're all engaged in identity politics. That it's OK. That it's normal for it to produce deep disagreement. What's happening here is a collective attempt to triangulate useful ideas to our allies who are most likely to find them sensible and helpful.

I need to move on from this line of thought, so I hope you find this reply helpful. I'm committing to this being my last response along these lines within this post.