Ruby's Public Drafts & Working Notes

by Ruby · 1 min read · 23rd Feb 2019 · 27 comments

A nice name would be: Ruby's Random Ramblings about Rationality. Well, it's a very nice alliteration but a little misleading - probably won't be that random or rambly.

Please don't create top-level comments here, but feel free to reply to comments.


Selected Aphorisms from Francis Bacon's Novum Organum

I'm currently working to format Francis Bacon's Novum Organum as a LessWrong sequence. It's a moderate-sized project, as I have to work through the entire text myself and write an introduction that does Novum Organum justice and explains the novel move of taking an existing work and posting it on LessWrong (short answer: NovOrg is some serious hardcore rationality and contains central tenets of the LW foundational philosophy notwithstanding being published back in 1620, not to mention that Bacon and his works are credited with launching the modern Scientific Revolution).

While I'm still working on this, I want to go ahead and share some of my favorite aphorisms from it so far:

3. . . . The only way to command reality is to obey it . . .

9. Nearly all the things that go wrong in the sciences have a single cause and root, namely: while wrongly admiring and praising the powers of the human mind, we don’t look for true helps for it.

Bacon sees the unaided human mind as entirely inadequate for scientific progress. He sees the way forward as constructing tools/infrastructure/methodology to help the human mind think/reason/do science.

10. Nature is much subtler than are our senses and intellect; so that all those elegant meditations, theorizings and defensive moves that men indulge in are crazy—except that no-one pays attention to them. [Bacon often uses a word meaning ‘subtle’ in the sense of ‘fine-grained, delicately complex’; no one current English word will serve.]

24. There’s no way that axioms •established by argumentation could help us in the discovery of new things, because the subtlety of nature is many times greater than the subtlety of argument. But axioms •abstracted from particulars in the proper way often herald the discovery of new particulars and point them out, thereby returning the sciences to their active status.

Bacon repeatedly hammers that reality has a surprising amount of detail such that just reasoning about things is unlikely to get at truth. Given the complexity and subtlety of nature, you have to go look at it. A lot.

28. Indeed, anticipations have much more power to win assent than interpretations do. They are inferred from a few instances, mostly of familiar kinds, so that they immediately brush past the intellect and fill the imagination; whereas interpretations are gathered from very various and widely dispersed facts, so that they can’t suddenly strike the intellect, and must seem weird and hard to swallow—rather like the mysteries of faith.

Anticipations are what Bacon calls making theories by generalizing principles from a few specific examples and then reasoning from those [ill-founded] general principles. This is the method of Aristotle and of science until that point, which Bacon wants to replace. Interpretations is his name for his inductive method, which generalizes only very slowly, building out increasingly large sets of examples/experiments.

I read Aphorism 28 as saying that Anticipations have much lower inferential distance since they can be built from simple examples with which everyone is familiar. In contrast, if you build up a theory based on lots of disparate observations that aren't universal, you now have lots of inferential distance and people find your ideas weird and hard to swallow.

All quotations cited from: Francis Bacon, Novum Organum, in the version by Jonathan Bennett presented at

Please note that even things written in 1620 can be under copyright. Not the original thing, but the translation, if it is recent. Generally, every time a book is modified, the clock starts ticking anew... for the modified version. If you use a sufficiently old translation, or translate a sufficiently old text yourself, then it's okay (even if a newer translation exists, if you didn't use it).

Yup – Ruby/habryka specifically found a translation that we're allowed to post.

I'm a complete newcomer to information on Bacon and his time. How much of his influence was due to Novum Organum itself vs other things he did? If significantly the latter, what were those things? Feel free to tell me to Google that.

At the very least "The New Atlantis", a fictional utopian novel he wrote, was quite influential, in that it's usually cited as one of the primary inspirations for the founding of the Royal Society:

Why I'm excited by the 2018 Review

I generally fear that perhaps some people see LessWrong as a place where people just read and discuss "interesting stuff", not much different from a sub-Reddit on anime or something. You show up, see what's interesting that week, chat with your friends. LessWrong's content might be considered "more healthy" relative to most internet content, and many people say they browse LessWrong to procrastinate but feel less guilty about it than other browsing, but the use-case still seems a bit about entertainment.

None of the above is really a bad thing, but in my mind, LessWrong is about much more than a place for people to hang out and find entertainment in sharing joint interests. In my mind, LessWrong is a place where the community makes collective progress on valuable problems. It is an ongoing discussion where we all try to improve our understanding of the world and ourselves. It's not just play or entertainment– it's about getting somewhere. It's as much like an academic journal where people publish and discuss important findings as it is like an interest-based sub-Reddit.

And all this makes me really excited by the LessWrong 2018 Review. The idea of the review is to identify posts that have stood the test of time and have made lasting contributions to the community's knowledge and meaningfully impacted people's lives. It's about finding the posts that represent the progress we've made.

During the design of the review (valiantly driven by Raemon), I was apprehensive that people would not feel motivated by the process and put in the necessary work. But less than 24 hours after launching, I'm excited by the nominations and what people are writing in their nomination comments.

Looking at the list of nominations so far and reading the comments, I'm thinking "Yes! This is a list showing the meaningful progress the LW community has made. We are not just a news or entertainment site. We're building something here. This is what we're about. So many great posts that have helped individuals and community level up. Stuff I'm really proud of." There are posts about communication, society narratives, AI, history, honesty, reasoning and argumentation, and more: each crystallizing concepts and helping us think about reality better, make better decisions.

I am excited that by the end of the process we will be able to point to the very best content from 2018, and then do that for each year.

Of late, I've been thinking a lot about how to make LessWrong's historical corpus of great content more accessible: search/tagging/wikis. We've got a lot of great content that does stand the test of time. Let's make it easy for people to find relevant stuff. Let it be clear that LW is akin to a body of scientific work, not Reddit or FB. Let this be clear so that people feel enthused to contribute to our ongoing progress, knowing that if they write something good, it won't merely be read and enjoyed this week, it'll become part of the communal corpus to be built upon. Our project of communal understanding and self-improvement.

Communal Buckets

A bucket error is when someone erroneously lumps two propositions together, e.g. "I made a spelling error" automatically entails "I can't be a great writer"; they're in one bucket when really they're separate variables.

In the context of criticism, it's often mentioned that people need to learn to not make the bucket error of "I was wrong" or "I was doing a bad thing" -> "I'm a bad person". That is, you being a good person is compatible with making mistakes, being wrong, and causing harm, since even good people make mistakes. This seems right and true and a good thing to realize.

But I can see a way in which being wrong/making mistakes (and being called out for this) is upsetting even if you personally aren't making a bucket error. The issue is that you might fear that other people have the two variables collapsed into one. Even if you might realize that making a mistake doesn't inherently make you a bad person, you're afraid that other people are now going to think you are a bad person because they are making that bucket error.

The issue isn't your own buckets, it's that you have a model of the shared "communal buckets" and how other people are going to interpret whatever just occurred. What if the community/social reality only has a single bucket here?

We're now in the territory of common knowledge challenges (this might not require full-blown common knowledge, but each person knowing what all the others think). For an individual to no longer be worried about automatic entailment between "I was wrong -> I'm bad", they need to be convinced that no one else is thinking that. Which is hard, because I think that people do think that.

(Actually, it's worse, because other people can "strategically" make or not make bucket errors. If my friend does something wrong, I'll excuse it and say they're still a good person. If it's someone I already disliked, I'll take any wrongdoing as evidence of their inherent evil nature. There's a cynical/pessimistic model here where people are likely to get upset anytime something is shared which might be something they can be attacked with (e.g. criticism of their mistakes of action/thought), rightly or wrongly.)

"did a bad thing" -> "bad person" may not be a bucket error, it may be an actual inference (if "bad person" is defined as "person who does bad things"), or a useless category (if "bad person" has no actual meaning).

This question seems to be "fear of attribution error". You know you have reasons for things you do, others assume you do things based on your nature.

Yeah, I think the overall fear would be something like "I made a mistake but now overall people will judge me as a bad person" where "bad person" is above some threshold of doing bad. Indeed, each bad act is an update towards the threshold, but the fear is that in the minds of others, a single act will be generalized and put you over. The "fear of attribution error" seems on the mark to me.

It feels like the society I interact with dislikes expression of negative emotions, at least in the sense that expressing negative emotions is kind of a big deal - if someone expresses a negative feeling, it needs to be addressed (fixed, ideally). The discomfort with negative emotions and consequent response acts to a fair degree to suppress their expression. Why mention something you're a little bit sad about if people are going to make a big deal out of it and try to make you feel better, etc., etc.?

Related to the above (with an ambiguously directed causal arrow) is that we lack reliable ways to communicate about negative emotions with something like nuance or precision. If I imagine starting a conversation with a friend by saying "I feel happy", I expect to be given space to clarify the cause, nature, and extent of my happiness. Having clarified these, my friend will react proportionally. Yet when I imagine saying "I feel sad", I expect this to be perceived as "things are bad, you need sympathy, support, etc." and the whole stage of "clarify cause, nature, extent" is skipped, proceeding instead to a fairly large reaction.

And I wish it wasn't like that. I frequently have minor negative emotions which I think are good, healthy, and adaptive. They might persist for one minute, five minutes, half a day, etc. The same as with my positive emotions. When I get asked how I am, or I'm just looking to connect with others by sharing inner states, then I want to be able to communicate my inner state - even when it's negative - and to communicate it precisely. I want to be given space to say "I feel sad on the half-hour scale because relatively minor bad thing X happened" vs "I'm sad on the weeks scale because a major negative life event happened." And I want to be able to express the former without it being a big deal, just a normal thing: sometimes slightly bad things happen and you're slightly sad.

The specific details are probably gender-specific.

Men are supposed to be strong. If they express sadness, it's like a splash of low status and everyone is like "ugh, get away from me, loser, I hope it's not contagious". On the other hand, if they express anger, people get scared. So men gradually learn to suppress these emotions. (They also learn that words "I would really want you to show me your true feelings" are usually a bait-and-switch. The actual meaning of that phrase is that the man is supposed to perform some nice emotion, probably because his partner feels insecure about the relationship and wants to be reassured.)

Women have other problems, such as being told to smile when something irritates them... but this would be more reliably described by a woman.

But in general, I suppose people simply do not want to empathize with bad feelings; they just want them to go away. "Get rid of your bad feeling, so that I am not in a dilemma to either empathize with you and feel bad, or ignore you and feel like a bad person."

A good reaction would be something like: "I listen to your bad emotion, but I am not letting myself get consumed by it. It remains your emotion; I am merely an audience." Perhaps it would be good to have some phrase to express that we want this kind of reaction, because from the other side, providing this reaction unprompted can lead to accusations of insensitivity. "You clearly don't care!" (By feeling bad when other people feel bad we signal that we care about them. It is a costly signal, because it makes us feel bad, too. But in turn, the cost is why we provide all kinds of useless help just to make it go away.)

Just a thought: there's the common advice that fighting all out with the utmost desperation makes sense for very brief periods, a few weeks or months, but doing so for longer leads to burnout. So you get sayings like "it's a marathon, not a sprint." But I wonder if length of the "fight"/"war" isn't the only variable in sustainable effort. Other key ones might be the degree of ongoing feedback and certainty about the cause.

Though I expect a multiyear war which is an existential threat to your home and family to be extremely taxing, I imagine soldiers experiencing less burnout than people investing similar effort for a far-mode cause, let's say global warming, which may be happening but is slow, and where your contributions to preventing it are unclear. (Actual soldiers may correct me on this, and I can believe war is very traumatizing, though I will still ask how much they believed in the war they were fighting.)

(Perhaps the relevant variables here are something like Hanson's Near vs Far mode thinking, where hard effort for far-mode thinking more readily leads to burnout than near-mode thinking even when sustained for long periods.)

Then of course there's generally EA and X-risk, where burnout is common. Is this just because of the time scales involved, or is it because trying to work on x-risk is subject to so much uncertainty and paucity of feedback? Who knows if you're making a positive difference? Contrast with a Mario character toiling for years to rescue the princess he is certain is locked in a castle waiting. Fighting enemy after enemy, sleeping on cold stone night after night, eating scraps. I suspect Mario, with his certainty and much more concrete sense of progress, might be able to expend much more effort and endure much more hardship for much longer than is sustainable in the EA/X-risk space.

Related: On Doing the Improbable

A random value walks into a bar. A statistician swivels around in her chair, one tall boot unlaced and an almost full Manhattan sitting a short distance from her right elbow.

"I've been expecting you," she says.

"Have you been waiting long?" responds the value.

"Only for a moment."

"Then you're very on point."

"I've met enough of your kind that there's little risk of me wasting time."

"I assure you I'm quite independent."

"Doesn't mean you're not drawn from the same mold."

"Well, what can I do for you?"

"I was hoping to gain your confidence..."

Some Thoughts on Communal Discourse Norms

I started writing this in response to a thread about "safety", but it got long enough to warrant breaking out into its own thing.

I think it's important to people to not be attacked physically, mentally, or socially. I have a terminal preference over this, but also think it's instrumental towards truth-seeking activities too. In other words, I want people to actually be safe.

  • I think that when people feel unsafe and have defensive reactions, this makes their ability to think and converse much worse. It can push discussion from truth-seeking exchange to social war.
    • Here I think mr-hire has a point: if you don't address people's "needs" overtly, they'll start trying to get them covertly, e.g. trying to win arguments for the sake of protecting their reputation rather than trying to get to the truth. Doing things like writing hasty scathing replies rather than slow, carefully considered ones (*raises hand*), and worse, feeling righteous anger while doing so. Having thoughts like "the only reason my interlocutor could think X is because they are obtuse due to their biases" rather than "maybe they have a point I don't fully realize" (*raises hand*).
  • I want to avoid people being harmed and also have them feel like they won't be harmed (but in a truth-tracking way: if you're likely to be attacked, you should believe it). I also think that protective measures are themselves extremely risky for truth-seeking. There are legitimate fears here: a) people can use the protections to silence things they don't like hearing, b) it may be onerous and stifle honest expression to have to constrain one's speech, c) fear of being accused of harming others stifles expression of true ideas, d) these protections will get invoked in all kinds of political games.
  • I think the above are real dangers. I also think it's dangerous to have no protections against people being harmed, especially if they're not even allowed to object to being harmed. In such an arrangement, it becomes too easy to abuse the "truth-seeking free speech" protections to socially attack and harm people while claiming impunity. Some of the community's truth-seeking ability is lost to its becoming partly a vicious social arena.

I present the Monkey-Shield Allegory (from an unpublished post of mine):

Take a bunch of clever monkeys who like to fight with each other (perhaps they throw rocks). You want to create peace between them, so you issue them each a nice metal shield which is good at blocking rocks. Fantastic! You return the next day, and you find that the monkeys are hitting each other with the metal shields (turns out if you whack someone with a shield, their shield doesn’t block all the force of the blow and it’s even worse than fighting with rocks).

I find it really non-obvious what the established norms and enforced policies should be. I have guesses, including a proposed set of norms which are being debated in semi-private and should be shared more broadly soon. Separate from that question, I have somewhat more confidence in the following points and what they imply for individuals.

1. You should care about other people and their interests. Their feelings are 1) real and valuable, and 2) often real information about important states of the world for their wellbeing. Compassion is a virtue.

    • Even if you are entirely selfish, understanding and caring about other people is instrumentally advantageous for your own interests and for the pursuit of truth.

2. Even failing 1, you should try hard to avoid harming people (i.e. attacking them) and only do so when you really mean to. Harming someone accidentally, without meaning to, isn't worth it.

3. I suspect many people of possessing deep drives to always be playing monkey-political games, and these cause them to want to win points against each other however they can. Ways to do that include being aggressive, insulting people, baiting them, and all the standard behaviors people engage in on online forums.

  • These drives are anti-cooperative, anti-truth, and zero-sum. I basically think they should be inhibited and instead people should cultivate compassion and the ability to connect.
  • I think people acting in these harmful ways often claim their behaviors are fine by attributing them to some more defensible cause. I think there are defensible reasons for some behaviors, but I get really suspicious when someone consistently behaves in a way that doesn't further their stated aims.
  • People getting defensive are often correctly perceiving that they are being attacked by others. This makes me sympathetic to many cases of people being triggered.

4. Beyond giving up on the monkey-games, I think that being considerate and collaborative (including the meta-collaborative within a Combat culture) costs relatively little most of the time. There might be some upfront costs to change one's habits and learn to be sensitive, but long run the value of learning them pays off many times over in terms of being able to have productive discussions where no one is getting defensive; plus it seems intrinsically better for people to be having a good time. Pleasant discussions provoke more pleasant discussions, etc.

* I am not utterly confident in the correctness of 4. Perhaps my brain devotes more cycles to being considerate and collaborative than I realize (as this slowly ramped up over the years) and it costs me real attention that could go directly to object-level thoughts. Despite the heavy costs, maybe it is just better to not worry about what's going on in other people's minds and not expend effort optimizing for it. I should spend more time trying to judge this.

5. It is good to not harm people, but it is also good to build one's resilience and "learn to handle one's feelings." That is just plainly an epistemically virtuous thing to do. One ought to learn how to become defensive less often and also how to operate sanely and productively while defensive. Putting all responsibility for your psychological state onto others is damn risky. Also: 1) people who are legitimately nasty sometimes still have stuff worth listening to, and you don't want to give up on that; 2) sometimes it won't be the extraneous monkey-attack stuff that is upsetting but the core topic itself, and you want to be able to talk about that; 3) misunderstandings arise easily and it's easy to feel attacked when you aren't being, so some hardiness protects against misunderstandings rapidly spiralling into defensiveness and demon threads.

6. When discussing topics online, in text, and with people you don't know, it's very easy to be miscalibrated on intentions and the meaning behind words (*raises hand*). It's easy for there to be perceived attacks even when no attacks are intended (this is likely the result of a calibrated prior on the prevalence of social attacks).

a. For this reason, it's worth being a little patient and forgiving. Some people talk a bit sarcastically to everyone (which is maybe bad), but it's not really intended as an attack on you. Or perhaps they were plainly critical, but they were just trying to help.

b. When you are speaking, it's worth a little extra effort to signal that you're friendly and don't mean to attack. Maybe you already know that and couldn't imagine otherwise, but a stranger doesn't. What counts as an honest signal of friendly intent is anti-inductive: if we declare it to be something simple, the ill-intentioned may imitate it by rote, go about their business, and the signal will lose all power to indicate friendliness. But there are lots of cheap ways to indicate you're not attacking, that you have "good will". I think they're worth it.

In established relationships where the prior has become high that you are not attacking, less and less effort needs to be expended on signalling your friendly intent, and you can talk plainly, directly, and even a bit hostilely (in a countersignalling way). This is what my ideal Combat culture looks like, but it relies on having a prior and common knowledge of friendliness established. I don't think it works to just "declare it by fiat."

I've encountered pushback when attempting 6b. I'll derive two potential objections (which may not be completely faithful to those originally raised):

Objection 1: No one should be coerced into having to signal friendliness/maintain someone else's status/generally worry about what impact their saying true things will have. Making them worry about it impedes the ability to say true things which is straightforwardly good.

Response: I'm not trying to coerce anyone into doing this. I'm trying to make the case that you should want to do this of your own accord. That this is good and worth it and in fact results in more truth generation than otherwise. It's a good return on investment. There might be an additional fear that if I promote this as virtuous behavior, it might have the same truth-impeding effects as if it were policy. I'm not sure; I have to think about that last point more.

Objection 2: If I have to signal friendly intent when I don't mean it, I'd be lying.

Response: Then don't signal friendly intent. I definitely don't want anyone to pretend or go through the motions. However, I do think you should probably be trying to have honestly friendly intent. I expect conversations with friendly intent to be considerably better than those without (this is something of a crux for me here), so if you don't have it towards someone, that's really unfortunate, and I am pessimistic about the exchange. Barring exceptional circumstances, I generally don't want to talk to people who do not have friendly intent/desire to collaborate (even just at the meta-level) towards me.

What do I mean by friendly intent? I mean that you don't have goals to attack, win, or coerce. It's an exchange intended for the benefit of both parties, where you're not acting in a hostile way. I'm not pretending to discuss a topic with you when actually I think you're an idiot and want to demonstrate it to everyone; I'm not trying to get an emotional reaction for my own entertainment; I'm not just trying to win with rhetoric rather than actually expose my beliefs and cruxes; if I'm criticizing, I'm not just trying to destroy you; etc. As above, many times this is missing and it's worth trying to signal its presence.

If it's absent, i.e. you actually want to remove someone from the community or think everyone should disassociate from them, that's sometimes very necessary. In that case, you don't have friendly intent and that's good and proper. Most of the time though (as I will argue), you should have friendly intent and should be able to honestly signal it. Probably I should elaborate and clarify further on my notion of friendly intent.

There are notions related to friendly intent, such as good faith, and questions like whether you "respect your conversation partner" and think you might update based on what they say. I haven't discussed them, but should.

Over the years, I've experienced a couple of very dramatic yet rather sudden and relatively "easy" shifts around major pain points: strong aversions, strong fears, inner conflicts, or painful yet deeply ingrained beliefs. My post Identities are [Subconscious] Strategies contains examples. It's not surprising to me that these are possible, but my S1 says they're supposed to require a lot of effort: major existential crises, hours of introspection, self-discovery journeys, drug trips, or dozens of hours with a therapist.

Having recently undergone a really big one, I noted my surprise again. Surprise, of course, is a property of bad models. (Actually, the recent shift occurred precisely because of this line of thought: I noticed I was surprised and dug in, leading to an important S1 shift. Your strength as a rationalist and all that.) Attempting to come up with a model which wouldn't be as surprised, this is what I've got:

The shift involved S1 models. The S1 models had been there a long time, maybe a very long time. When that happens, they begin to seem like how the world just *is*. If emotions arise from those models, and those models are so entrenched they become invisible as models, then the emotions too begin to be taken for granted - a natural way to feel about the world.

Yet the longevity of the models doesn’t mean that they’re deep, sophisticated, or well-founded. They might be very simplistic, ignoring a lot of real-world complexity. They might have been acquired in formative years, before one learned much of one’s epistemic skill. They haven’t been reviewed, because one hardly noticed that they were beliefs/models rather than just “how the world is”.

Now, if you have a good dialog with your S1, if your S1 is amenable to new evidence and reasoning, then you can bring up the models in question and discuss them with your S1. If your S1 is healthy (and is not being entangled with threats), it will be open to new evidence. It might very readily update in the face of that evidence. “Oh, obviously the thing I’ve been thinking was simplistic and/or mistaken. That evidence is incompatible with the position I’ve been holding.” If the models shift, then the feelings shift.

Poor models held by an epistemically healthy "agent" can rapidly change when presented with the right evidence. This is perhaps not surprising.

Actually, I suspect that difficulty updating often comes from the S1 models and instances of the broccoli error: “If I updated to like broccoli then I would like broccoli, but I don’t like broccoli, so I don’t want that.” “If I updated that people aren’t out to get me then I wouldn’t be vigilant, which would be bad since people are out to get me.” Then the mere attempt to persuade that broccoli is pretty good / people are benign is perceived as threatening and hence resisted.

So maybe a lot of S1 willingness to update is very dependent on S1 trusting that it is safe, that you’re not going to take away any important, protective beliefs or models.

If there are occasions where I achieve rather large shifts in my feelings from relatively little effort, maybe it is just that I’ve gotten to a point where I’m good enough at locating the S1 models/beliefs that are causing inner conflict, good enough at feeling safe messing with my S1 models, and good enough at presenting the right reasoning/evidence to S1.

Hypothesis that becomes very salient from managing the LW FB page: "likes and hearts" are a measure of how much people already liked your message/conclusion*.

*And also like how well written/how alluring a title/how actually insightful/how easy to understand, etc. But it also seems that the most popular posts are those which are within the Overton window, have less inferential distance, and a likable message. That's not to say they can't have tremendous value, but it does make me think that the most popular posts are not going to be the same as the most valuable posts + optimizing for likes is not going to be same as optimizing for value.

**And maybe this seems very obvious to many already, but it just feels so much more concrete when I'm putting three posts out there a week (all of which I think are great) and seeing which get the strongest response.

***This effect may be strongest at the tails.

****I think this effect would affect Gordon's proposed NPS-rating too.

*****I have less of this feeling on LW proper, but definitely far from zero.

Narrative Tension as a Cause of Depression

I only wanted to budget a couple of hours for writing today. Might develop further and polish at a later time.

Related to and an expansion of Identities are [Subconscious] Strategies

Epistemic status: This is non-experimental psychology, my own musings. Presented here is a model derived from thinking about human minds a lot over the years, knowing many people who’ve experienced depression, and my own depression-like states. Treat it as a hypothesis; see if it matches your own data and generates helpful suggestions.

Clarifying “narrative”

In the context of psychology, I use the term narrative to describe the simple models of the world that people hold to varying degrees of implicit vs explicit awareness. They are simple in the sense of being short, being built of concepts which are basic to humans (e.g. people, relationships, roles, but not physics and statistics), and containing unsophisticated blackbox-y causal relationships like “if X then Y, if not X then not Y.”

Two main narratives

I posit that people carry two primary kinds of narratives in their minds:

  • Who I am (the role they are playing), and
  • How my life will go (the progress of their life)

The first specifies the traits they possess and actions they should take. It’s a role to be played. It’s something people want to be for themselves and want to be seen to be by others. Many roles only work when recognized by others, e.g. the cool kid.

The second encompasses wants, needs, desires, and expectations. It specifies a progression of events and general trajectory towards a desired state.

The two narratives function as a whole. A person believes that by playing a certain role they will attain the life they want. An example: a 17 year-old with a penchant for biology decides they are destined to be a doctor (perhaps there are many in the family); they expect to study hard for the SATs, go to pre-med, go to medical school, become a doctor; once they are a doctor they expect to have a good income, live in a nice house, attract a desirable partner, be respected, and be a good person who helps people.

The structure here is “be a doctor” -> “have a good life” and it specifies the appropriate actions to take to live up to that role and attain the desired life. One fails to live up to the role by doing things like failing to get into med school, which I predict would be extremely distressing to someone who’s predicated their life story on that happening.

Roles needn’t be professional occupations. A role could be “I am the kind, fun-loving, funny, relaxed person who everyone loves to be around”, it specifies a certain kind of behavior and precludes others (e.g. being mean, getting stressed or angry). This role could be attached to a simple causal structure of “be kind, fun-loving, popular” -> “people like me” -> “my life is good.”

Roles needn’t be something that someone has achieved. They are often idealized roles towards which people aspire, attempting to always take actions consistent with achieving those roles, e.g. not yet a doctor but studying for it, not yet funny but practicing.

I haven’t thought much about this angle, but you could tie in self-worth here. A person derives their self-worth from living up to their narrative, and believes they are worthy of the life they desire when they succeed at playing their role.

Getting others to accept our narratives is extremely crucial for most people. I suspect that even when it seems like narratives are held for the self, we're really constructing them for others, and it's just much simpler to have a single narrative than say "this is my self-narrative for myself" and "this is my self-narrative I want others to believe about me" a la Trivers/Elephant in the Brain.

Maintaining the narrative

A hypothesis I have is that one of the core ways people choose their actions is with reference to which actions would maintain their narrative. Further, most events that occur to people are evaluated with reference to whether they help or damage the narrative. How upsetting is it to be passed over for a promotion? It might depend on whether your self-narrative is “high-achiever” or “team-player and/or stoic.”

Sometimes it’s just about maintaining the how my life will go element: “I’ll move to New York City, have two kids and a dog, vacation each year in Havana, and volunteer at my local Church” might be a story someone has been telling themselves for a long time. They work towards it and will become distressed if any part of it starts to seem implausible.

You can also see narratives as specifying the virtues that an individual will try to act in accordance with.

Narrative Tension

Invariably, some people encounter difficulty living up to their narratives. What of the young sprinter who believes their desired future requires them to win Olympic Gold yet is failing to perform? Or the aspiring parent who in their mid-thirties is struggling to find a co-parent? Or the person who believes they should be popular, yet is often excluded? Or the start-up founder wannabe who’s unable to obtain funding yet again for their third project?

What happens when you are unable to play the role you staked your identity on?

What happens when the life you’ve dreamed of seems unattainable?

I call this narrative tension. The tension between reality and the story one wants to be true. In milder amounts, when hope is not yet lost, it can be a source of tremendous drive. People work longer and harder, anything to keep the dream alive.

Yet if the attempts fail (or it was already definitively over), then they have to reconcile themselves to the fact that they cannot live out that story. They are not that person, and their life isn’t going to look like that.

It is crushing.

Heck, even just the fear that this might be the case, when their narrative could in fact still be entirely achievable, can be crushing.

Healthy and Unhealthy Depression

Related: Eliezer on depression and rumination

I can imagine that depression could serve an important adaptive function when it occurs in the right amounts and at the right times. A person confronted with the possible death of their narratives either: a) reflects and determines they need to change their approach, or b) grieves and seeks to construct new narratives to guide their life. This is facilitated with a withdrawal from their normal life and disengagement from typical activities. Sometimes the subconscious mind forces this on a person who otherwise would drive themselves into the ground vainly trying to cling to a narrative that won’t float.

Yet I could see this all failing if a person refuses to grieve and refuses to modify their narrative. If their attitude is “I’m a doctor in my heart of hearts and I could never be anything else!” then they’ll fail to consider whether being a dentist or nurse or something else might be the next best thing for them. A person who’s only ever believed (implicitly or explicitly) that being the best is the only strategy for them to be liked and respected won’t even ponder how it is other people who aren’t the best in their league ever get liked or respected, and whether they might do the same.

Depressed people think things like:

  • I am a failure.
  • No one will ever love me.
  • I will never be happy.

One lens on this might be that some people are unwilling to give up a bucket error whereby they’re lumping their life-satisfaction/achievement of their values together with the achievement of a given specific narrative. So once they believe the narrative is dead, they believe all is lost.

They get stuck. They despair.

It’s despair which I’ve begun to see as the hallmark of depression, present to some degree or other in all the people I’ve personally known to be depressed. They see no way forward. Stuck.

[Eliezer's hypothesis of depressed individuals wanting others to validate their retelling of past events seems entirely compatible with people wanting to maintain narratives and seeking indication that others still accept their narrative, e.g. of being good person.]

Narrative Therapy

To conjecture on how the models here could be used to help, I think the first order is to try to uncover a person’s narratives: everything they model about who they’re supposed to be and how their life should look and progress. The examples I’ve given here are simplified. Narratives are simple relative to full causal models of reality, but a person’s self-narrative will still have many pieces, distributed over parts of their mind, often partitioned by context, etc. I expect doing this to require time, effort, and skill.

Eventually, once you’ve got the narrative models exposed, they can be investigated and supplemented with full causal reasoning. “Why don’t we break down the reasons you want to be a doctor and see what else might be a good solution?” “Why don’t we list out all the different things that make people likable, and see which you might be capable of?”

I see CBT and ACT each offering elements of this. CBT attempts to expose many of one’s simple implicit models and note where the implied reasoning is fallacious. ACT instructs people to identify their values and find the best way to live up to them, even if they can’t get their first choice way of doing so, e.g. “you can’t afford to travel, but you can afford to eat foreign cuisine locally.”

My intuition though is that many people are extremely reluctant to give up any part of their narrative and very sensitive to attempts to modify any part of it. This makes sense if they’re in the grips of a bucket error where making any allowance feels like giving up on everything they value. The goal of course is to achieve flexible reasoning.

Why this additional construct?

Is it really necessary to talk about narratives? Couldn’t I have just talked about what people want and their plans? Of course people get upset when they fail to get what they want and their plans fail!

I think the narratives model is important for highlighting a few elements:

  1. The kind of thinking used here is very roles-based in a very deep way: what kind of person I am, what do I do, how do I relate to others and they relate to me.
  2. The thinking is very simplistic, likely a result of originating heavily from System 1. This thinking does not employ a person’s full ability to causally model the world.
  3. Because of 2), the narratives are much more inflexible than a person’s general thinking. Everything is all or nothing, compromises are not considered, it’s that narrative or bust.

This is aligned with my thoughts on the importance of narratives, especially personal narratives.

The best therapists are experts at helping pull out your stories - they ask many, many questions and function as working memory, so you can better see the shapes of your stories and what levers exist to mold them differently.

(We have a word for those who tell stories - storyteller - but do we have a word for experts at pulling stories out of others?)

A related concept in my view is that of agency, as in how much I feel I am in control of my own life. I am not sure what is the cause and what is the effect, but I have noticed that during periods of depression I feel very little agency and during more happy periods I feel a lot more agency over my life. Often, focusing on the things I can control in my life (exercise, nutrition, social activities) over things I can't (problems at work) allows me to recover from depression a lot faster.

What happens when the life you’ve dreamed of seems unattainable?

This can also be a standard, what someone considers a bare minimum, whether it's x amount of good things a, b, and c, or x amount of growth in areas a, b and c.

Great quote from Francis Bacon (Novum Organum Book 2:8):

Don’t be afraid of large numbers or tiny fractions. In dealing with numbers it is as easy to write or think a thousand or a thousandth as to write or think one.

Converting this from a Facebook comment to LW Shortform.

A friend complains about recruiters who send repeated emails saying things like "just bumping this to the top of your inbox" when they have no right to be trying to prioritize their emails over everything else my friend might be receiving from friends, travel plans, etc. The truth is they're simply paid to spam.

Some discussion of repeated messaging behavior ensued. These are my thoughts:

I feel conflicted about repeatedly messaging people. All the following being factors in this conflict:

  • Repeatedly messaging can be making yourself an asshole that gets through someone's unfortunate asshole filter.
  • There's an angle from which repeatedly, manually messaging people is a costly signal bid that their response would be valuable to you. Admittedly this might not filter in the desired ways.
  • I know that many people are in fact disorganized and lose emails or otherwise don't have systems for getting back to you such that failure to get back to you doesn't mean they didn't want to.
  • Other people have extremely good systems. I'm always impressed by the super busy, super well-known people who get back to you reliably after three weeks. Systems. I don't always know where someone falls between "has no systems, relies on other people to message repeatedly" vs "has impeccable systems but due to volume of emails will take two weeks."
    • The overall incentives are such that most people probably shouldn't generally reveal which they are.
  • Sometimes the only way to get things done is to bug people. And I hate it. I hate nagging, but given other people's unreliability, it's either you bugging them or a good chance of not getting some important thing.
    • A wise, well-respected, business-experienced rationalist told me many years ago that if you want something from someone, you should just email them every day until they do it. It feels like this is the wisdom of the business world. Yet . . .
  • Sometimes I sign up for a free trial of an enterprise product and, my god, if you give them your email after having expressed the tiniest interest, they will keep emailing you forever with escalatingly attention-grabby and entitled subject titles. (Like recruiters but much worse.) If I was being smart, I'd have a system which filters those emails, but I don't, and so they are annoying. I don't want to pattern match to that kind of behavior.
    • Sometimes I think I won't pattern match to that kind of spam because I'm different and my message is different, but then the rest of the LW team cautions me that such differences are in my mind but not necessarily in the mind of the recipient whom I'm annoying.
    • I suspect that as a whole they lean too far in the direction of avoiding being assholes at the risk of not getting things done, while I'm biased in the reverse direction. I suspect this comes from my most recent previous work experience being in the "business world" where ruthless, selfish, asshole norms prevail. It may be I dial it back from that but still end up seeming brazen to people with less immersion in that world; probably, overall, cultural priors and individual differences heavily shape how messaging behavior is interpreted.

So it's hard. I try to judge on a case by case basis, but I'm usually erring in one direction or another with a fear in one direction or the other.

A heuristic I heard in this space is to message repeatedly but with an exponential delay factor each time you don't get a response, e.g. message again after one week, if you don't get a reply, message again after another two weeks, then four weeks, etc. Eventually, you won't be bugging whoever it is.
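The backoff heuristic above can be sketched in a few lines of Python (the function name and default parameters are my own illustration, not something from the original comment):

```python
from datetime import date, timedelta

def followup_schedule(first_sent, base_delay_days=7, factor=2, max_followups=4):
    """Dates on which to send follow-up messages, doubling the wait
    after each unanswered message (e.g. 7, 14, 28, 56 days)."""
    schedule = []
    when = first_sent
    delay = base_delay_days
    for _ in range(max_followups):
        when = when + timedelta(days=delay)  # wait `delay` days after the last message
        schedule.append(when)
        delay *= factor                      # back off exponentially
    return schedule
```

With a first email on 2019-01-01, this yields follow-ups roughly one week, three weeks, seven weeks, and fifteen weeks later, so the sender quickly stops bugging a non-responder.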


For my own reference.

Brief timeline of notable events for LW2:

  • 2017-09-20 LW2 Open Beta launched
  • (2017-10-13 There is No Fire Alarm published)
  • (2017-10-21 AlphaGo Zero Significance post published)
  • 2017-10-28 Inadequate Equilibria first post published
  • (2017-12-30 Goodhart Taxonomy Publish) <- maybe part of January spike?
  • 2018-03-23 Official LW2 launch and switch-over to the new site.

In parentheses events are possible draws which spiked traffic at those times.

Failed replications notwithstanding, I think there's something to Fixed vs Growth Mindset. In particular, Fixed Mindset leading to failure being demoralizing, since it is evidence you are a failure, rings true.