This is a dialogue Elizabeth and I wrote about our journeys trying to live well while caring about the world not ending. We used a chess timer to govern who was talking at a given time and to keep the pace snappy and time-boxed.
We talk about separating whether we were worthy of existing from whether we were helping with x-risk, relating to non-x-risk culture, and what concrete things are part of our conception of the good life.
My thinking on this has been influenced a lot by something @Raemon said: Imagine an all-loving but ineffectual god. This god loves you no matter what, even if you sit in a burning house while it collapses. But his love will not save you from the consequences of your house burning to the ground.
[we were both raised Christian but neither of us is now; God being real is not a crux for this to work for me or for other people I've told it to]
This helped me get in a frame where working to avert doom was a choice, with consequences, but not in and of itself a moral act necessary to prove myself a worthy being. God would love me even if I did nothing but play video games. But love wouldn't provide me a house.
I am hearing something related to decoupling my self-worth from choosing to act in the face of x-risk (or any other moral action). Does that sound right?
self-worth isn't the exact right label but I think you've got the basic idea
It reminds me that recently, I think I've shifted from a lot of my motivation to work on x-risk being about being special, and contributing to the arc of the world, to actually just wanting good futures to happen. I think this came after a period where for the first time I actually felt free not to work on x-risk, because I wouldn't be bad or worthless even if I didn't.
Yeah I feel like this is the same genre.
Can I ask how you came to that revelation, that you weren't bad if you didn't work on x-risk?
So I think this came after my collapse in the belief in what I was doing at work. The collapse had multiple parts: the collapse that what I was doing was helping the org, that the leadership had some sort of qualitatively different insight into strategy than I did, and the collapse in belief in the object-level research directions [this part is somewhat misnamed, but I think it's a fine pointer for now].
After that, I was like "oh, this thing I've been angling to do for about a decade basically happened and then was not good. I guess I really can't go backwards", and then I guess I rebuilt some meaning structures. I think it helped that I became much freer to have a low P(doom), rather than feeling obliged to have a high one.
One reason I think it needs to be okay on a moral level to not be actively doing good is that it needs to be okay on a practical level to not have been doing good. We get better results if people are able to say "oh, that wasn't working, I'll stop", and not immediately need to start a new thing that's definitely going to work. There may be nothing to do about the burning house and it's important you be allowed to recognize that fact.
One thing I think is weird is that I've seen a few people (including me) go from "ambition for social reasons" to "ambition for internal reasons", which might indicate the transition was fake, but I don't think it (always) is.
[one of my favourite old pieces of rationality lore is Eliezer writing something like ~"I noticed myself trying to believe that asking for donations at the Singularity Summit wouldn't be good, because that would have made not asking at previous ones a bad idea. But then I explicitly noticed that and was like 'actually, that implies low-hanging fundraising fruit'"]
It definitely seems worth tracking that the transition might have been fake!
But also the chance that it isn't. I think one reason people don't want to disentangle their motivational structure is they're worried they'll give up something they care about. If "you'll go through this long process and still want the things you actually want, with a better approach and less baggage" is a live option, I think people will be more willing to look into the abyss.
But also being able to get that good result is dependent on having the option to really change your mind, and that's helped a lot by having an imaginary god that loves you no matter what the outcome. Like, Christians have some good tech here.
Yeah totally. I'm not sure I would specifically recommend it to people trying to solve the social motivation thing. I think that I would like people to have full and flourishing lives that would be good for them even if they decide to change their life track. I think that a lot of normie lifepaths have some wisdom about being 'antifragile'-ish in this way to changing beliefs.
Heh, I hadn't read this part when I wrote the thing about 'normie lifepaths'
I also have a "social motivation is fine" soapbox. I think the biggest danger is people choosing a goal for social reasons and then, because they're in an environment that shames social motivation, fake internalizing it.
Oh another thing that helped me a lot here: I stopped noticing people were cool and being like "gahhhh I need to be at least as cool as them in the dimension of their coolness" and started being more like "that person seems cool! that's nice. Anyway, what is the next step in the actualisation of kave [or some other frame of my personal growth and development]". Planecrash helped a lot here
I like the point about shaming social motivation! I haven't thought about it quite that way before
Woah, how did planecrash specifically help? Like I was prepping to try and tease out whatever illegible things set the stage for that realization and then you were like "bam, fanfiction"
Yeah, so I think I got different things out of Planecrash than many people. I think two main strands were really helpful for me:
1. Dath Ilan is pretty explicitly into this 'you are you, how can you grow from there and not surrender your you-ness [or at least view it as a great sacrifice]?', and I think Dath Ilan kind of felt like a superstructure that loved me and wanted good things for me, similar to the loving God you mentioned.
2. I had gained more freedom to disagree with AI thinkers I respected, and every time I noticed Eliezer making a claim that I disagreed with I felt like I regained some mana (despite the fact that I continue to like the man and his thought!)
One thing that helped me a lot was going to an old friend's wedding weekend. He's... uh, normie isn't correct at all, but in a subculture with very different values. And after one dinner it snapped into place for me that I wasn't stupid or lazy; I had selected for hanging out with the smartest, most ambitious people who would tolerate me, and succeeded admirably. By any normal standard I was quite smart and doing cool things.
And then I looked at the people I admired/envied feeling actively bad that they weren't Elon Musk, and it clicked.
That was 5 years ago and it seems to be a stable solution.
I had kind of an obverse experience, where I went to hang out with my elite-ish normie friends when I was a bit burned out. And I was like 'huh, these people are as good at boggling as me, and they have more energy than me, and they care about the Latest News but they seem free to have thoughts even though they care about that'. I think that made me freer not to grind myself into parts of trying to be an x-risker that weren't working for me, and to actually consider that the habits around me were failing me rather than me failing them.
Also worth sharing a friend's anecdote. He was very religious (and intellectually so) until he found a new social outlet, and could see a future without God but with social support. And the same outlet was there when he got disillusioned with a certain part of rationality. I think having social and other support not dependent on people viewing you as Valuable To The Cause is very helpful, although TBH I've not been great at following that advice.
Yeah, I've seen a pattern with a couple of friends that they regretted losing ties to people who liked them completely independently of their Value To The Cause (nothing that seems easy to say without being a bit too personal).

I'd be curious to hear how you orient on a concrete level to the good life and x-risk now you've moved through those stages?
To be a bit more specific: what does the good life look like? What things comprise it and what tradeoffs if any do you make with it and your do-gooding life?
This is tricky, because a lot of the things we call "the good life" are I think side effects of the actual good life, and can't be pursued directly or will fail if they are.
Some specific things I think are important to me:
("can't be pursued directly or will fail if they are" 🔥)
Thanks! This is pretty helpful for me to see. I think one aspect that I find quite interesting is the interplay between the good life and the professional life. I think it would be easy to imagine that one's do-gooding life and one's professional life together form one magisterium and one's good life forms a separate one. But, for me as well, some of the ways I want to shape my work life feel generated from the scent of the good life more than anything else.
Some other things that intersect with your list but aren't necessarily the same as elements of it:
Oh man, financial security is so good, A++ would recommend.
I think one of its biggest benefits is that it frees you up to take more risks. You can leave an abusive job, or launch a really speculative project, or say unpopular things that might get you fired.
Not to mention chase down speculative medical care, buy a nicer apartment with nicer furniture, have that car that's so important to me...
(I'm not a travel person but it seems really important to the rest of you)
All of this prompts in me something I'm not settled in yet: I think a lot of the things that I care about in the good life are fairly standard stories that the overculture tells me are worth striving for. There are at least 3 hypotheses for why that would be the case: (1) the overculture has metis about how to be a human, (2) things that rich fancy people do are good because they get to pick from a broader selection than the rest of us and they pick the best stuff, and (3) the overculture is telling me this stuff for reasons unrelated to its goodness-for-me (like those things being some sort of attractor state, or because me pursuing them would benefit the overculture).

For example, I think there is something very appealing about the standardish suburban lives that some of my child-having friends are pursuing. But some local folks have suggested "that particular picture is very specific and expensive and draws a lot of power by rooting into deep and important things, like raising families, but the appeal is stolen from that valuable thing".

I don't know if that generates any reactions in you? Fine if not
I think this is one of those weird things where social pressure can direct you towards the right thing but corrupt your internal prioritization process in ways that kind of ruin it.
[I have an anecdote for this but the best one feels a little tangential]
Can you unpack your first sentence a little? A couple more sentences would help me understand I think
I think there's a wide swath of things that some people legitimately want, and would continue to want in the absence of social pressure. But the social pressure gets to them first, so they miss out on:
E.g. going to college because you've thought about it, know what you want, and have a metaplan for updating the plan as you get more information, versus going because you're supposed to.
Or, god forbid, having kids because you're supposed to, without thinking about how to make that situation good for you and them. I think people are way more likely to e.g. accept a shitty partner when they're succumbing to social pressure to have kids, even if they would have, in a vacuum, developed a strong internal drive to have kids. The social pressure makes them less responsive to specifics.
Nice, I like this model quite a bit! I do think one thing that is good about these standard social narratives is that they encourage you to make tradeoffs, or move forward and not get stuck on perfection or analysis paralysis. (Your comment about accepting a shitty partner made me think this, though I don't think people should accept shitty partners! But I do think my friends who are more likely to have romantic loneliness that they regret followed less traditional life paths and I think practiced more selection and less cultivation in their romantic relationships).
I conjecture that while social pressure may push people to make trade-offs, it rarely pushes them to think about trade-offs in a nuanced and strategic way. And sometimes that works out, because the social pressure will probably be towards the solution that works more often or creates positive externalities. But it will miss individual variation.
I posit that even if it's true that normal people are happier than weird people, it doesn't follow that you can make weird people happy by having them act normal (much less pressuring them to act normal). The way out is mostly through.
But which parts of standard dreams make them ache is a good guide to figuring out what they want.
Overall, I think this seems like a pretty wise take. I would like, in the future, to think in more detail about when and whether weird folks can be helped by acting more normal, and to generate the thesis of Reformed Normieism, rather than the repressive Orthodox Normieism.
Yeah I think this is a great topic. I have a bunch of half-written blog posts on "so you've decided your organization needs to be Weird: what now?"
I'm really curious about the question of how to go about one's life now if you take AI risk seriously. I'm excited to read this article and really liked Zvi's AI: Practical Advice for the Worried. Anyone have other articles/resources/strategies on this topic that they recommend?
I feel like this pairs pretty well with the concept of the inner child in psychology, where you basically give your own "inner child", which represents your emotions and very basic needs, a voice and try to take care of it. But on a higher level you still make rational decisions. In this context it would basically be "be your own god" I suppose? Accept that your inner child is scared of x-risk, and then treat yourself like you would a child that is scared like that.
It's kind of interesting how you focus on the difference between inner needs and societal needs. Personally I have never felt a big incentive to follow societal needs, and while I can't recommend that (it does not help mental health), I do not feel the x-risk as much as others. I know it's there, I know we should work against it and I try to dedicate my work to fighting it, but I don't really think about it emotionally?
I personally think a bit along the lines of "whatever happens happens, I will do my best and not care much about the rest". And for that it's important to properly internalize the goals you have. Most humans' main goal is a happy life somehow. Lowering x-risk is important for that, but so is maintaining a healthy work-life balance, mental health, physical health... They all work towards the big goals. I think that's important to realize on a basic level.
And lastly, two more small questions: what are Wave and Planecrash? And how do you define normie? I feel like that's kind of a tough term.
I suspect Wave refers to this company: https://www.wave.com/en/ (they are connected to EA)
Planecrash is a glowfic co-written by Yudkowsky: https://glowficwiki.noblejury.com/books/planecrash
For Planecrash / Project Lawful there's a LW writeup here.
I define "normie" as "not of the relevant subculture", which changes depending on the speaker and context
I suppose it could be defined as being further away from the self, in one's own world view, than a certain radius permits? That makes sense. I have mostly seen this term in 4chan texts tbh, which is why I dislike it. I feel like normie normally refers to people who are seen as "more average" than oneself, which is a flawed concept in itself, as human properties are too sparse
I guess it can be narrowed to some more specific axis, like world view in terms of x-risk or politics, in which case our two protagonists here care about it more than average and the distance to the mean is quite far. In general I would be careful with the word normie tho.
Got a suggestion for an alternative? I also don't love the negative connotation of "normie"
Well, since the category we want to describe here simply does not exist (it's more like a set of people outside your own bubble, a negated set rather than a clearly definable one), there are a few options.
Firstly, maybe just "non-science person", or "non-AI person". Defining people by what they are not is also not great tho.
Secondly, we could embrace the "wrongness" of the average person and just say... average person. Still wrong, but at least not negative. And probably the correct meaning gets conveyed, which is not assured with the first option.
The last option, probably the most correct but also impractical one, is to simply name what aspect you refer to. In this case probably "people who do not follow x-risks" would be most accurate.
But I despise getting told what to call certain groups because someone could get a bit butthurt, so personally I stick with "average person" - just with the knowledge that the average person does not exist, and if I think the other person doesn't know that, I convey it.