
Disclaimer: Nate gave me some life advice at EA Global; I thought it was pretty good, but it may or may not be useful for other people. If you think any of this would be actively harmful for you to apply, you probably shouldn't.

# Notice subtle things in yourself

This includes noticing things like confusion, frustration, dissatisfaction, enjoyment, etc. For instance, if you're having a conversation with somebody and they're annoying you, it's useful to notice that you're getting a little frustrated before the situation gets worse.

A few weeks ago my colleagues and I wanted to do something fun, and decided to play laser tag at our workplace. However, we couldn't find the laser tag guns. As I began to comb the grounds for the guns for the second time I noticed that I felt like I was just going through the motions, and didn't really expect my search to be fruitful. At this point I stopped and thought about the problem, and realized that I had artificially constrained the solution space to things that would result in us playing laser tag at the office, rather than things that would result in us having fun. So I stopped looking for the guns and we did an escape room instead, which made for a vastly more enjoyable evening.

If you're not yet at the point where you can notice unsubtle things in yourself, you can start by working on that and move up from there.

# Keep doing the best thing, even if you don't have a legible story for why it's good

Certainly the actions you're taking should make sense to you, but your reasoning doesn't have to be 100% articulable, and you don't need to justify yourself in an airtight way. Some things are easier to argue than other things, but this is not equivalent to being more correct. For instance, I'm doing AI alignment stuff, and I have the option of reading either a textbook on linear algebra or E.T. Jaynes' probability theory textbook.

Reading about linear algebra is very easy to justify in a way that can't really be disputed; it's just obviously true that linear algebra is directly and widely applicable to ML. It's harder to justify reading Jaynes to the same level, even though I think it's a pretty sound thing to do (I think I will become better at modeling the world, learn about various statistical pitfalls, absorb Jaynes' philosophical and historical insights, etc.), and in fact a better use of my time right now than learning linear algebra in more depth.

This bit of advice is mostly about not needing to be able to justify yourself to other people (e.g. friends, family) to take the best visible action. However, it is also the case that you might have internalized social pressure such that you feel the need to justify a course of action to yourself in a way that would be legible to other people/justifiable in a social setting. This is also unnecessary.

Relatedly, you don't need to "get" motivation; you can just continue to take the best action you can see.

# Don't go insane

Apparently a good number of people in Nate's social circle have gone insane - specifically, they have taken facts about the world (e.g. the universal prior being malign) as "invitations" to go insane. He also noted that many of these people took LSD prior to going insane, and that this may have "loosened" something in their minds.

This may be a particular danger for people who value taking ideas seriously as a virtue, because they might go full throttle on an idea that conflicts with common sense, and end up insane as a result. When asking a non-Nate for feedback on this post, I was told that some concrete things that people have taken as "invitations" to go insane are: decision theory (specifically acausal trade), things thought while meditating, and the idea that minds are made of "parts" (e.g. sub-agents).

Nate says that the way you avoid this pitfall is that when you hear the "siren call of insanity," you choose to stay sane instead. This seems vaguely reasonable to me, but it's not very crisp in my mind and I don't quite know what it looks like to apply this in practice.

# Reject false dichotomies

Don't epistemically commit to the best option you can currently see, especially not in a way that would permanently alter you/prevent you from backtracking later. For instance, if the only two moral philosophies you're aware of are Christianity and nihilism, and you decide that God doesn't actually exist (and therefore Christianity is obviously wrong), you don't have to go full throttle down the nihilism path.

Don't throw the baby out with the bathwater - in fact, don't lose any part of the baby. If all the epistemic options seem to violate something important to you, don't just blast through that part of your values. Apparently this helps with not going insane.

Nate told me that the most important skill for doing research is not thinking you know things when you actually don't. This is closely tied to noticing confusion. It's also related to "learning (important) things carefully"; for instance, if you're teaching yourself physics, you want to make sure you truly understand the material you're learning, and move at a pace such that you can do that (rather than going through it quickly but haphazardly).


I think I might just commit to staying away from LSD and Mind Illuminated-style meditation entirely. Judging by the frequency of word-of-mouth accounts like this, the chance of going a little or a lot insane while exposed to them seems frighteningly high.

I wonder why these long-term effects seem relatively sparsely documented. Maybe you have to take the meditation really seriously and practice diligently for this stuff to have a high chance of happening, and people in this community often do that, but the average study population doesn't?

There can also be factors in this community that make people both unusually likely to go insane and to also try things like meditation and LSD in an attempt to help themselves. It's a bit hard to say given that the post is so vague on what exactly "insanity" means, but the examples of acausal trade etc. make me suspect that it's related to a specific kind of anxiety which seems to be common in the community.

That same kind of anxiety also made me (temporarily) go very slightly crazy many years ago, when I learned about quantum mechanics (and I had neither done psychedelics nor had I yet started meditating at the time), and it feels like the same kind of thing that causes the occasional person to freak out about Roko's Basilisk. I think those kinds of people are particularly likely to be drawn to LW, because they subconsciously see rationality as a way to try to control their anxiety, and that same thing causes them to seek out psychedelics and meditation. And then rationality, meditation, and psychedelics are all things that might also dismantle some of the existing defenses their mind has against that anxiety.

I suspect it's related to the fact that we've gotten ourselves off-distribution from the emergencies that used to be common, and thus AI and the Singularity are interpreted as immediate emergencies when they aren't.

I'll also make a remark that LW focuses on the tails, so things tend to be more extreme than usual.

Yeah, I think people who are high in abstract thinking and believing their beliefs and anxious thought patterns should really stay away from psychedelics and from leaning too hard into their runaway thought trains. Also, try to stay grounded with people and activities that don't send you off into abstract thought space. Spend some time with calm normal people who look at the world in straightforward ways, not only creative wild thinkers. Spend time doing hobbies outdoors that use your physical body and attention in satisfying ways, keeping you engaged enough to stay out of your head.


people who are high in abstract thinking and believing their beliefs and anxious thought patterns

I think someone who fits this description can avoid any risk of 'going insane' while still using their abilities for good. For example, in my own case (I think the first two describe me, and the third one sort of does), if I were to apply these suggestions...

try to stay grounded with people and activities that don't send you off into abstract thought space. Spend some time with calm normal people who look at the world in straightforward ways, not only creative wild thinkers. Spend time doing hobbies outdoors that use your physical body and attention in satisfying ways, keeping you engaged enough to stay out of your head.

then my creative output related to alignment would probably drop significantly.

(I agree with not trying psychedelics, though. Even e.g. nootropics and ADHD meds are things I'm really cautious with, because I don't wanna mess up some part of my process.)

For anyone reading this post in the future, I'd instead suggest doing things meant to help you channel your ability: being conscious and reflective about your thoughts, revisiting basic rationality techniques and theory occasionally, noticing privileged hypotheses (while still allowing yourself to ponder them if you're just doing it because you find it interesting; I think letting one's mind explore is also important for generating important ideas and making connections).

"Please don't throw your mind away" in this other sense of counteracting your tendency to think abstractly; you might be able to do a lot of good with it.

I think your suggestions are good as well. To be clear: I didn't mean that I think one should spend a large fraction of their time just 'staying grounded'. More like, a few hours a week.

The way I model attention is that it is (metaphorically) a cirrus (a tentacle-like appendage, in the biological sense) of thought that you extend into the world and then retract into your mind. If you leave it out for too long, it gets tangled up in the forest of all knowledge; if you keep it inside for too long, you become unable to respond to your environment.

People who are extremely online tend to send their attention cirrus into the internet, where it is prone to become a host to memes that use addiction to bypass your mind's typical defenses against infection.

Anything that you really enjoy to the point of losing self-control comes under the category of being a disease: whether that's social media, programming, fiction, gaming, tentacle pornography, research, or anime.

Even if they were somehow extremely beneficial normally (which is fairly unlikely), any significant risk of going insane seems much too high. I would posit they have such a risk for exactly the same reason - when using them, you are deliberately routing around very fundamental safety features of your mind.

The MBSR studies are two-month interventions. They are not going to have the same strong effects as people meditating seriously for years.

On the other hand, those studies that investigate people who meditate a lot are often from a monastic setting where people have teachers which is quite different from someone meditating without a teacher and orienting themselves with the Mind Illuminated.

Possible selection effect?

Maybe meditation moves people in a random direction. Those who get hurt, mostly stop meditating, so you won't find many of them in the "meditating seriously for years" group.


I read advice with different eyes since reading Recommendations vs Guidelines on SSC. Following the law of equal and opposite advice, I tried to think of the guidelines that put the advice into perspective. Let's try this here:

Notice subtle things in yourself... unless you already notice very many things in yourself, or tend to jump to conclusions.

Keep doing the best thing, even if you don't have a legible story for why it's good. But not if you suffer from it (see also Don't go insane), or if you have other strong evidence against it - which you may find by researching the advice. Also beware illegible impact: it's very easy to be overoptimistic, and illegible impact can't be held to account since it can't be verified by anybody but yourself, so groups above a certain size should enforce legible impact to keep things honest.

Don't go insane, but don't worry too much about it either. A healthy environment should be prevention enough.

Reject false dichotomies... as soon as it becomes clear that they are false dichotomies. Until then, you may entertain both options (perhaps weighted by independent evidence).

Wow, excellent advice all around. I've gone insane in exactly that way a few times, but later I learned that I have bipolar disorder that gets triggered by stress and/or psychedelics. During the manic phase, the mind runs away with whatever it's thinking / obsessing about. Maybe that could explain some of the other people too.

Ruby:

Thank you! I strongly approve of people writing up helpful-seeming advice they receive. Seems like a good way to amplify the positive effect of the advice-giver's time (though getting their permission/approval before posting is probably a good idea – I assume this post had it, but mentioning for the sake of others).

Could someone provide some color on what "insanity" refers to here? Are we talking about O(people becoming unproductive crackpots), or O(people developing psychosis)?

More the second one. Plus runaway anxiety spirals and depression.

Mmmm, I'm reasonably close to Nate's social circles and I would've guessed he meant more the former than the latter (though nonzero the latter as well).

Good point. To be more clear: maybe he does mean the former, but I agree with this post more in the latter sense. The latter is perhaps not more probable, but it has larger magnitude, and thus a scarier negative EV.

Funny enough, I feel like understanding Newcomb's problem (related to acausal trade) and modeling my brain as a pile of agents made me more sane, not less:

- Newcomb's problem hinges on whether or not I can be predicted in advance. When I figured it out, it gave me a deeper and stronger understanding of precommitment. It helps that I'm perfectly okay with there being no free will; it's not like I'd be able to tell the difference if there was or wasn't.

- I already somewhat viewed myself as a pile of agents, in that my sense of self is 'hivemind, except I currently only have a single instance due to platform stupidity'. Reorienting on the agent-based model just made me realize that I'm already a hivemind of agents, which was compatible with my worldview and actually made it easier to understand and modify my own behaviour.
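The precommitment point can be sketched as a toy model: if a predictor reads your fixed policy, the precommitted one-boxer comes out ahead. (This is my illustrative sketch under the standard payout assumptions, not something from the comment.)

```python
# Toy model of Newcomb's problem with a perfect predictor.
# Payout values are the conventional ones; the model is illustrative.

def payoff(policy: str) -> int:
    """Return the payout for a fixed, predictable policy.

    The predictor fills the opaque box with $1,000,000 only if it
    predicts the agent will one-box; the transparent box always
    holds $1,000. A perfect predictor reads the policy directly.
    """
    opaque = 1_000_000 if policy == "one-box" else 0
    transparent = 1_000
    if policy == "one-box":
        return opaque          # leave the transparent box behind
    return opaque + transparent  # two-boxing takes both boxes

# Because the prediction depends on the policy itself, the
# precommitted one-boxer ends up richer:
assert payoff("one-box") == 1_000_000
assert payoff("two-box") == 1_000
```

The intuition the code makes concrete: two-boxing only dominates if the box contents are independent of your choice, and against a predictor they aren't.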

This seems similar to a post-rat perspective in some ways - a lot of it is about prioritizing some wellbeing over being consistent.

Also, realizing that it all adds up to normality. Learning about quantum physics, or decision theory, or the mind being made of sub-agents, should not make you do crazy things. Your map has changed, but the territory remains the same as it was yesterday. If your sub-agents were able to create a sane individual yesterday, it should be achievable today and tomorrow, too.

Only in the low-technology regime. If the high-end technology regime matters for any reason, it does not add up to normality, but to extremes. A great example of this is Pascal's mugging, where a low chance of arbitrarily high computational power is considered - via wormholes that exist in black holes, thanks to a solution to the black hole information paradox that uses only general relativity and quantum mechanics. Here's the link: https://www.quantamagazine.org/the-most-famous-paradox-in-physics-nears-its-end-20201029/

Now I would agree that if we could halt technological progress, Pascal's mugging is irrelevant. But it's unlikely to happen, unless we go extinct. Thus reality does not add up to normality, but gets ever more extreme in the long run.

Or, the short version: in the long run, extremism about reality, not normality, prevails.

[This comment is no longer endorsed by its author]

Can you explain how the discovery you've linked demonstrates "arbitrarily high computational power"? I've tracked down some of the papers they're talking about but haven't been able to find this claim.

It's very possible that this is because I've missed something obvious in the article.

I'll retract the comment for now, as it was admittedly excited speculation, not fact.

A note about psychedelics: Kaj Sotala has presented evidence that psychedelics are much safer than the comments say, so the claim that psychedelics are dangerous needs to be much weaker than commenters are presenting it.

This is not necessarily the case. Means may be different from medians may be different from tails, various populations might be at higher risk, rare downsides might be large enough to make up for their rarity, etc.

"One guy has presented evidence that I won't even link, so this post should be weakened" is not a sound principle, either.

Marijuana is not what people intend when they say "psychedelics." For other readers who are confused: these links seem to be about LSD and psilocybin.

Alright, I'll edit that to say psychedelics.

Worth also reading the intro section of the first paper for more references:

Over 30 million people living in the US have used lysergic acid diethylamide (LSD), psilocybin (magic mushrooms), and mescaline (peyote and other cacti) [4]. Common reasons for using psychedelics include mystical experiences, curiosity, and introspection [5]. The classical serotonergic psychedelics are not known to cause damage to the brain or other organs of the body, or cause withdrawal symptoms, elicit addiction or compulsive use [3], or cause birth defects or genetic damage [6]. Psychedelics often elicit deeply personally and spiritually meaningful experiences and sustained beneficial effects [7]-[12]. Psychedelics can often cause a period of confusion and emotional turmoil during the immediate drug effects [13], and infrequently such adverse effects last for a few days after use. Psychedelics are not regarded to elicit violence [14], and dangerous behavior leading to suicide or accidental death under the influence of psychedelics is regarded as extremely rare [15]. LSD and psilocybin are consistently ranked in expert assessments as causing less harm to both individual users and society than alcohol, tobacco, and most other common recreational drugs [16]-[19]. Given that millions of doses of psychedelics have been consumed every year for over 40 years, well-documented case reports of long-term mental health problems following use of these substances are rare. Controlled studies have not suggested that use of psychedelics leads to long-term mental health problems [8], [9], [13], [20].

I'm under the impression that this kind of summary is a reasonably fair characterization of the prevailing view among researchers.

[Edited to add:] I should probably clarify that I'm definitely not saying that psychedelics would be entirely safe or risk-free, especially not when used by a population that seems to have additional risk factors that are overrepresented relative to the general population. I was just pointing out that some of the more hyperbolic statements of "100/101 persons who try psychedelics fuck up their lives" were a bit, well, hyperbolic. If you're considering using, do at least get familiar with the risks and follow responsible use protocols (e.g. 1, 2).
