There is an idea that I’ve sometimes heard around rationalist and EA circles, that goes something like “you shouldn’t ever feel safe, because nobody is actually ever safe”.
Wait, really?! If this is true then I had severely overestimated the sanity minimum of rationalists. The objections in your post are all true, of course, but they should also pop out in a sane person's mind within like 15 seconds of actually hearing that statement...
It's the kind of thought that one might have if they have a (possibly low-grade) anxiety issue: you feel anxious and like the world isn't safe and you need to be alert all the time, so then your mind takes that observation as an axiom and generates intellectual reasoning to justify it. And I think there's a subset of rationalists who were driven to rationality because they were anxious; Eliezer even has an old post suggesting that in order to be really dedicated to rationality, you need to have undergone trauma that broke your basic trust in people:
Of the people I know who are reaching upward as rationalists, who volunteer information about their childhoods, there is a surprising tendency to hear things like: "My family joined a cult and I had to break out," or "One of my parents was clinically insane and I had to learn to filter out reality from their madness."
My own experience with growing up in an Orthodox Jewish family seems tame by comparison... but it accomplished the same outcome: It broke my core emotional trust in the sanity of the people around me.
Until this core emotional trust is broken, you don't start growing as a rationalist. I have trouble putting into words why this is so. Maybe any unusual skills you acquire—anything that makes you unusually rational—requires you to zig when other people zag. Maybe that's just too scary, if the world still seems like a sane place unto you.
Or maybe you don't bother putting in the hard work to be extra bonus sane, if normality doesn't scare the hell out of you.
In retrospect, it's not too surprising that people might develop anxious and maladaptive thought patterns if "normality scares the hell out of" them.
Re "they should also pop out in a sane person's mind within like 15 seconds of actually hearing that statement" I agree with that in the abstract; few people will say that a state of high physiological alertness/vigilance is Actually A Good Idea to cultivate for threats/risks not usefully countered by the effects of high physiological alertness.
Being able to reason about that in the abstract doesn't necessarily transfer to actually stopping doing it. Personally, when I'm in that sort of extraordinarily afraid state, I'm not very good at hearing from most people something along the lines of "you're working yourself up into a counterproductive state of high physiological alertness about the risks of [risk], and coping with it through incredibly abstract thought disconnected from useful action". It can really feel like the person wants to manipulate me into thinking that [risk] is not a big deal, or discourage me from doing anything about [risk], or that they're seeking to make me more vulnerable to [risk]. These days that is rarely the case, but the heuristic still sticks around. Maybe I should find its commanding officer, so it can be told by someone it trusts that it's okay to stand down...
To continue the military analogy: it's as if you'd been asked to keep an eye out for a potential threat, and your commanding officer told you over the radio to go to REDCON 1. Later on you hear an unfamiliar voice on the radio which doesn't authenticate itself, and it keeps telling you that your heightened alertness is actually counterproductive and that you should stand down.
Would you stand down? No, you'd be incredibly suspicious! Interfering with the enemy's communications is fair game in war. Are there situations where you would indeed obey the order from the unfamiliar voice? Perhaps! Maybe your commanding officer's vehicle got destroyed, or more prosaically, maybe his radio died. But it would have to be a situation where you're confident the voice represents legitimate military authority. That's a high bar to clear: if you do stand down and it turns out to be an enemy ruse, you're in a very bad situation regardless of whether you get captured by the enemy or court-martialed for disobeying orders. If standing down seems to make zero tactical or strategic sense, your threshold would be even higher! In the extreme, nothing short of your commanding officer showing up in person would be enough.
All of this is totally consistent with the quoted section in OP that mentions "Goals and motivational weightings change", "Information-gathering programs are redirected", "Conceptual frames shift", etc. The high physiological alertness program has to be a bit sticky, otherwise a predator stalking you could turn it off by sitting down and you'd be like "oh, I guess I'm not in danger anymore". If you've been successfully tricked by a predator into thinking that it broke off the hunt when it really was finding a better position to attack you from, the program's gonna be a bit stickier, since its job is to keep you from becoming food.
To get away from the analogies, I really appreciate this piece and how it was written. I specifically appreciate it because it doesn't feel like it is an attempt to make me more vulnerable to something bad. Also I think it might have helped me get a bit of a felt sense shift.
Thank you for sharing that, I'm happy to hear it. :)
I want to mention here that the war example is one where there genuinely is an adversarial scenario, an adversarial game. In everyday life, applying an adversarial frame is usually not the correct move; importantly, the most perverse adversarial scenarios usually can't be dealt with anyway, for computational complexity reasons, short of exotic physics. So you usually shouldn't focus on adversarial scenarios, and Kaj Sotala is very, very correct about this in the post.
This logic can be taken too far - I don't see the point of feeling constantly anxious - but at least on an intellectual level, I think it does make a certain amount of sense. It's hard to notice the insanity or inadequacy of the world until it affects you personally. Some examples of this:
Agree. (I'm not saying that losing one's trust in civilizational adequacy is necessarily a bad thing on net, just that it can also lead to some maladaptive thought patterns.)
I do expect some of the potential readers of this post to live in a very unsafe environment - e.g. parts of current-day Ukraine, or if they live together with someone abusive - where they are actually in constant danger.
I live ~14 kilometers from the front line, in Donetsk. Yeah, it's pretty... stressful.
But I think I'm much more likely to be killed by an unaligned superintelligence than an artillery barrage.
Most people survive urban battles, so I have a good chance.
And in fact, many people worry even less than I do! People get tired of feeling in danger all the time.
You shouldn’t ever feel safe, because something bad could happen at any time. To think otherwise is an error of rationality.
I'm curious, do you hear this as often from those with the emotional literacy to usefully differentiate "think" or "assume" from "feel"?
Usually there's little harm done from failing to clearly differentiate assumptions from feelings, but this is an interesting edge case where the framing "you should never assume you're totally safe" seems obviously useful and correct, but it's easy to conflate with the obviously unhelpful and incorrect "you should never feel safe".
Good question, I think often there's been a failure to differentiate going on. Though it's been quite a while since I spoke to some of the people I was thinking of so my recollection of them might be misleading (and others I've only heard about through second-hand accounts).
Agreed. Honestly this feels like one of those Bell curve memes, where most people would know perfectly well at a gut level what "safe" means, then rationalists tried being disruptive and provocative by suggesting a seeming deviation from common sense ("you are actually always unsafe!"), and then we get the explanation in rationalist terms of precisely what other people instinctively do when deciding whether to feel safe or unsafe.
Which isn't necessarily a bad thing: examining your unconscious assumptions and elevating them to the conscious level is good!
Nor do I necessarily agree with the average risk level that people seem to consider safe. I was particularly frustrated by how COVID was declared solved essentially not by lowering the risk past what the most obvious measure (the vaccine) could do, but by raising everyone's risk tolerance via shaming and peer pressure (roughly speaking, calling everyone who didn't go along with it a boring party pooper). But disagreement about the specific level of course doesn't change the fact that there has to be a level. I can't be at all times as aware and on high readiness as I would be if I were being stalked by a psycho axe murderer, or else my worst enemy would be neither COVID nor the axe murderer: it would be the inevitable aneurysm or heart attack I'd get out of sheer stress.
Is "driving a car (especially in bad road conditions)" a situation in which some degree of feeling unsafe is useful?
I'd say kind of... you definitely have to keep your attention and wits about you on the road, but if you're relying on anxiety and unease to help you drive, you're probably doing a bit worse than optimal - too quick to assume that something bad will happen, likely to overcorrect and possibly cause a crash.
Adding onto this, an important difference between "anxiety" and "heightened attentiveness" is that anxiety has a lot to do with not knowing what to do. If you have a lot of experience driving cars and losing traction, and life or death scenarios, then when it happens you know what to do and just focus on doing it. If you're full of anxiety, it's likely that you don't actually have any good responses ready if the tires do lose traction, and beyond not having a good response to enact you can't even focus on performing the best response you do have because your attention is also being tugged towards "I don't have a good way to respond and this is a problem!".
I don't have a driving license so this isn't a situation I'd have personal experience with, but I imagine that it would be useful to have some degree of unsafeness to focus your attention more strongly on the driving.
I think there's also a constructive kind of "not feeling totally safe" where you know that the future is unknown and you could lose the things you have and it is worth both putting in some effort to make that less likely and to cherish and enjoy what you have now. But yeah, it shouldn't be a high-alert state, and I'm not really sure how to better describe the thing that it is instead.
There's some good advice in there, but I don't much like the framing about how one should feel, as opposed to how one should think about risks.
The truth is, nobody's actually perfectly safe, ever. The likely outcome for every current individual is eventual death. One can have a reasonable belief that it's a long way off, and that there's even some chance for it to be a VERY long way off. And there are much shorter-term risks as well, some of which can be mitigated, and some can't. How one feels about that is less important than how one integrates it into their framework for action.
Thinking about and internalizing the threat-imminence continuum idea is good. It probably does lead to better emotional stability - not "feeling safe", but "accepting and mitigating risks" - but that stability isn't directly based on feelings; it's upstream of them.
Feeling unsafe is probably not a free action, though: as far as we can tell, cortisol has a deleterious effect on both physical health and mental ability over time, and the effect becomes more pronounced with continuous exposure. So the cost of feeling unsafe all the time, particularly if one maintains more readiness than the situation warrants, is to hurt your prospects in all the situations where the threat doesn't come to pass (the majority outcome).
The most extreme examples of this are preppers; if society collapses they do well for themselves, but in most worlds they simply have an expensive, presumably unfun hobby and inordinate amounts of stress about an event that doesn't come to pass.
Yeah, things close to full-blown doomsday don't happen very often. The most common is probably literal war (as in Ukraine and Syria), and the best response to that on an individual level is usually "get the hell away from where the fighting is." Many of the worst natural disasters are also best handled by simply evacuating. If you don't have to evacuate, or didn't have time to and don't die in the immediate aftermath, your worst problems might be the local utilities shutting down for a while and needing to find alternative sources of water and heat until they're fixed.
The potential natural disasters for which I think doomsday-level prepping might actually make a difference are volcanoes and geomagnetic storms, because they could cause problems on a continent-wide or global scale and "go somewhere unaffected" or "endure the short-term disruptions until things go back to normal" might not work. Volcanoes can block the sun and cripple global agriculture, and a giant electromagnetic pulse could cause enough damage to both the power grid and to natural gas pipelines that it could take years to rebuild them. (Impacts from space might also be on the list, depending on the severity.)
Is your model that our thoughts come first, and feelings second?
I think that there are cases where that's true, but that generally our emotional state exerts a strong influence on what kinds of thoughts we're capable of having. So feeling safe (or at least not feeling unsafe) may be a prerequisite for being able to think clearly about risks.
(Though this gets complicated because there are influences going in both directions - if I thought that intellectual ideas had zero influence on feelings, it would have been pointless for me to write this post.)
Is your model that our thoughts come first, and feelings second?
Not exactly - there's more feedback loop than that. I fully agree with "this gets complicated".
I would say that intentional changes to mind-state tend to be thoughts-first. I don't know if that's tautological from the nature of "intentional", but it does seem common enough to make it the best starting point for most people.
Right, that makes sense.
And to clarify, as I tried to say in the introduction, the post is mostly intended to counter the thought that "I shouldn't feel safe". So if someone is having thoughts that it's wrong to feel safe and they should stop doing so, then the intent of the post isn't to say "here's how you should feel". Rather, it's just to say "if you do feel safe, I don't think you need to take a metaphorical hammer and hit yourself with it until you feel unsafe (nor do you need to believe people who say that you should); here's why I think you can stop doing that".
So I think that if you are saying that one should focus on how they think about risks, and I'm saying that here's one way to think about them, then we agree?
Downvoted because there is no « disagree » button.
I strongly disagree with the framing that one could control their emotions (both from the EA quote and from OP). I’m also surprised that most comments don’t go against the post in that regard.
To be specific, I’m pointing to language like « should feel », « rational to feel » etc.
As the other comment pointed out, I'm not assuming that one could control their emotions - I actually lean towards thinking that attempts to control one's emotions are often harmful, though of course there's also a place for healthy emotion regulation.
To be specific, I’m pointing to language like « should feel », « rational to feel » etc.
This clarification seems relevant.
Also in general, I don't think that considering some feelings more rational than others requires an ability to control one's feelings. A feeling can be instrumentally rational if it helps bring about the kinds of outcomes the person cares about, and epistemically rational if the implicit beliefs it's based on are correct ones. That can be true regardless of how much control we have over our feelings.
Of course, if we had absolutely no influence over our feelings, then this might be pointless to talk about. But people can certainly do things that affect their feelings, from listening to music that puts them in a certain mood to (more relevant for this post) telling themselves that they are wrong to feel safe. Also, even when directly controlling feelings is impossible, it's possible to bring to light the beliefs underlying the feelings and to update incorrect beliefs that some of them might be based on (I discussed that in this post among others).
Post says that explicitly:
Note that I only intend to dispute the intellectual argument that these are making. It’s possible to accept on an intellectual level that it would make sense to feel safe most of the time, but still not feel safe
I think it's absolutely sensible to believe there are emotions that we shouldn't feel, as in, we have no benefit from feeling and we don't want to feel. I don't want to feel sudden homicidal anger, or have suicidal thoughts, or be afraid of leaving my own room. All those are possible feelings that I definitely believe I should not feel, and will do my best to remove if I ever have them! Of course that's not easy, but the notion that all feelings are equally good by virtue of simply being feelings and thus there is no "should" that applies to them is ridiculous.
It's not that a feeling is "necessarily good and something you should act on" just because that's what you feel; it's that it's not "necessarily bad and something you shouldn't feel" just because that's what you think. Maybe it is appropriate, and maybe it isn't, but you're always going to be fallible on both fronts, so it makes sense to check.
And that is actually how you can make sure to "not feel" this kind of inappropriate feeling, by the way. The mental move of "I don't want to feel this. I shouldn't feel this" is the very mental move that leads people to be stuck with feelings which don't make sense, since it is an avoidance of bringing them into contact with reality.
If you find yourself stuck with an "irrational" fear, and go to a therapist saying "I shouldn't feel afraid of dogs", they're likely to suggest "exposure therapy" which is basically a nice way of saying "Lol at your idea that you shouldn't feel this, how about we do the exact opposite, make you feel it more, and refrain from trying not to?". In order to do exposure therapy, you have to set aside your preconceived ideas about whether the fear is appropriate and actually find out. When the dog visibly isn't threatening you, and you're actually looking at the fact that there's nothing scary, then you tend to start feeling less afraid. That's really all there is to it, and so if you can maintain a response to fear of "Oh wow, this is scary. I wonder if it's actually dangerous?" even as you feel fear, then you never develop a divergence between your feelings and what you feel is appropriate to feel, and therefore no problem that calls for a therapist or "shoulding" at yourself.
It's easier said than done, of course, but the point is that "I shouldn't feel this" doesn't actually work either instrumentally or epistemically.
There is an idea that I’ve sometimes heard around rationalist and EA circles, that goes something like “you shouldn’t ever feel safe, because nobody is actually ever safe”. I think there are at least two major variations of this:
I’m going to argue against both of these. If you already feel like both of these are obviously wrong, you might not need the rest of this post.
Note that I only intend to dispute the intellectual argument that these are making. It’s possible to accept on an intellectual level that it would make sense to feel safe most of the time, but still not feel safe. That kind of emotional programming requires different kinds of tools to deal with. I’m mostly intending to say that if you feel safe, you don’t need to feel bad about that. You don’t need to make yourself feel unsafe; for most people, it’s perfectly rational to feel safe.
I do expect some of the potential readers of this post to live in a very unsafe environment - e.g. parts of current-day Ukraine, or if they live together with someone abusive - where they are actually in constant danger. For them, it may make sense to feel unsafe all the time. (If you are one of them, I genuinely hope things get better for you soon.) But these are clearly situations where something has gone badly wrong; the feeling that one has in those situations shouldn’t be something that one was actively striving for. I think that any reader who doesn’t live in an actively horrendous situation would do better to feel safe most of the time. (Short timelines don't count as a horrendous situation, for reasons that I'll get into.)
As I interpret it, the core logic in both of the “you shouldn’t ever feel safe” claims goes as follows:
One thing that you might notice from looking at this argument is that one could easily construct an exactly opposite one as well.
That probably looks obviously fallacious - just because things can go well doesn't mean that it would be warranted to always feel safe. But why then would it be warranted to feel unsafe in the case where things merely can go badly?
To help clarify our thinking, let's take a moment to look at how the US military orients to the question of being safe or not. More specifically, to the question of whether a given military unit is reasonably safe or whether it should prepare for an imminent battle.
Readiness Condition levels are a series of standardized levels that a unit’s commander uses to adjust the unit’s readiness to move and fight. Here’s an abridged summary of them:
Now, why are all units not always kept on REDCON-1, or at least on REDCON-2? After all, there could always be an unexpected need for the units to mobilize or fight on immediate notice. Even units based on the US mainland might be called in to deal with a terrorist attack (as happened on 9/11) or natural disaster at any time.
The obvious answer is that a higher REDCON burns resources and makes the unit incapable of doing the tasks that it would carry out at lower readiness levels. The soldiers get tired, running the engines consumes fuel, soldiers who would be at observation posts aren’t carrying out observations, and work and rest tasks aren’t being carried out.
Even though the military realizes that the unit could be needed at any time, setting their readiness condition isn’t just a question of whether it’s possible for the unit to be needed on short notice. It’s also a question of whether that’s likely enough to make the cost of maintaining a high state of readiness worth it in expectation.
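The trade-off described above can be sketched as a toy expected-cost comparison. All the numbers below are invented purely for illustration; the point is only the structure of the calculation, not the specific values:

```python
# Toy expected-cost comparison for readiness levels (all numbers hypothetical).
# High readiness has a fixed daily cost (fuel, fatigue, lost work), paid whether
# or not a threat appears; low readiness is cheap unless a threat materializes.

def expected_daily_cost(p_threat, cost_if_ready, cost_low, cost_caught_unready):
    high = cost_if_ready                               # paid every day, threat or not
    low = cost_low + p_threat * cost_caught_unready    # cheap baseline + rare penalty
    return high, low

# With a 1-in-1000 daily threat probability, even a large penalty for being
# caught unready doesn't justify a constant high-readiness burn:
high, low = expected_daily_cost(p_threat=0.001, cost_if_ready=10,
                                cost_low=1, cost_caught_unready=1000)
print(high, low)  # high readiness: 10 per day; low readiness: 2.0 per day
```

The crossover point depends entirely on how likely the threat is and how bad being caught unready would be, which is exactly the judgment the REDCON system makes explicit.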
I think that it makes sense for an individual person to also orient to the question of being safe or unsafe in a similar way. If someone claims that you shouldn’t ever feel safe, they are presumably saying that because they expect the feeling to translate to actions. They are saying that you should act as if you weren’t safe. But there is an opportunity cost to that; frequently thinking about possible threats burns mental cycles that could be used on something else, it makes it harder to rest and relax, and it biases the kind of information that you pay attention to.
In fact, I’m going to take the analogy a step further. I think that a person’s sense of (un)safety is in fact their subjective experience of an internal variable that tracks something analogous to the readiness condition level of that person’s body and brain.
This quote from Cosmides & Tooby (2000) describes some of the effects they suggest may be triggered when a person is alone at night and feels a fear of being stalked; or, in this analogy, the kinds of responses that the body and brain activate when at their equivalent of REDCON 1 or 2:
From this list, it’s pretty clear that it would be a bad idea to maintain this state all the time. Even if there were circumstances where it was theoretically possible for a person to be stalked, maintaining a constant state of fear would prevent them from digesting their food, relaxing, or for that matter thinking about anything other than how to get to safety.
And if everywhere were classified as unsafe (as it implicitly is if you say that one should never feel safe), then the priority of “get somewhere safe” couldn’t do anything useful. The person would just be stuck in a constant anxiety loop that necessitated constant running or fighting but never recognized a state where those responses could be even temporarily wound down.
In a more recent paper, Levy & Schiller (2020) discuss our subjective experience of threat as being linked to a series of unconscious computations about the expected distance to physical danger, such as a predator. They describe a series of threat levels that one could see as being analogous to the readiness conditions of a military unit (extra paragraph breaks and bolding of the different stages added):
If this model is right, it then implies that “feeling safe” does not imply an assessment of zero probability of threat. Feeling safe is the subjective experience of the brain assigning a low probability to a threat, and emphasizing the kinds of behavioral and biological priorities that make the best use of a low-threat situation.
Meanwhile, “feeling unsafe” implies an assessed level of at least ‘pre-encounter threat’, meaning at least a moderate probability of an immediate physical threat. “Moderate” in this context is a bit of a fuzzy term, given that even something like a 1% probability of there being a predator around could plausibly make it justified to maintain this level of readiness.
Still, a 99+% probability of being safe isn’t that high of a bar; extreme probabilities are common. If a person’s risk of being physically attacked was 1% per day, they would have a 97% chance of being attacked within a year. Some of the people reading this post may live in a location where their risk of being assaulted is around that order of magnitude, but if they aren’t, then constantly feeling unsafe implies that their subconscious isn’t calculating the probabilities correctly. (Probably warranting a diagnosis of an anxiety disorder.)
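For anyone wanting to verify the compounding arithmetic in the paragraph above, it goes like this:

```python
# A 1% chance of being attacked per day, compounded over 365 independent days:
# the chance of getting through the whole year unscathed is 0.99**365.
p_daily = 0.01
p_year = 1 - (1 - p_daily) ** 365
print(round(p_year, 3))  # prints 0.974, i.e. the ~97% figure from the text
```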
There is also the consideration that feeling unsafe doesn’t only imply a moderate probability of something bad happening; it implies a classification of the bad thing as “the kind of a thing that this type of a readiness response is useful for dealing with”. To extend the analogy to the military’s readiness levels, having your weapons manned and being ready to fight isn’t very useful if the threat being faced is an infectious disease, or Congress deliberating a funding cut that involves the unit in question being decommissioned.
Likewise, it isn’t very useful to activate behavioral responses that are evolved for the purpose of getting to safety from an imminent threat, if the threat involves unaligned AI possibly being developed within the next decade or so. There isn’t anywhere that you could run away to, nor a concrete enemy you could defeat, so this response would be stuck trying to do an impossible task. The mindset necessary for solving the problem is clear and calm thinking, and an ability to relax and get rest as well. So this is an instance of a situation where it might be warranted to feel safe even if you intellectually acknowledge that you might not be safe.
So to recap, I think that “you should never feel safe” is an incorrect argument for several reasons:
If you are somewhere where there is an actual tangible threat against you - then yes, feel unsafe! If you are approached by someone you know to be violent or abusive, or if you are out on a walk and you think that you might actually be stalked - then yes, feeling unsafe may very well be the right response.
But if those criteria aren’t met, you are probably better off feeling safe, and harnessing the resources that that state grants you.
Malcolm Ocean describes a form of this experience: