I think the strongest version of this idea of adding up to normality is "new evidence/knowledge that contradicts previous beliefs does not invalidate previous observations." Therefore, when one's actions are contingent on things happening that have already been observed to happen, things add up to normality because it is already known that those things happen -- regardless of any new information. But this strict version of 'adding up to normality' does not apply in situations where one's actions are contingent on unobservables.

orthonormal (11d): Don't know if you saw, but I updated the post yesterday because of your (and khafra's) points. Also, your caveat is a good reframe of the main mechanism behind the post. I do still disagree with you somewhat, because I think that people going through a crisis of faith are prone to flailing around and taking naive actions that they would have reconsidered after a week or month of actually thinking through the implications of their new belief. Trying to maximize utility while making a major update is safe for ideal Bayesian reasoners, but it fails badly for actual humans. In the absence of an external crisis, taking relatively safe actions (and few irreversible actions) is correct in the short term, and the status quo is going to be reasonably safe for most people if you've been living it for years. If you can back off from newly-suspected-wrong activities for the time being without doing so irreversibly, then yes, that's better.
Isnasene (10d): Ah, yeah, I agree with this observation -- and it could be good to just assume things add up to normality as a general defense against people rapidly taking naive actions. Scarcity bias [https://en.wikipedia.org/wiki/Scarcity_(social_psychology)] is a thing after all, and if you get into a mindset where now is the time to act, it's really hard to prevent yourself from acting irrationally.

Huzzah, convergence! I appreciate the points you've made.

Adding Up To Normality

by orthonormal · 3 min read · 24th Mar 2020 · 19 comments

68


Pretty Similar To: Leave a Line of Retreat

"It all adds up to normality." Greg Egan, Quarantine

You're on an airplane at 35,000 feet, and you strike up a conversation about aerodynamic lift with the passenger in your row. Things are going along just fine until they point out to you that your understanding of lift is wrong, and that planes couldn't fly from the effect you thought was responsible.

Should you immediately panic in fear that the plane will plummet out of the sky?

Obviously not; clearly the plane has been flying just fine up until now, and countless other planes have flown as well. There has to be something keeping the plane up, even if it's not what you thought, and even if you can't yet figure out what it actually is. Whatever is going on, it all adds up to normality.

Yet I claim that we often do this exact kind of panicked flailing when there's a challenge to our philosophical or psychological beliefs, and that this panic is entirely preventable.


I've experienced and/or seen this particular panic response when I, or others, encounter good arguments for propositions including:

  • My religion is not true. ("Oh no, then life and morality are meaningless and empty!")
  • Many-worlds makes the most sense. ("Oh no, then there are always copies of me doing terrible things, and so none of my choices matter!")
  • Many "altruistic" actions actually have hidden selfish motives. ("Oh no, then altruism doesn't exist and morality is pointless!")
  • I don't have to be the best at something in order for it to be worth doing. ("Oh no, then others won't value me!") [Note: this one is from therapy; most people don't have the same core beliefs they're stuck on.]

(I promise these are not in fact strawmen. I'm sure you can think of your own examples. Also remember that panicking over an argument in this way is a mistake even if the proposition turns out to be false.)

To illustrate the way out, let's take the first example. It took me far too long to leave my religion, partly because I was so terrified about becoming a nihilist if I left that I kept flinching away from the evidence. (Of course, the religion proclaimed itself to be the origin of morality, and so it reinforced the notion that anyone else claiming to be moral was just too blind to see that their lack of faith implied nihilism.)

Eventually I did make myself face down, not just the object-level arguments, but the biases that had kept me from looking directly at them. And then I was an atheist, and still I was terrified of becoming a nihilist (especially about morality).

So I did one thing I still think was smart: I promised myself not to change all of my moral rules at once, but to change each one only when (under sober reflection) I decided it was wrong. And in the meantime, I read a lot of moral philosophy.

Over the next few months, I began relaxing the rules that were obviously pointless. And then I had a powerful insight: I was so cautious about changing my rules because I wanted to help people and not slide into hurting them. Regardless of what morality was, in fact, based on, the plane was still flying just fine. And that helped me sort out the good from the bad among the remaining rules, and to stop being so afraid of what arguments I might later encounter.

So in retrospect, the main thing I'd recommend is to promise yourself to keep steering the plane mostly as normal while you think about lift (to stretch the analogy). If you decide that something major is false, it doesn't mean that everything that follows from it has to be discarded immediately. (False things imply both true and false things!)

You'll generally find that many important things stand on their own without support from the old belief. (Doing this for the other examples I gave, as well as your own, is left to you.) Other things will collapse, and that's fine; that which can be destroyed by the truth should be. Just don't make all of these judgments in one fell swoop.

One last caution: I recommend against changing meta-level rules as a result of changing object-level beliefs. The meta level is how you correct bad decisions on the object level, and it should only be updated by very clear reasoning in a state of equilibrium. Changing your flight destination is perfectly fine, but don't take apart the wing mid-flight.

Good luck out there, and remember:

It all adds up to normality.

[EDIT 2020-03-25: khafra and Isnasene make good points about not applying this in cases where the plane shows signs of actually dropping and you're updating on that. (Maybe there's a new crisis in the external world that contradicts one of your beliefs, or maybe you update to believe that the thing you're about to do could actually cause a major catastrophe.)

In that case, you can try to land the plane safely: focus on getting to a safer state for yourself and the world, so that you have time to think things over. And if you can't do that, then you have no choice but to rethink your piloting on the fly, accepting the danger because you can't escape it. But these experiences will hopefully be very rare for you, current global crisis excepted.]
