I have two opposing beliefs about the title: it means "Don't believe two opposing things at once — you will fool yourself", or "Believe two opposing things at once — you will avoid fooling yourself."
There is this quote I got from a Rational Animations video: "The world is awful. The world is much better."
This seems like a convoluted way of just believing a single, reasonable, functional position between the extremes.
It is mistaken to hold the 'reasonable' middle position that everything is neither 'great' nor 'inadequate' but 'middling'. In fact, some things are at one extreme, and some things are at the other!
I see now. It is both true that some people just need a friendly pointer to set them right and that others are beyond saving.
No, a position between extremes is a one-dimensional thing.
And he is talking about something which should be understood as at least two-dimensional (this reminds me of paraconsistent logic, which tends to be modeled by bilattices with a “material” dimension and an “informational” dimension).
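For concreteness, here is a toy sketch of that kind of two-dimensional structure, using Belnap's four-valued logic as the standard small example of such a bilattice; the pair encoding and function names below are just my own illustration:

```python
# Belnap's four truth values, encoded as (told_true, told_false) pairs.
NONE  = (False, False)   # no information either way
TRUE  = (True,  False)
FALSE = (False, True)
BOTH  = (True,  True)    # contradictory information

def leq_truth(x, y):
    # x <= y along the "material" (truth) dimension.
    return x[0] <= y[0] and y[1] <= x[1]

def leq_info(x, y):
    # x <= y along the "informational" dimension.
    return x[0] <= y[0] and x[1] <= y[1]

# FALSE <= TRUE on the truth axis but not on the information axis;
# NONE <= BOTH on the information axis but not on the truth axis.
# Two genuinely different orderings over the same four values.
print(leq_truth(FALSE, TRUE), leq_info(FALSE, TRUE))  # True False
print(leq_info(NONE, BOTH), leq_truth(NONE, BOTH))    # True False
```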
I disagree. "Some people are nice, some people are mean" is a middle position between "everyone is nice" and "everyone is mean".
No, what you say is like when people say about multi-objective optimization: “just consider a linear combination of your loss functions and optimize for that”.
But this reduction to one dimension loses too much information: it de-emphasizes the separate constraints and does not work well with swarms of solutions. Similarly, one intermediate position does not work well for “society of mind” multi-agent models of the human mind, while separate (and not necessarily mutually consistent) positions work well.
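To make the analogy concrete, here is a toy sketch; the two loss functions and the 0.5/0.5 weights are invented purely for illustration. A fixed linear combination picks out a single compromise point, while keeping the objectives separate preserves the whole trade-off frontier:

```python
import numpy as np

# Two toy objectives over a one-dimensional design variable x in [0, 1].
# (Hypothetical losses, chosen only to make the trade-off visible.)
def loss_a(x):
    return (x - 0.0) ** 2   # prefers x near 0

def loss_b(x):
    return (x - 1.0) ** 2   # prefers x near 1

xs = np.linspace(0.0, 1.0, 101)

# Scalarization: a fixed linear combination yields exactly one "compromise" point.
w_a, w_b = 0.5, 0.5
combined = w_a * loss_a(xs) + w_b * loss_b(xs)
best_single = xs[np.argmin(combined)]

# Keeping the objectives separate: for these two losses, every x in [0, 1]
# is Pareto-optimal, i.e. a whole frontier of defensible positions, not one.
pareto_front = [(x, loss_a(x), loss_b(x)) for x in xs]

print(f"single compromise point: x = {best_single:.2f}")
print(f"points kept on the trade-off frontier: {len(pareto_front)}")
```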
One can consider a single position without losing anything if one allows it to vary in time. Like “let me believe this now”, or “let me believe that now”, or “yes, let me believe a mixture of positions X and Y, a(t)*X+b(t)*Y” (which would work OK if your a(t) and b(t) can vary with t in an arbitrary fashion, but not if they are constant coefficients).
One way or another, one wants to have an inner diversity of viewpoints rather than a unified compromise position. Then one can look at things from different angles.
There is one territory. There should be one map that corresponds to it. If one map predicts things well on one occasion and another predicts things well on another occasion, then both are clearly wanting and you need to combine them into an actually good map that isn't surprised half the time.
I think the math you're sharing is muddling things. Maybe try math for some kind of predictor/estimator function that takes whichever inputs are required to predict accurately, be it time or whatever.
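Something like this toy sketch, perhaps; the components X and Y and the sigmoid weighting are hypothetical stand-ins rather than anyone's actual model. The estimate is a function of whatever context matters, which also covers the time-varying a(t)*X + b(t)*Y case above when the context is time:

```python
import math

# Hypothetical component predictions (e.g. an "optimistic map" and a
# "pessimistic map" of the same question).
X, Y = 0.9, 0.1

def weight(context: float) -> float:
    # A smooth gate in (0, 1): how much to trust X given the context
    # (the context could be time, domain familiarity, etc.).
    return 1.0 / (1.0 + math.exp(-context))

def estimate(context: float) -> float:
    a = weight(context)
    return a * X + (1.0 - a) * Y

for c in (-3.0, 0.0, 3.0):
    print(f"context={c:+.1f} -> estimate={estimate(c):.2f}")
```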
There might be one territory. That is, itself, a meta-belief.
Some people think that the multiverse is much closer to our day-to-day life than it is customary to think (yes, this, by itself, is controversial; however, it is something to keep in mind as a possibility). And then “one territory” would be stretching it quite a bit (although, yes, one can still reify the whole multiverse as the territory, so it’s still “one territory”, it’s just that it would be larger than our typical estimates of the size of that territory).
I don’t know. Let’s consider an example. Eliezer thinks chickens don’t have qualia. Most of those who think about qualia at all think that chickens do have qualia.
I understand how OP would handle that. How do you propose to handle that?
The “assumption of one territory” presumably should imply that grown chickens normally either all have qualia or all don’t have qualia (unless we expect some strange undiscovered stratification among chickens).
So, what is one supposed to do for an “intermediate” object-level position? I mean, say I really want to know whether chickens have qualia. And I don’t want to pre-decide the answer, and I notice the difference of opinions. How would you approach that?
The relevant aspect of belief here is taking things seriously, so that a coherent picture does get developed around a (disbelieved) premise over time, instead of it getting considered in isolation from time to time and then ignored. Beliefs are either something at a better epistemic tier, or alternatively credences. For the former, exploratory premises (anchors for developing originally-unfamiliar framings) are a bad fit (if you are at risk of not taking it seriously, it's maybe not a solid fact), while the latter has nothing to do with working inside assumptions (it doesn't matter how likely your premise is, since to explore it you need to just take it seriously and run with it).
Exploratory premises/framings shouldn't be about opposition to other premises/framings, or to your beliefs, because they are not about you, not about the other things you believe; they are their own thing. What's true is that it's worth picking neglected ideas as seeds for exploration, and opposition to currently held beliefs is a good heuristic for neglectedness. But once accepted as an area of study, the premise/framing is no longer about the opposition that incited it, or else it gets needlessly and unfortunately warped around your pre-existing worldview.
In the pursuit of knowledge, there are two bad attractors that people fall into.
One of them is avoiding ever knowing anything. "Oh I could of course be wrong! Everything is only suggestive evidence, I can't really claim to know anything!"
The second is to really lean into believing in yourself. "I know this to be true! I am committed to it, for constantly second-guessing myself will lead to paralysis and a lack of decisiveness. So I will double down on my best hypotheses."
The former is a stance fearful of being shown to be wrong, and of the ensuing embarrassment, so it avoids sticking its neck out. The latter is also fearful of being shown to be wrong, and so takes the tack of not thinking about ways one could be wrong.
I do not claim to solve this problem in full generality, but there is one trick that I use to avoid either of these mistakes: Believe two things. In particular, two things in tension.
What's an example of this?
Sometimes I notice how powerful human reasoning has been. We've discovered calculus, built rocket ships to the moon, invented all of modern medicine, etc. Plus, I notice all of the evidence available to me via the internet—so many great explanations of scientific knowledge, so many primary sources and documentation of events in the world, so many experts writing and talking. In such a headspace, I am tempted to believe that on any subject, with a little work, I can figure out what is true with great confidence, if I care to.
At other times, I notice that people have made terrible choices for a long time. Crime kept rising for a long time until people figured out that lead in paint and gas caused lower IQ and increased violence. People thought obesity was primarily an issue of character rather than a medical issue. People on the internet constantly say and believe inane and false things, including prestigious people. I myself have made many, many dumb and costly mistakes that I didn't need to.
I could choose to believe that I am a master of reality, a powerful rationalist that will leave no stone unturned in my pursuit of truth, and arrive at the correct conclusion.
Or I could believe we are all deeply fallible humans, cursed to make mistake after mistake while the obvious evidence was staring us in the face.
My answer is to believe both. I understand very little of what is true and I can come to understand anything. The space between these two is where I work, to move from one to the other. I shall not be shocked when I observe either of these, for they are both happening around me regularly.
The unknown is where I work. In Scott Garrabrant's sequence on Cartesian Frames, he frames knowledge and power as a dichotomy; either you can know how a part of the world is, or it can be in multiple states and you can have power over which state that part of the world ends up in. Similarly, I take two pieces of knowledge with opposing implications and associations; in holding both of these opposing beliefs, they set up a large space in the middle for me to have power over, where the best and worst possible outcomes are within my control.
A different route that people take to avoid fooling themselves is to believe a claim (like "I can come to understand anything if I try hard enough") and then to remember that it's conditional on trying hard enough. They try to hold on to that concrete claim, and add a fuzzy uncertainty around what it means for a given situation. I find this less effective than holding onto two concrete claims that are in tension, where the two claims imply that there are other things that you don't know.
The mistake that I think people make is to remember the one claim they do know, and act as though that's all there is to know. If "We will probably all die soon due to AI" is the only thing that you believe, it seems like you know all that is relevant, all that you need to know (the current trajectory is bad, a pause would be good, alignment research is good, etc). But when you add that "We have the potential to survive and flourish and live amongst the stars" then suddenly you realize there's a lot of other important questions you don't know the answer to, like what our final trajectory might look like and what key events will determine it.
You might be interested to know where I picked up this habit. Well, I'll tell you. It started when I read "Toni Kurz and the Insanity of Climbing Mountains" by GeneSmith. See, until that point I had assumed that the story of my life would make sense. I would work on some important projects, form relationships with smart/wise/competent people, and accomplish some worthy things, before dying of old age or the singularity happening.
Then I read that and realized that the story of these people's lives made no sense at all.
I think that there are two natural options here. The first is to ignore that observation, to blind myself to it, and not think about it. Of course the story of my life would make sense; why would I choose otherwise?
The other route that people take is to conclude that life is ultimately absurdist, where crazy things happen one after another with little sense to them, even in retrospect.
As I say, I was only tempted by the first one, but the essay was a shock to my system and helped me stop blinding myself to what it described. Instead of blinding myself or falling into absurdism, I believe two things.
Now all that's left for me is to understand how and why different lives fall into these two different buckets, and then do the hard work to make the former obtain rather than the latter.
To end with, here are some more beliefs-in-tension that I've come by in the course of my life. Please share your own in the comments!
You might think that I put this quote here because I read "The Keys to the Kingdom" series as a child and loved it. However, that is not why; I put it here because a fellow Inkhaven-er was telling me about Garth Nix, I looked up quotes by him in the convo, and noticed this one was relevant for my post.
However, your assumption that I read the series as a child and loved it would be a justified true belief because, on clicking through the author's Wikipedia page, I suddenly remembered that I had read the series as a child and loved it! But I had entirely forgotten about that in the course of finding the quote and choosing to use it.
Good luck with your future accurate-belief-acquiring endeavors.