See 'valley of bad rationality'; of course an incremental move towards rationality is not always ideal (and some moves are not actually towards rationality). But see also 'generalizing from fictional evidence'; empirically it tends to be a good idea.
Sorry, I can't tell if you meant to link to actual articles or just to the definitions. If it's just the definitions, I can't tell how they relate to the statements that follow the links.
Overall, it's unclear what you mean by "an incremental move towards rationality is not always ideal" and how it relates to the post.
Gina and Lucius do not exist. You have constructed these imaginary people to serve as examples of your desired conclusion, which you then use to support that conclusion. You could equally well imagine a miserable version of Gina and a happy version of Lucius, should you want to boost rationality over ignorance. However vivid your imagination, daydreaming is not evidence.
This is a bizarre comment. Aren't most examples hypothetical? When you start a math question with "Jack buys 3 melons", does Jack need to be a real guy who actually bought 3 melons?...
The real question is whether the example is consistent with empirical evidence. It seems to me perfectly possible, and in fact anecdotally true, that two people might hold very similar beliefs except for a small subset that has little decision value but strong intrinsic value, with that intrinsic value favouring the apparently less accurate beliefs. It follows that holding a small, non-decision-relevant set of "false" beliefs might realistically turn out to be beneficial.
Although, as I mentioned, the degree to which one can make this a conscious strategy is very much arguable.
If you are reading this, chances are that you strive to be rational.
In Game Theory, an agent is called "rational" if they "select the best response - the action which provides the maximum benefit - given the information available to them."
This is aligned with Eliezer's definition of instrumental rationality as "making decisions that help you win."
But crucially, Eliezer distinguishes a second component of rationality, epistemic rationality - namely, forming true beliefs. Why is this important to "win"?
Very trivially, because more accurate beliefs - beliefs that better reflect reality - can help you make better decisions. Thus, one would generalise that "truth" is inevitably a net positive for an individual (see Litany of Gendlin).
But is this true? Excluding edge cases where your knowledge of the truth itself dooms you to suffer - e.g., witnessing a murder and then being killed to keep you silent - is knowing the truth always a net positive?
(Note that here "benefit" and "win" are defined subjectively, based on your own preferences, whatever they might be. Also, we are using "truth" as a binary feature for simplicity, but in reality, beliefs should be graded on a spectrum of accuracy, not as 0 or 1).
We can do a bit better than just saying "true beliefs make for better decisions". We can quantify it.
We can define the decision value of belief X as the total payoff of the actions that the agent will select, given their knowledge of X, minus the total payoff of the actions they would have taken under their previous/alternative belief.
In other words, how much of a difference does it make, in terms of the outcome of their decisions over their lifetime, whether they hold belief X or the next best alternative.
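To put it in a formula (the notation here is my own shorthand, introduced only for illustration):

$$\mathrm{DV}(X) \;=\; \sum_{t} \mathrm{payoff}\big(a_t^{X}\big) \;-\; \sum_{t} \mathrm{payoff}\big(a_t^{X'}\big)$$

where $a_t^{X}$ is the action the agent selects at decision point $t$ while holding belief $X$, $X'$ is the next-best alternative belief, and the sums range over every decision they face in their lifetime.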
A few interesting insights can be derived from this definition:
The last one in particular is very consequential. We might be enamoured with our scientific theories and the equations of general relativity and quantum physics. But for most people, in their day-to-day lives, believing that gravity behaves according to general relativity, to Newtonian gravitation, or to "heavy things fall down" makes almost no difference.
This is - in my humble opinion - the main reason why, sadly, rationalists don't systematically win. It is of course context-dependent, but chances are that most of your scientific knowledge, most of your grasp of the secrets of the universe, is - tragically - pretty much useless to you.
Now, to address some easy counterarguments:
« Well, alright », you say, « you are showing that a bunch of true beliefs are quite useless, but this just sets the value of those beliefs to 0, not to a negative number. Thus, "truth" overall is still a net positive ».
Not so fast.
We've talked about the decision value of beliefs - how much they help you make better decisions in your life. But is that all there is to knowledge? Not by a long shot.
Knowledge (the set of your beliefs) has, in fact, another type of value: intrinsic value. This is the value (payoff/benefit/happiness) that you derive directly from holding a particular belief.
When a high schooler thinks that their crush is in love with them, that simple belief sends them over the moon. In most cases, it will have minimal decision value, but the effect on their utility is hard to overstate.
So a true belief - even if useless for your decisions - can make you happier (or better off along whatever other dimension you optimise on), and thus it is valuable.
But it works both ways.
A false belief, even if it has - on average - a negative decision value compared to the corresponding true belief, might have a sufficiently high intrinsic value to make the overall delta positive. In other words, holding the false belief can leave you happier and better off.
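To make that concrete with deliberately made-up numbers: suppose the false belief costs you $\Delta\mathrm{DV} = -1$ through slightly worse decisions over your lifetime, but earns you $\Delta\mathrm{IV} = +10$ in day-to-day contentment. The overall delta is $-1 + 10 = +9$: a net win for the false belief.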
Don't believe me? Let's look at an absolutely random example.
Gina believes in an omnipotent entity called Galactus. She believes that Galactus oversees the universe with its benevolence and directs the flows of events towards their destiny. Every morning, she wakes up with a smile and starts her day confident that whatever happens, Galactus has her back. But she doesn't slouch! She works hard and tries to make the most intelligent decisions she can, according to her best understanding of the latest science. After all, Galactus is also the patron of science and intelligence!
Lucius doesn't believe in any such thing. He is a stone-cold materialist and aspiritualist who spends his free time arguing online about the stupidity of those Galactus believers and how they will never understand the true nature of life and the universe. He also makes an effort to make the most intelligent decisions he can, trying to prove to the world that rationalists can win after all. But every day is a challenge, and every challenge is a reminder that life is but a continuous obstacle race, and you are running solo.
Note that with the exception of their differences about spirituality, Gina and Lucius have pretty much the same beliefs, and coincidentally very similar lives (they both aspire to live a "maximum happiness life"). You might wonder how Gina can reconcile her spiritual belief with her scientific knowledge, but she doesn't have to. She is very happy never to run a "consistency check" between the two. To Lucius' dismay.
« How can you not see how absurd this Galactus thing is? » he says, exasperated, as they share a cab to the airport.
« It doesn't seem absurd to me » Gina answers with a smile.
Lucius starts wondering whether she actually believes in Galactus, or only believes that she believes in the deity. Wait, is that even possible? He can't remember that part of the Sequences too well... he'll have to reread them. Can one even choose one's own beliefs? Could he now decide to believe in Galactus? Probably not... if something doesn't feel right, doesn't feel "true", if something doesn't... fit with your worldview, it's just impossible to force it in. Well, maybe he can show Gina why her belief doesn't fit? Wait, would that be immoral? After all, she seems so happy...
Unfortunately, Lucius doesn't get to make that last decision. The cab driver is looking at his phone and doesn't see that the car in front has stopped suddenly. In half a dozen seconds, everything is over.
So who "won"? Who has lived the "better" life? Gina or Lucius?
I think Gina won by a mile.
On a day-to-day basis, their decisions were practically identical, so the decision values of their beliefs about spirituality were virtually 0. Lucius worked very hard because he believed that in a world without "spirits" he was the only one he could count on; Gina worked hard because she believed that's what a good Galactusean should do. Lucius believed in science and rationality because they are the optimal decision strategy; Gina believed in them because they are what Galactus recommends. And so on.
You might argue that in some obscure node of the graph, Lucius' beliefs were inevitably more accurate and thus led to marginally better decisions. But even so, I think Gina had an enormous, game-winning asset on her side: optimism.
Every day, Gina woke up believing that things would be okay, that whatever happened, Galactus had a plan in mind for her. Galactus had her back.
Every day, Lucius woke up believing that he had to fight even harder than the day before, because whatever happened, he could only count on himself. No fairy godfather had his back.
"Happiness" ("Utility", "Life satisfaction") doesn't depend only on what you feel and experience now. It also depends on what you expect for your future.
And when your "true" beliefs negatively affect your expectations, without a sufficient counteracting improvement in life outcomes, you might have been better off with false ones.
Whether you can, in fact, choose your beliefs is beyond the scope of this essay. But I leave you with the same question Lucius was asking:
Should he really have tried to show Gina that she was wrong, knowing what you know now?