On a warm spring weekend, Jerry B wanders through Hyde Park. At a corner, he happens upon the Preacher Man, standing on a soapbox and proclaiming the Way of Truth.
Preacher Man: … and they will come to you and they will say “Do not believe such things, for it is hurtful to others to believe such things!”. Or they will say “Do not believe such things, for it is unpleasant to believe such things, and you can do nothing about it regardless!”. Or they will say “Do not believe such things, for it is hard to hide your beliefs, and others will punish you for it when they find out!”. And all these entreaties you must ignore; let them sway your belief neither in one direction nor the other, but rather set them aside and consider only the evidence of what is. Only by putting Truth before Utility will you be able to achieve the greatest utility available to you.
Jerry B: Now hold up, Preacher Man. I don’t think I buy what you’re selling. Let’s try an example.
Preacher Man: (Smiles approvingly.) You doubt my prescription, yet you neither brush it aside, nor blindly accept it; you argue back and make things concrete. That is the Way of Truth. Please, go on.
Jerry B: Consider belief in the afterlife. Sure, obviously there is no afterlife as a factual matter, but belief in the afterlife spares so much pain for so many people.
… and I will readily admit that, in practice, the utilitarian calculus is not obviously in favor of belief in the afterlife. Certainly various religions have exploited that belief, and burned huge amounts of real resources in its service. Likely the technologies available to actually fight back against death would have seen far greater investment far sooner without belief in the afterlife, and the world would be concretely better as a result.
… but let’s set aside the specifics of our world’s history, and focus on a thought experiment. Suppose a single person, sometime far back in history, could choose whether to believe in the afterlife. There’s no organized religion involved, trying to exploit her; nor will she spread her belief (which could cause an organized religion to spring into existence). And she doesn’t have significant options available to fight back against death anyway. The choice influences only her, and it has little impact on her actions; it affects only her own happiness. She will be happier if she believes in the afterlife. So the utilitarian calculus works out in favor of her believing in the afterlife.
Again, I don’t necessarily claim that belief in the afterlife works this way in reality. But the thought experiment shows that it’s at least possible for situations to come up in which truth must be set aside for maximum utility. And given that it’s possible, it would be rather surprising if such situations never came up in reality.
Preacher Man: We will get to my main answer shortly, but first I note that you’ve assumed a rather specific flavor of utilitarianism: hedonic utilitarianism. A non-hedonic utilitarian could declare “I want my lost loved ones to actually live on somewhere, not to merely believe they live on somewhere”; that would be a perfectly valid utility function. Indeed, from a decision-theoretic standpoint an agent which generally wants the world to be in preferred states, and not merely to believe the world is in preferred states, is considerably less dysfunctional; an agent which merely wishes to believe things will tend to wirehead.
Jerry B: Fair. And we could have a whole debate about the extent to which hedonic utilitarianism accurately describes human values. But is it cruxy?
You claimed earlier that “Only by putting Truth before Utility will you be able to achieve the greatest utility available to you.”. Does that claim rely on an assumption (or perhaps empirical observation) that human values are not well-modeled as hedonic utilitarianism? Or do you claim that even a hedonic utilitarian would not achieve the greatest utility available to them without putting Truth before Utility?
Preacher Man: I claim that even a hedonic utilitarian would not achieve the greatest utility available to them without putting Truth before Utility.
Jerry B: Does my thought experiment not directly disprove that claim? For the hypothetical hedonic utilitarian woman, deciding only for herself whether to believe in the afterlife, and with few or no realistically-available choices which depend on that belief… it sure seems like the utility calculus must go in favor of belief in the afterlife.
Preacher Man: I agree that the utility calculus for that hypothetical woman goes in favor of belief in the afterlife.
Jerry B: Yet you still claim that even a hedonic utilitarian would not achieve the greatest utility available to them without putting Truth before Utility?
Preacher Man: Indeed. Note that your thought experiment required assuming away much of how belief-in-the-afterlife works in the real world; that was load-bearing. I claim that in the real world, even a hedonic utilitarian would not achieve the greatest utility available to them without putting Truth before Utility. I do not make this claim about all hypothetical agents in all hypothetical worlds; I make this claim about humans in our world.
Jerry B: Quite the claim. How will you defend it?
Preacher Man: (Laughs.) I am no lawyer; my goal is not to defend a claim. Rather, my goal is to communicate the model, so you can see that it is at least a coherent way-the-world-could-be, a hypothesis to which some probability is assigned. And then you will go forth and judge the truth of the claim for yourself.
Jerry B: Very well. What strange shape might our particular world take, such that even a hedonic utilitarian would not achieve the greatest utility available to them without putting Truth before Utility?
Preacher Man: Let’s briefly use an analogy: what happens when you connect a computer running Windows XP directly to the internet and turn off the firewall, in 2024?
Jerry B: Ten minutes later, it will be infected by malware, and within something like hours to days it will cease to meaningfully be under your control at all. It will instead be under the control of a botnet, or some spyware waiting for you to open a bank website, or whoever won the battle for control of the computer.
Preacher Man: Exactly. Heck, even with a firewall on, a Windows XP machine is a prime target.
Now for the analogy: human brains are like Windows XP machines. Most of our brains are exposed to the internet, and to human culture more generally. We’re prime targets for memetic “viruses”. And a Truth-first commitment is like a firewall.
Jerry B: Ok, I’m starting to see the analogy here. But why a Truth-first commitment specifically?
Preacher Man: As the saying goes: “First they came for the epistemology. We don’t know what happened after that.”. Getting a little more gearsy about it…
Insofar as a meme is centered on a factual claim, the claim gets entangled with lots of other facts about the world; it's the phenomenon of Entangled Truths, Contagious Lies. So unless the meme tries to knock out a person's entire epistemic foundation, there's a strong feedback signal pushing against it if it makes a false factual claim.
But the Entangled Truths phenomenon is epistemic; it does not apply nearly so strongly to values. If a meme claims that, say, it is especially virtuous to eat yellow cherries from Switzerland... well, that claim is not so easily falsified by a web of connected truths.
As a result, parasitic memeplexes tend to consist mostly of value claims (i.e. what’s good, healthy, cringe, etc) as opposed to epistemic claims, because value claims are much easier to bullshit.
Jerry B: Pause a moment, let me check that your abstract description actually binds to the example of belief in the afterlife.
As a purely factual matter, the simplest versions of an afterlife would have all sorts of observable effects in our world, which we do not observe. “Entangled truths”, as you say. One can go through some fairly elaborate mental gymnastics to hypothesize precisely the sort of afterlife which is not observable at all, but then that itself is rather suspicious.
That said, religious memeplexes do make factual claims about the afterlife! And other factual claims too, though admittedly the factual claims seem to be steadily rolled back century after century, while the value claims seem to have a much longer shelf life (though they too evolve with the memetic tides). So I suppose you’ll claim the process of religions’ factual claims being falsified over time is exactly the sort of thing you’re talking about?
Preacher Man: That is what I’ve been talking about so far, but there’s a big loophole.
It’s usually much easier to bullshit value claims than epistemic claims. But that means there’s a lot of memetic selection pressure looking for ways to bullshit an epistemic claim by replacing it with a value claim. In particular, there’s lots of memetic selection pressure for arguments that it’s virtuous, or beneficial, or good, or [etc] to believe X, for various epistemic claims X. Because any time a meme could reproduce more efficiently by spreading a false epistemic claim, replacing that epistemic claim with a value claim is a likely-easier alternative fitness strategy.
In the case of the afterlife, that would mean that there’s lots of memetic selection pressure for arguments that it’s virtuous/beneficial/good/etc to believe in an afterlife. That sort of argument can sidestep the “epistemic firewall”, allowing a memeplex to exploit factual claims as part of its reproduction cycle, by motivating the host to not question those factual claims (because the host believes that e.g. it’s beneficial to believe in the afterlife, so they don’t want to question it).
Jerry B: Ok, but it might in fact be beneficial to believe in the afterlife.
Preacher Man: Maybe. But you know what else is beneficial? Having a firewall in place, which blocks out any attempts to update your epistemic beliefs based on what’s virtuous/beneficial/good/[etc] to believe. I.e. “truth first”.
Today’s world is an absolute hotbed of memeplexes, evolving far more quickly than our own brains. We are the Windows XP computers in this environment. In practice, if you don’t put truth first, if you don’t strictly firewall attempts to update your factual beliefs based on arguments about what’s good to believe (as opposed to what’s true), you’re going to get hacked, and it’s not going to take very long. And once hacked like that, you’re not going to be able to fix it easily, because value claims have much weaker feedback from reality than factual claims.
I wish to believe that (it's beneficial to believe in the afterlife) if-and-only-if (it is beneficial to believe in the afterlife). But if that belief is wrong, it's going to be a hard one to get feedback on, and I know it's under attack by the memeplexes. So even if I did think it's probably beneficial to believe in the afterlife, I wouldn't just go believing in the afterlife. At a policy level, it's probably worth it to keep the epistemic firewall in place.