Today's post, Mere Messiahs, was originally published on 02 December 2007. A summary (taken from the LW wiki):

 

John Perry, an extropian and a transhumanist, died when the north tower of the World Trade Center fell. He knew he was risking his existence to save other people, and he had hope that he might be able to avoid death, but he still helped them. This takes far more courage than dying while expecting to be rewarded in an afterlife for one's virtue.


Discuss the post here (rather than in the comments to the original post).

This post is part of the Rerunning the Sequences series, where we'll be going through Eliezer Yudkowsky's old posts in order so that people who are interested can (re-)read and discuss them. The previous post was Superhero Bias, and you can use the sequence_reruns tag or rss feed to follow the rest of the series.

Sequence reruns are a community-driven effort. You can participate by re-reading the sequence post, discussing it here, posting the next day's sequence reruns post, or summarizing forthcoming articles on the wiki. Go here for more details, or to have meta discussions about the Rerunning the Sequences series.


The story (not a tale, since it's real) of John Perry was truly heart-breaking. I already had that "mere messiahs" idea, since I had long been thinking (and sometimes saying) that an atheist who sheltered Jews (or gypsies, or homosexuals, or any other target of Nazi hatred) during World War II, risking his own life to save a handful of humans from a dreadful fate, and knowing that if he were caught he would not just be "killed" but utterly obliterated, that he would not go on to "the next adventure" but would simply end, period, was to me a greater hero than any messiah who believed he was headed for Heaven afterwards.

But John Perry... he actually believed he could be immortal. It's not a couple of decades of life he risked, but eternity. That... I don't have a word to describe it. When I read it, I was speechless for a while.

And then I thought about myself and came to understand how it's possible. Before I heard about cryonics, I would have sheltered Jews if a new Hitler came. Or at least, I believe I would (you can never tell for sure until you've actually done it). Right now, while I'm still undecided about cryonics, I would still do it. Even if I sign up with a cryonics institute one day, I would still do it. Somehow, it all adds up to normality. Signing up for cryonics shouldn't make you less altruistic, or it wouldn't be worth it. To put it in HP:MoR terms: I don't want to lose my ability to cast the Patronus Charm by signing up for cryonics.

If I were actually willing to risk all of my remaining observer-moments in order to (e.g.) shelter Jews from Hitler, and my actual willingness to do that were not noticeably affected by (e.g.) signing up for cryonics, I would probably conclude that I don't actually believe that signing up for cryonics significantly increases my expected number of observer-moments, but rather was experiencing a "belief in belief" in immortality through cryonics.

Hmm, no, it means that I don't use raw consequentialism/utilitarianism as my ethical framework. I consider them theoretically valid, but not directly usable by humans who are unable to foresee all the consequences of their acts, and who have so many biases.

So while I can use consequentialism to reason about meta-ethics, and even to amend my ethical rules, I don't use it to make ethical decisions. I don't trust myself to do that. And in my current ethical framework, sheltering Jews from Hitler is something that should be done, regardless of the risk to myself.

(nods) There's a reason I didn't say anything about what you would do, but rather about what I would.

That said, I'm curious... can you clarify what you mean by "amend my ethical rules" here?

For example... so, OK, at the moment your ethical rules include the rule that "sheltering Jews from Hitler is something that should be done regardless of the risk [to you]". Let's assume for simplicity that they include no rules that conflict with that one. It follows that, given the choice to shelter a Jew or not, you shelter... that's straightforward; no evaluation of consequences is necessary.

Now, suppose you come to believe that your shelter has been compromised and is under Nazi observation, such that any Jew you shelter will be killed. It seems to follow straightforwardly that you still shelter the Jew, because actually prolonging the Jew's life is irrelevant... that's a mere consequence.

But your reference to amending your ethical rules suggests that it might not be that simple. Might you, in this hypothetical example, instead "use consequentialism to [...] amend [your] ethical rules" so that they no longer motivate you to shelter the Jew in situations where doing so leads to the Jew's death?

Well, now we're starting to play with words... according to TheFreeDictionary, "shelter" means "a. Something that provides cover or protection, as from the weather." If Nazis are observing my home, then it's no longer a "shelter" but a "trap", in the sense I was giving to "shelter".

But the ethical rule is not actually "shelter Jews from Hitler"; it's more like "protect, if you can, people who are threatened with something horrible that they did nothing to deserve", or something like that. It's not even explicit in words; I'm not trying to write a legal contract with myself.

And of course I'll need to evaluate some consequences of my acts in order to choose what to do. I'm not saying I don't use consequentialism at all, just that I don't use it "raw". I won't re-evaluate the consequences of sheltering Jews from Hitler (or anyone persecuted by a hateful dictatorship), in terms of risk to myself and benefit to them, when faced with the choice. Partly because I know it would then be easy to rationalize a reason not to take the risks ("but if I don't, I'll be able to save more later on" or whatever).

What I was referring to by "amend my ethical rules" is that I do theoretical reasoning about what my ethical rules should be, and in doing so I can change them. But I refuse to do that under the actual pressure of an actual dilemma, because I don't trust myself to perform well enough under pressure, I know rationalizing is easy, and I know that even just doing maths under pressure leads to a higher error rate.

This somehow reminds me of the article of the French Constitution saying that the Constitution cannot be amended during a war: a clear reference to WW2 and the way Pétain changed the Constitution, but I found it very interesting as a more general guideline: don't change your most fundamental rules while under heavy pressure.

But we've drifted a lot from the initial topic; sorry for the noise ;)

I certainly agree that, given the choice, I'd rather have the opportunity to think carefully about what I ought to do in a situation. But while I'm aware that performing analysis under pressure is error-prone, I'm also aware that applying rules derived from one situation to a different situation without analysis is error-prone. In the real world, it's sometimes a choice among imperfect options.

Partly because I know it would then be easy to rationalize... But I refuse to do that under the actual pressure of an actual dilemma, because I don't trust myself to perform well enough under pressure

My glasses distort light. A person with perfect vision wouldn't wear them. They are calibrated to counterbalance my deficiencies.

What is the ideal moral system that someone who didn't rationalize would use?

You could genuinely believe it, and be willing to make predictions or even bets based on your belief, but that doesn't mean you've internalized it.

I agree that there's a distinction here, though it strikes me as one of degree rather than kind.

I would say the same thing about the two dragon-claiming, garage-owning neighbors... in both cases, their minds fail to associate representations of the dragon in their garage with various other representations that would constrain their behavior in various ways. Whether we call that "belief in belief" or "failure to internalize" or "not thinking it through" or "being confused" or "not noticing the implications" or "failing to be a tactical genius" or "being really stupid" depends on a lot of different things.

That said, I don't think the labeling question per se matters much; I'm happy to adopt your preferred labels if it facilitates communication.

If I'm understanding your comment correctly, you're suggesting that the threshold between "belief in belief" and "failure to internalize" in this case has to do with the willingness to make predictions/bets -- e.g., if I'm willing to give someone a large sum of money in exchange for a reliable commitment to give me a much, much larger sum of money after I am restored from cryonic suspension, then we say I have a "genuine belief" in cryonics and not a mere "belief in belief", although I might still fail to have an "internalized belief"... is that right?

If so, then sure, I agree... in the situation I describe, I might have a genuine but non-internalized belief in cryonics.

I would argue that the only difference between "belief in belief" and "failure to internalize" is whether the belief in question corresponds to external reality. The state of the brain is exactly the same in both situations.

What does external reality have to do with it? Can I not have belief in belief in a proposition that happens to describe reality?

If I'm understanding your comment correctly, you're suggesting that the threshold between "belief in belief" and "failure to internalize" in this case has to do with the willingness to make predictions/bets -- e.g., if I'm willing to give someone a large sum of money in exchange for a reliable commitment to give me a much, much larger sum of money after I am restored from cryonic suspension, then we say I have a "genuine belief" in cryonics and not a mere "belief in belief", although I might still fail to have an "internalized belief"... is that right?

Sounds right.

Cool.

Having clarified that: can you say more about why the distinction (between belief in belief in cryonics, and genuine but non-internalized belief in cryonics) is important in this context? That is... why do you bring it up?

Well, if someone had belief in belief in cryonics, they might say that cryonics would preserve people and allow them to be brought back in the future, but every time they have to make a falsifiable prediction based on it, they'll find an excuse to avoid backing that prediction. If they're willing to make falsifiable predictions based on their belief, but go on behaving the same as if they believed they only had an expectation of living several decades, they probably only have a far-mode belief in cryonics.

It takes different input to bring a far-mode belief into near mode than to convince someone to actually believe something they formerly only had belief in belief in.

It seems like the major difference here is compartmentalization. Someone who only takes a belief into account when it is explicitly called to their attention has a belief that is not internalized.