If you ever find yourself saying, "Even if Hypothesis *H* is true, it doesn't have any decision-relevant implications," *you are rationalizing!* The fact that *H* is interesting enough for you to be considering the question at all (it's not some arbitrary trivium like the 1,923rd binary digit of π, or the low temperature in São Paulo on September 17, 1978) means that it must have some relevance to the things you care about. It is *vanishingly improbable* that your optimal decisions are going to be the *same* in worlds where *H* is true and worlds where *H* is false. The fact that you're tempted to *say* they're the same is probably because some part of you is afraid of some of the imagined consequences of *H* being true.

But *H* is already true or already false! If you happen to live in a world where *H* is true, and you make decisions as if you lived in a world where *H* is false, you are thereby missing out on all the extra utility you would get if you made the *H*-optimal decisions instead! If you can figure out exactly what you're afraid of, maybe that will help you work out what the *H*-optimal decisions are. Then you'll be in a better position to successfully notice which world you *actually* live in.
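The utility argument above can be made concrete with a toy payoff table. All the numbers here are made up for illustration; the point is just that whenever the two policies' payoffs differ in the *H*-true world, acting on the *H*-false-optimal policy there leaves utility on the table.

```python
p_h = 0.3  # your credence that H is true (illustrative, not prescribed)

# Hypothetical payoffs: utility[policy][world]
utility = {
    "act_as_if_H_true":  {"H_true": 10, "H_false": 2},
    "act_as_if_H_false": {"H_true": 4,  "H_false": 8},
}

def expected_utility(policy):
    """Expected payoff of a policy under your current credence in H."""
    return (p_h * utility[policy]["H_true"]
            + (1 - p_h) * utility[policy]["H_false"])

# Utility forgone by acting as if H were false, in a world where H is true:
regret_if_h_true = (utility["act_as_if_H_true"]["H_true"]
                    - utility["act_as_if_H_false"]["H_true"])
print(regret_if_h_true)  # 6 units of utility missed
```

With these (invented) numbers, mistaking which world you live in costs 6 units of utility; the decisions only come out the *same* in the knife-edge case where the *H*-true column happens to be identical for both policies.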