Same person as nostalgebraist but I didn't have an email attached to my old account so I had to make a new one temporarily.



IMO, the anthropic principle boils down to "notice when you are trying to compute a probability conditional on your own existence, and act accordingly."

A really simple example, where the mistake is obvious, is "isn't it amazing that we live on a planet that's just the right distance from its star (etc.) to support life?" No, this can't be amazing. The question presupposes a "we" who live on some planet, so we're looking for something like P(we live on a habitable planet | we are inhabiting a planet), which is, well, pretty high. The fact that there are many uninhabitable planets doesn't make that number any smaller. When I hear the phrase "anthropic principle," this is the kind of prototype case I think of.
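The point can be made concrete with a toy Monte Carlo sketch (all numbers are illustrative, not empirical): make habitable planets as rare as you like, and the conditional probability of an observer finding themselves on one is unaffected, because observers only arise where there are observers.

```python
import random

random.seed(0)

N_PLANETS = 100_000
P_HABITABLE = 0.001  # make habitable planets very rare on purpose

planets = ["habitable" if random.random() < P_HABITABLE else "barren"
           for _ in range(N_PLANETS)]

# Observers arise only on habitable planets, so sampling "a planet
# someone is inhabiting" means sampling from the habitable ones.
inhabited = [p for p in planets if p == "habitable"]

# P(we live on a habitable planet | we are inhabiting a planet):
# equal to 1 here, no matter how small P_HABITABLE is made.
frac = sum(p == "habitable" for p in inhabited) / len(inhabited)
print(frac)  # 1.0
```

Shrinking `P_HABITABLE` by another factor of a thousand changes how many inhabited planets the simulation produces, but not the conditional probability, which is the whole point.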

I don't think it's correct to phrase this as "you're not allowed to use the fact that you exist as evidence." Your own existence can be used as evidence for many propositions, like "habitable planets exist" or (more weakly) "being named [your first name] is not a crime punishable by death in your country." The point is, neither of those is conditioned on your existence. (For a case like the latter, imagine your first name is punishable by death almost everywhere, and you find yourself marveling at the fact that you were, "of all places," born in the one tiny country where it isn't.)

The cosmic fine-tuning argument is a whole other can of worms, because we may not actually have any sensible choice of probability measure for those supposedly "fine-tuned" constants. (This was my knee-jerk reaction upon first hearing about the issue, and I stick by it; I vaguely remember reading something once that made me question this, but I can't remember what it was.) That is, when we talk about "if the constants were a little bit different..." we are using intuitions from the real world, in which physical quantities we observe are pushed and pulled by a complicated web of causes and usually cannot be counted on to stay within a very tiny range (relative to their magnitude). But if the universe is just what it is, full stop, then there is no "complicated web of causes," so this intuition is misapplied.

As a purely philosophical issue, this is muddled by the way that fundamental physicists prefer to simplify things as far as possible. There is a legitimate complaint made by physicists that the many arbitrary parameters are "ugly," and a corresponding desire that they be reduced to something with fewer degrees of freedom, as the periodic table did for elements and the quark model did for hadrons. A desire for fewer degrees of freedom is not exactly the same thing as a desire for less fine-tuning, but the desires are psychologically related and thus easy for people to conflate -- both desires would be satisfied by some final theory that feels sufficiently "natural," a few clean elegant equations with no jagged funny bits sticking off of the ends.

This all seems like exploiting ambiguity about what your conditional probabilities are conditional on.

Conditional on "you will be around a supercritical ball of enriched uranium and alive to talk about it," things get weird, because that's such a low-probability event to begin with. I suspect I'd still favor theories that involve some kind of unknown/unspecified physical intervention, rather than "the neutrons all happened to miss," but we should notice that we're conditioning on a very low probability event and things will get weird.

Conditional on "someone telling me I'm around a supercritical ball of enriched uranium and alive to talk about it," they're probably lying or otherwise trolling me.

Conditional on "I live in a universe governed by the standard model and I'm alive to talk about it," the constants are probably tuned to support life.

Conditional on "the Cold War happened, lasted for a number of decades, and I'm alive to talk about it," humanity was probably (certainly?) not wiped out.

Once you think about it this way, any counterintuitive implications for prediction go away. For instance, we don't get to say nuclear cold wars aren't existentially dangerous just because they aren't dangerous conditional on humanity surviving them -- that's conditioning on the event whose probability we're trying to calculate! But we also can't discount "we survived the cold war" as (some sort of) evidence that cold wars might be less dangerous than we thought. For prediction (and evaluation of retro-dictions), the right event to condition on is "having a cold war" (but not necessarily surviving it).
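The distinction between the two conditionings can be sketched with another toy simulation (the survival probability and the two hypotheses below are made-up numbers, chosen only to illustrate the structure): conditioning on survival gives a useless answer of 1.0, conditioning on merely having a cold war recovers the true survival rate, and a single observed survival is still legitimate (if weak) Bayesian evidence about how dangerous cold wars are.

```python
import random

random.seed(0)

P_SURVIVE = 0.7      # the "true" chance of surviving a cold war, in this toy model
N_WORLDS = 100_000   # every simulated world has a cold war

survived = [random.random() < P_SURVIVE for _ in range(N_WORLDS)]

# Conditioning on "had a cold war": recovers the true survival rate.
p_given_cold_war = sum(survived) / N_WORLDS

# Conditioning on "had a cold war AND survived it": trivially 1.0 --
# this is the event whose probability we were trying to estimate.
survivors = [s for s in survived if s]
p_given_survival = sum(survivors) / len(survivors)

print(round(p_given_cold_war, 2))  # close to 0.7
print(p_given_survival)            # 1.0

# Meanwhile, observing one survival is still evidence: with a 50/50
# prior over "dangerous" (P(survive)=0.3) and "safe" (P(survive)=0.9),
# Bayes' rule shifts us toward "safe".
p_s_dangerous, p_s_safe = 0.3, 0.9
posterior_safe = 0.5 * p_s_safe / (0.5 * p_s_safe + 0.5 * p_s_dangerous)
print(round(posterior_safe, 2))  # 0.75
```

Note the asymmetry: the survivorship-conditioned estimate is degenerate no matter what the true danger is, while the Bayesian update moves smoothly with the likelihoods, which is why "we survived" counts as evidence without licensing the conclusion that survival was guaranteed.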