Epistemic Status: Quite confident (80%?) that the framework is very useful for the subject of free will. Pretty confident (66%?) that the framework is useful for meta-ethics. Hopeful (33%) that I am using it to bring out directionally true statements about what my CEV would be in worlds where we have yet to find objective value.
Most discussions about free will and meaning seem to miss what I understand to be the point. Rather than endlessly debating the metaphysics, we should focus on the decision-theoretic implications of our uncertainty. Here's how I[1] think we can do that, using an abstracted Pascal's Wager.
Free Will: A Pointless Debate
People argue endlessly about whether we have Free Will, bringing up quantum mechanics, determinism, compatibilism, blah, blah (blah). But, regardless of whether we have it or not:
In worlds where we have no free will:
* Our beliefs about free will don't matter (we'll do whatever we were determined to do)
* Our beliefs about what our actions should be don't matter and can't be changed (they were predetermined)
In worlds where we have free will:
* Our beliefs about free will will affect what we will do (and what we will will change what will happen) (possibly to a Will) (unless our will won't work)
* Our choices compound through causality, affecting countless other conscious creatures
Therefore, if we have free will, believing in it and acting accordingly is incredibly valuable. If we don't have free will, nothing we (choose to) believe matters anyway. The expected value clearly points towards acting as if we have free will, even if we assign it a very low probability (I don't think too much about what numbers should be here[2] but estimate it at 5-20%).
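The wager above can be sketched as a toy expected-value calculation. The payoff numbers below are illustrative assumptions of mine, not claims from the argument itself; the only structural point is that the no-free-will branch contributes zero either way.

```python
# Toy sketch of the free-will wager as an expected-value calculation.
# All payoff numbers are illustrative assumptions, not part of the argument.

def expected_value(p_free_will, value_if_free_will, value_if_determined=0.0):
    """EV of acting as if we have free will.

    If determinism holds, our beliefs change nothing, so that branch
    contributes (roughly) zero regardless of what we believe.
    """
    return p_free_will * value_if_free_will + (1 - p_free_will) * value_if_determined

# Even at the low end of the 5-20% range, acting as if we have free will
# dominates, since the alternative yields 0 in both worlds.
ev_act = expected_value(p_free_will=0.05, value_if_free_will=100.0)
ev_dont = 0.0  # not acting on free will gains nothing in either world
print(ev_act > ev_dont)  # True under these assumed numbers
```

The asymmetry, not the specific probabilities, is doing the work: any positive probability times any positive payoff beats a guaranteed zero.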
Meaning and All that is Valuable
I've found I am unfortunately sympathetic to some nihilistic arguments:
Whether through personal passing, civilizational collapse, or the heat death of the universe, all information about our subjective experiences, be