At the last meetup of our local group, we tried to do Eliezer's homework problem on free will. This post summarizes what we came up with.
Debates on free will often hinge on questions like "Could I have eaten something different for breakfast today?". We focused on the subproblem of finding an algorithm that answers "Yes" to that question, and which would therefore, if implemented in the human brain, power the intuitions for one side of the free will debate. We came up with an algorithm that seemed reasonable, but we are much less sure how closely it resembles the way humans actually work.
The algorithm is supposed to answer questions of the form "Could X have happened?" for any counterfactual event X. It does this by searching for possible histories of events that branch off from the actual world at some point and end with X happening. Here, "possible" means that the counterfactual history doesn't violate any knowledge you have that is not derived from the fact that this history didn't actually happen. To us, this seemed like an intuitive algorithm for answering such questions, and at least related to what we actually did when we tried to answer them, but we didn't justify it beyond that.
The second important ingredient is that the exact decision procedure you use is unknown to the part of you that can reason about yourself. Of course you know which decisions you made in which situations in the past. But other than that, you don't have a reliable way to predict the output of your decision procedure for any given situation.
Faced with the question "Could you have eaten something different for breakfast today?", the algorithm now easily finds a possible history with that outcome. After all, the (unknown) decision procedure outputting a different decision is consistent with everything you know, except for the fact that it did not in fact do so, which is ignored when judging whether a counterfactual "could have happened".
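The search described above can be sketched as a toy program. Everything here is a hypothetical illustration rather than something from our discussion: knowledge is modeled as a set of predicates over candidate histories, and, crucially, no predicate encodes the actual decision, mirroring the idea that the decision procedure is opaque to introspection.

```python
# Toy model: a history is a tuple of events, one per time step.
# Knowledge is a list of predicates over histories. The agent's own
# decision output is deliberately NOT among the constraints, modeling
# the claim that the decision procedure is unknown to the part of you
# that reasons about yourself.

BREAKFASTS = ["toast", "cereal", "strawberries"]

def could_have_happened(x, constraints, options=BREAKFASTS):
    """Return True iff some possible history ends with event x.

    A history counts as 'possible' if it satisfies every piece of
    knowledge in `constraints`. Knowledge derived purely from the fact
    that the history didn't happen (e.g. "I actually ate toast") must
    not appear in `constraints`.
    """
    for choice in options:       # branch point: the breakfast decision
        history = (choice,)      # a one-step counterfactual history
        if history[-1] == x and all(c(history) for c in constraints):
            return True
    return False

# Knowledge that is independent of the actual decision:
constraints = [
    lambda h: h[-1] in BREAKFASTS,   # only genuinely available options
    # Note what is absent: a constraint like h[-1] == "toast" would
    # encode "I actually ate toast", which the algorithm ignores.
]

print(could_have_happened("cereal", constraints))    # True: a possible branch exists
print(could_have_happened("pancakes", constraints))  # False: never an option
```

The interesting design choice is the asymmetry: adding the fact of the actual decision as a constraint would make every counterfactual impossible, so the "Yes" answers depend entirely on leaving the decision procedure out of the knowledge base.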
Questions we haven't (yet) talked about:
- Does this algorithm for answering questions about counterfactuals give intuitive results when applied to examples (we tried only a few)? If not, it can't be the one used by humans, since if it were, it would be generating those intuitions
- What about cases where you can be pretty sure you wouldn't choose some action without knowledge of the exact decision procedure? (e.g. "Could you have burned all that money instead of spending it?")
- You can use your inner simulator to imagine yourself in some situation and predict which action you would choose. How does that relate to being uncertain about your decision procedure?
So even though I think our proposed solution contains some elements that are helpful for dissolving questions about free will, it's not complete and we might discuss it again at some point.
I think when people say “Could I have done X?”, we can usually interpret it as if they said “Could I have done X had I wanted to?”
but that just kicks the can down the road, leaving the question: "Could I have wanted X?"
It does, but that isn't a decisive refutation of anything.
Eh. The next question to ask is going to depend entirely on context. I feel like most of the time people use it in practice, they're talking about the extent of their capabilities, where whether you were able to want something is irrelevant. There are other cases, though.
It's not a fact that the human brain runs on "an algorithm", and assuming it does sneaks in the assumption that it is operating deterministically, which in turn begs a huge question about free will.
I understand that this is about the algorithm, but free will is about choosing between good and evil, about the "why".
Why could I have had something else for breakfast today? Because those were the last strawberries, and my wife really loves strawberries, so I could have left them for her. Without a theme like that, the choice is not really important from the free will perspective, though it might be predictable from recorded habits.