Seth Lloyd has posted a well-written preprint proposing a self-administered Turing test for free will, while also addressing some other aspects of the free will debate. Some excerpts:

... the theory of computation implies that even when our decisions arise from a completely deterministic decision-making process, the outcomes of that process can be intrinsically unpredictable, even to – especially to – ourselves. I argue that this intrinsic computational unpredictability of the decision-making process is what gives rise to our impression that we possess free will.

It is important to note that satisfying the criteria for assigning oneself free will does not imply that one possesses consciousness. Having the capacity for self-reference is a far cry from full self-consciousness... An entity that possesses free will need not be conscious in any human sense of the word.

This paper investigated the role of physical law in problems of free will. I reviewed the argument that the mere introduction of probabilistic behavior through, e.g., quantum mechanics, does not resolve the debate between compatibilists and incompatibilists. By contrast, ideas from computer science such as uncomputability and computational complexity do cast light on a central feature of free will – the inability of deciders to predict their decisions before they have gone through the decision-making process. I sketched proofs of the following results. The halting problem implies that we cannot even predict in general whether we will arrive at a decision, let alone what the decision will be. If she is part of the universe, Laplace's demon must fail to predict her own actions. The computational complexity analogue of the halting problem shows that to simulate the decision-making process is strictly harder than simply making the decision. If one is a compatibilist, one can regard these results as justifying a central feature of free will. If one is an incompatibilist, one can take them to explain free will's central illusion that our decisions are not determined beforehand. In either case, it is more efficient to be oneself than to simulate oneself.
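The sketched proofs rest on a familiar diagonal construction. Here is a toy Python illustration of my own (not code from Lloyd's paper; the names `contrarian_decider` and `naive_predictor` are made up) showing why no predictor that a decider can consult about its own choice can always be right:

```python
def contrarian_decider(predictor):
    """A decider that asks the predictor for its forecast, then chooses the opposite."""
    forecast = predictor(contrarian_decider)  # what does the predictor say I will choose?
    return "tea" if forecast == "coffee" else "coffee"

def naive_predictor(decider):
    """A stand-in for Laplace's demon: here it simply always forecasts 'coffee'."""
    return "coffee"

# Whatever the predictor forecasts, the decider's actual choice differs:
print(naive_predictor(contrarian_decider))   # forecast: 'coffee'
print(contrarian_decider(naive_predictor))   # actual choice: 'tea'
```

A predictor that instead tried to get it right by faithfully simulating `contrarian_decider` would end up calling itself on itself and never halt, which is the halting-problem flavor of the same obstacle.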

The "Turing" test itself consists of the following questions:

Q1: Am I a decider?

Q2: Do I make my decisions using recursive reasoning?

Q3: Can I model and simulate – at least partially – my own behavior and that of other deciders?

Q4: Can I predict my own decisions beforehand?

If you answer Yes, Yes, Yes and No, then you are likely to believe you have free will.
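For concreteness, the test reduces to checking the answer pattern Yes, Yes, Yes, No. The one-line predicate below is my own paraphrase of the four questions, not code from the paper, and the function name `lloyd_self_test` is invented for illustration:

```python
def lloyd_self_test(is_decider, uses_recursive_reasoning,
                    can_model_self_and_others, can_predict_own_decisions):
    """Returns True only for the answer pattern Yes, Yes, Yes, No."""
    return (is_decider and uses_recursive_reasoning
            and can_model_self_and_others and not can_predict_own_decisions)

print(lloyd_self_test(True, True, True, False))  # -> True: likely to believe in free will
```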


Comments

What does question four even mean?

I can predict my own decisions, easily, but that's just running the decision-making algorithm twice. I don't think I understand what you're going for here.

I don't know what 'free will' is anymore. Free from what? To do what?

If ... ... then you are likely to believe you have free will.

Well, that is a bit underwhelming, eh? A self-administered test that tells you what you are likely to believe?

Can I predict my own decisions beforehand?

If you give me the problem, I can generally tell you what I'd decide to do, but I basically am making the decision, so it's not technically beforehand. Does that count as yes or no?

For reference: http://lesswrong.com/lw/rc/the_ultimate_source/

"This sensation of freedom occurs when I believe that I can carry out, without interference, each of multiple actions, such that I do not yet know which of them I will take, but I am in the process of judging their consequences according to my emotions and morals."

With the disclaimer

Usually I don't talk about "free will" at all, of course! That would be asking for trouble.

I'd need a definition of "recursive reasoning" and "beforehand".

I'd have just interpreted "recursive reasoning" to mean "taking into account the predicted results of this decision and predicted future decisions", but by that metric any good chess-playing algorithm can pass.

I'd have interpreted "beforehand" to mean "before the sensory data making this decision necessary are experienced", but by that metric I can usually say "yes"; you need a stronger interpretation like "before the sensory data making this decision necessary are all known" before the problem becomes intractable. Yes, I know there are calculations whose results can only be predicted by just running the calculation, but that's (at least metaphorically) what I do: just run the calculation ahead of time.

I'm not sure if this makes me (a compatibilist) a counter-example or not. You did say "YYYN"=>"belief in free will", not the converse.

I think I'm immediately a counterexample, and I would expect most (non-compatibilist) determinists to feel likewise.

... I argue that this intrinsic computational unpredictability of the decision-making process is what gives rise to our impression that we possess free will.

You should look into David Wolpert (my second such recommendation in a week). He has a paper saying that you can't build a computer in this universe that can successfully simulate the universe. I'd think that would generalize to an individual. Sorry that I can't provide a better reference for the paper.


... the theory of computation implies that even when our decisions arise from a completely deterministic decision-making process, the outcomes of that process can be intrinsically unpredictable, even to – especially to – ourselves. I argue that this intrinsic computational unpredictability of the decision-making process is what gives rise to our impression that we possess free will.

This is somewhat similar to The Ultimate Source.