I find "free will" to be an anti-useful concept. You can remove it from your vocabulary and you'll never miss it. "Free will", besides being confusing with its varying definitions and historical/religious baggage, pushes us to ask the wrong questions and focus on the wrong things. When someone else uses the concept in conversation/dialogue, ask what they mean by "free will" or why free will matters in the context.
nice. i don't think it's quite enough, though.
we could set up a pid controller to tune itself until it's able to balance an inverted pendulum. this seems to meet your definition. are you comfortable granting such a system 'volition'?
In my essay, I was using volition mostly as just a synonym for choice-making. So, it makes choices on which direction to push the pendulum. But maybe you are asking whether the PID controller "owns" the choices it makes? I would say it owns them more than a PID controller tuned by a human owns its choices, but less than a human owns his/her own choices. The human, after all, can modify how he/she goes about learning in the first place, while the PID controller you have described cannot modify its tuning algorithm.
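To make the scenario concrete, here is a minimal sketch of that kind of self-tuning loop. It is not the commenters' actual system: the plant is a toy double integrator rather than a real inverted pendulum, and the tuner is simple coordinate descent ("twiddle"), but the structure is the same, a controller adjusting its own gains from feedback without a human in the loop.

```python
# Toy illustration (assumed setup, not the commenters' actual system):
# a PID controller tunes its own gains by coordinate descent, judged by
# how well it drives a simple linear plant to a setpoint.

def pid_cost(gains, steps=200, dt=0.05):
    """Run the PID on a toy double-integrator plant; return total squared error."""
    kp, ki, kd = gains
    pos, vel = 1.0, 0.0              # start displaced from the setpoint (0)
    integral, prev_err, cost = 0.0, 1.0, 0.0
    for _ in range(steps):
        err = 0.0 - pos
        integral += err * dt
        deriv = (err - prev_err) / dt
        force = kp * err + ki * integral + kd * deriv
        prev_err = err
        vel += force * dt            # toy dynamics: force directly accelerates
        pos += vel * dt
        cost += err * err
    return cost

def twiddle(gains, deltas, iters=50):
    """Self-tuning: nudge each gain, keep only changes that lower the cost."""
    best = pid_cost(gains)
    for _ in range(iters):
        for i in range(3):
            gains[i] += deltas[i]
            c = pid_cost(gains)
            if c < best:
                best = c
                deltas[i] *= 1.1     # that direction worked; search further
            else:
                gains[i] -= 2 * deltas[i]
                c = pid_cost(gains)
                if c < best:
                    best = c
                    deltas[i] *= 1.1
                else:
                    gains[i] += deltas[i]   # revert; shrink the search step
                    deltas[i] *= 0.9
    return gains, best

gains, cost = twiddle([0.0, 0.0, 0.0], [0.5, 0.1, 0.5])
```

Starting from zero gains, the loop keeps only gain changes that lower the tracking error, so its final "choices" of how to push are shaped entirely by its own trial-and-error history.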
I don't think calling it a "choice function" really changes the mystery. Is it deterministic (based on brain configuration), or is there some non-physical force that's making it "not deterministic, but not random"?
Personally, I think it's mostly an illusion - it's similar to the temperature setting in LLMs. It's some amount of unpredictability, which may not be true randomness, but which is opaque to any observer due to the complexity of the underlying neurological (or electronic) processes. And there are lots of somewhat-more-introspectable structures which can constrain or influence the behaviors, and which try to explain them as "choices".
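The temperature analogy can be made concrete. Below is a sketch of softmax sampling of the kind used when decoding from an LLM, with illustrative logits: at low temperature the same option is picked almost every time, while at high temperature the draws spread out and look "unpredictable", even though the mechanism is entirely plain.

```python
import math
import random

def sample_with_temperature(logits, temperature, rng):
    """Softmax sampling over scores: temperature rescales how peaked
    the distribution is before one option is drawn."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)                          # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    r = rng.random() * sum(exps)
    acc = 0.0
    for i, e in enumerate(exps):
        acc += e
        if r < acc:
            return i
    return len(exps) - 1

rng = random.Random(0)
logits = [2.0, 1.0, 0.1]                     # illustrative scores for 3 options
cold = [sample_with_temperature(logits, 0.1, rng) for _ in range(100)]
hot = [sample_with_temperature(logits, 10.0, rng) for _ in range(100)]
```

With a seeded PRNG the whole process is deterministic, yet the high-temperature draws are opaque to anyone who cannot see the seed, which is the sense of "unpredictability" in the analogy.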
I didn't want to include this in the main post, because I wanted to keep it concise and on-topic, but I don't think determinism is relevant. If you were given the options of receiving a million dollars or of receiving death, you would damn well try to make your choice as deterministic as possible. That doesn't stop it from being a choice you make. Likewise, a computer can be given a source of entropy and run a random algorithm; computers need not be deterministic. Stochasticity doesn't magically give you any more "real" choices than you had before.
I think people get the idea into their head about stochasticity being necessary for choosing because they consider a real choice to require the possibility to choose otherwise. So, they imagine a choice function like a brain split in two, sometimes choosing one way and sometimes choosing another:
However, I think this is the wrong model to have for the possibility of choosing otherwise. Instead, you should imagine that a different choice function in your place might choose a different action:
This solves the problem of the capacity to choose otherwise without requiring stochasticity. That a choice is made is just a way of pointing out that there is some choice function, a chooser, and that a different chooser in its place would result in a different action. That it is one chooser and not another determining the action is the whole point of saying that that chooser, and not the other, made the choice. In how I view choosing, choices are still made even if the universe is deterministic; it's just that the choosers are determined beforehand in what places they will be. But that doesn't make the concept of a chooser or a choice useless, any more than abstracting a clump of particles as a rock is useless. We abstract clumps of particles as rocks because we can model rocks more simply than modelling all the particles one by one. We abstract choosers because there are choosers: entities, such as humans, that take actions based on information they gather.
On the other hand, the question of whether someone is to be held responsible for their choices is a social problem.
So how is "choice function" different from "free will" in any significant externally-visible way? Both of them take information and brain state as inputs and an action as output. The concept of both includes counterfactual "path not taken" as meaningfully possible.
What's the actual distinction that makes it a "choice function" rather than "free will"?
I think I would need you to explain what you mean by free will for me to be able to answer that.
Free will is the thing that makes choices, among different "possible" actions. Or at least the thing that feels like it makes choices, as far as the chooser can tell.
There's a lot to unpack there in how and whether your brain makes choices, or if it just does what it's configured to do and that feels like a choice to the qualia-experiencer. Whether you call it "will" or "choice function", it's a mystery how a physical process (your brain) can have nondeterministic outputs.
it just does what it's configured to do
Yes. It does what it's configured to do, and nothing special beyond that. What it's configured to do is learning and choosing.
that feels like a choice to the qualia-experiencer
I think when people talk about the qualia of choice-making, they are talking about the experience of making a certain kind of high-level choice, which involves consciously gathering information, such as by looking around or dredging up memories, and then committing to an action. But notice that this involves several lower level choices in the process: What information do you pay attention to? Which memories do you recollect? What self-modifications does committing to an action involve? These lower-level choices may lack qualia, because you are not consciously aware you are making those choices. They could be just instinctual or trained like muscle memory.
I imagine the qualia of such a high-level choice to be similar to the qualia of running, a combination of smaller experiences of actions like legs extending, arms pumping, lungs breathing, muscles aching, and so on. But other people may focus on just the experience of committing to a course of action, and call that the point at which a choice is made. However, in my opinion, the qualia you get there is actually the qualia of carrying out an action, the action of self-modifying to keep a commitment, not the qualia of making the low-level choice itself for that commitment.
The amazing thing about brains is that we can break up high level decisions down into smaller pieces which involve decisions themselves, and this process doesn't really seem to have a limit for how high-level the decisions can go. How that works is a question for psychology, neuroscience, and AI.
it's a mystery how a physical process (your brain) can have nondeterministic outputs.
I can program a computer to use a random algorithm. There is nothing mysterious about nondeterministic outputs. It is, though, an interesting question whether true randomness exists, or whether everything that appears random is just chaos.
Yes. It does what it's configured to do, and nothing special beyond that. What it's configured to do is learning and choosing.
No, I mean "each choice is not a choice, it's just following the configuration". "learning" is across time, and is about changes in configuration. But at each choice-point, there is no actual choice.
I can program a computer to use a random algorithm
Well, no. You can program it to use pseudorandom data in an algorithm, or even "hardware-random", which isn't necessarily random, just unpredictable by humans.
No, I mean "each choice is not a choice, it's just following the configuration". "learning" is across time, and is about changes in configuration. But at each choice-point, there is no actual choice.
That depends on what you mean by "actual choice". From a mechanistic definition of choice, there is a choice: An action was output based on input information. I don't know what a sensible definition of a choice looks like other than this. I also don't understand what keeps this from being an "actual" choice. Is it that you feel like it's not really you making the choice, if it's just your brain running an algorithm? But you are that algorithm running on that brain. It is you who is making the choice.
Well, no. You can program it to use pseudorandom data in an algorithm, or even "hardware-random", which isn't necessarily random, just unpredictable by humans. ―Dagon
Compare what you said with what I said:
It is, though, an interesting question whether true randomness exists, or whether everything that appears random is just chaos. ―joseph_c
I am not claiming that true randomness necessarily exists, just that I can program a computer to use a random algorithm, so nondeterminism isn't a mysterious question: Just supply true randomness to a random algorithm.
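The point that a program's determinism depends only on its entropy source can be shown directly. In this sketch (names are illustrative), one and the same randomized algorithm runs once with a seeded PRNG, making every execution identical, and once with OS-supplied entropy via `secrets.SystemRandom`, making its outputs unpredictable to the programmer.

```python
import random
import secrets

def randomized_pick(options, rng):
    """One and the same randomized algorithm; whether a run is
    deterministic depends only on where rng gets its bits."""
    return rng.choice(options)

options = ["push_left", "push_right"]

# Seeded PRNG: every execution of the program yields the same sequence.
seeded = random.Random(42)
det_run = [randomized_pick(options, seeded) for _ in range(5)]

# OS entropy (which may mix in hardware noise): same algorithm,
# but no one can predict the sequence from the program text alone.
sysrng = secrets.SystemRandom()
nondet_run = [randomized_pick(options, sysrng) for _ in range(5)]
```

Whether the OS entropy is "truly" random or merely chaotic is exactly the open question noted above; the algorithm itself is indifferent to the answer.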
An action was output based on input information
What makes it a choice in most conversations is the idea that it COULD HAVE output a different action. It's a choice among possibilities, not just a single-output function.
If you're asserting that there is no choice (in the usual sense), then I think I understand and agree, but that wasn't obvious from my reading.
Suppose you have a neural network which classifies handwritten digits. It has 10 outputs for 10 logits, and its "choice" is the highest logit it outputs, given an input image. This is what I mean by a choice.
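A minimal sketch of such a choice function, with random placeholder weights standing in for a trained network (so the "choice" is arbitrary, but perfectly deterministic): ten scores in, one argmax out.

```python
import random

def choice_function(image, weights, biases):
    """Score each of the 10 possible outputs (digit labels) with a
    linear model and return the argmax -- a choice in the mechanistic sense."""
    logits = [
        sum(w * x for w, x in zip(weights[d], image)) + biases[d]
        for d in range(10)
    ]
    return max(range(10), key=lambda d: logits[d])

rng = random.Random(0)
image = [rng.random() for _ in range(64)]                      # stand-in 8x8 "image"
weights = [[rng.gauss(0.0, 1.0) for _ in range(64)] for _ in range(10)]
biases = [0.0] * 10
digit = choice_function(image, weights, biases)
```

Given the same image and the same weights, the same digit comes out every time; nothing about calling it a choice requires that it could have gone differently on that very run.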
I don't understand what it means to say an entity "COULD HAVE" output a different action than what they outputted, other than talking about either (1) a stochastic function, or (2) a claim that there exists a hypothetical other function which would have output something else. I think the second case is the right picture to have in your head, but you seem to have some other picture in your head. Would you mind explaining it?
There is a quantum interpretation of free will: every decision is a fork in the multiverse. The path is truly random, all possible paths exist in the multiverse.
It's easy enough to avoid the phrase "free will", but the concept is harder to avoid, not least because it's actually several concepts.
Compatibilist free will is the lowest bar to clear. Almost any mechanism of choice would amount to CFW. So it's not controversial apart from whether it's what we centrally mean by free will.
Libertarian free will involves an additional ingredient: leeway, or the ability to have done otherwise. The ability to have done otherwise doesn't seem possible in a physically determined universe, leading to the worry that free will is a supernaturalistic process where an immaterial soul overrides the physical causality in the brain. Supernatural libertarian free will is easily refuted by naturalism.
That leaves naturalistic libertarian free will as the controversial case. Naturalistic free will is a somewhat overlooked option. Science-minded types are inclined to class FW as a "religious concept", but it isn't, because it isn't a single concept. Both naturalistic and supernatural concepts come under the label "free will", and a naturalistic concept could coincide with what is intended to be merely an account of the mechanism of choice.
Philosophers don't have much to say about the nature of the capacity to choose, but then it's not what's controversial. What's controversial is the ability to have done otherwise --which is itself controversially linked to moral responsibility.
The ability to have done otherwise is easily possible in an undetermined universe, but these models have a series of worries about control and purposiveness.
Self-modification doesn't give you any CHDO (the ability to have done otherwise) at all -- it's quite compatible with determinism. In a deterministic universe, the progress of a self-modifying mechanism is as determined as anything else.
But the mechanism could have an indeterministic element, in which case it coincides with libertarian free will. The right sort of mechanism could even resolve the worries about control and purpose.
ETA:
I didn’t want to include this in the main post, because I wanted to keep it concise and on-topic, but I don’t think determinism is relevant. If you were given the options of receiving a million dollars or of receiving death, you would try damn well to make your choice as deterministic as possible.
That's one kind of case -- where you are making a decision for personal benefit, and it's very clear which way to go. There are also torn decisions, where you have desires in both directions, or your desires conflict with external morality, etc.
However, I think this is the wrong model to have for the possibility of choosing otherwise. Instead, you should imagine that a different choice function in your place might choose a different action:
But do you know that? Surely establishing how the capacity for choice actually works requires empirical investigation.
This solves the problem of capacity to choose otherwise without requiring stochasticity. That a choice is made is just a way of pointing out there is some choice function, a chooser, and that there exists a different chooser that would result in a different action
But that's only a logical CHDO. Under your model, an agent can only make one choice under the circumstances. The agent has no power to choose the world they are in. What is the point of such a CHDO? The point of compatibilist CHDO is that the agent is not compelled; the point of libertarian CHDO is that the agent can steer towards a future of their choosing.
That doesn’t stop it from being a choice you make
It could stop it from having certain characteristics beyond being a choice.
Undetermined choices are more momentous, because an open, non-inevitable future depends on them.
Determinism allows you to cause the future in a limited sense. Under determinism, events still need to be caused, and your (determined) actions can be part of the cause of a future state that is itself determined, that has probability 1.0. Determinism allows you to cause the future, but it doesn't allow you to control the future in any sense other than causing it (and the sense in which you are causing the future is just the sense in which any future state depends on causes in the past -- it is nothing special and nothing different from physical causation). It allows, in a purely theoretical sense, "if I had made choice b instead of choice a, then future B would have happened instead of future A"... but without the ability to have actually chosen b.
Under determinism, you are a link in a deterministic chain that leads to a future state, so without you, the state will not happen... not that you have any choice in the matter. You can't stop or change the future because you can't fail to make your choices, or make them differently. You can't do anything of your own, since everything about you and your choices was determined at the time of the Big Bang. Under determinism, you are nothing special... only the BB is special.
(This is still true under many worlds. Even though MWI implies that there is not a single inevitable future, it doesn't allow you to influence the future in a way that makes future A more likely than future B as a result of some choice you make now. Under MW determinism, the probabilities of A and B are what they are, and always were -- before you make a decision, after you make a decision, and before you were born. You can't choose between them, even in the sense of adjusting the probabilities.)
By contrast, libertarian free will does allow the future to depend on decisions which are not themselves determined. That means there are valid statements of the form "if I had made choice b instead of choice a, then future B would have happened instead of future A". And you actually could have made choice a or choice b... these are real possibilities, not merely conceptual or logical ones. That in turn means that the future is not inevitable, and can be shaped, not merely caused... a free agent can create or steer towards a variety of futures. For a free agent, doom does not have to be inevitable.
It's like the difference between a car and a train. The train goes somewhere, but it can't jump off the tracks.
In fact, determinists don't even need the conditionals. Under determinism, you can think of sets of pre-existing agents, which make different decisions or adopt different strategies deterministically, and you can make claims about what results they get, without any of them deciding anything or doing anything differently. An additional, non-redundant sense of control is what would have been required to answer the concern that libertarians actually have about what determinism robs them of.
The situation is rather analogous to simulationism: a simulated universe might seem just like a real universe... but it isn't real. And a deterministic universe might seem to contain decisions and actions... but they are not decisions and actions in the fullest senses of the terms, because they don't make a difference. So there is precedent for saying that two things can be different without being visibly different.
Almost everyone, including rationalists, implicitly believes they have the ability to control the future, to steer to better futures. In the case of rationalists, that is the motivation for AI safety and effective altruism.
Before I begin my response, I would like to point out that I really don't appreciate your commenting style. I spent a while working on my post, and you didn't even bother to engage with my first paragraph, where I say that capacity to choose otherwise is asking the wrong question. If you think it is the right question to be talking about, you should argue for that, not just assume it is something we can agree on.
When I talk about choosing, I am talking about modelling the world, not some magical and mysterious capacity that non-choosers lack. A chooser is defined simply to be an entity that takes in information and outputs actions. Choosers are a useful abstraction because such things exist: Many entities perform more optimal actions by using information they possess. But there is no need to hypothesize an elan volonté for how this all works.
Now, to talk a bit more about the content of your comment...
But do you know that? Surely establishing how the capacity for choice actually works requires empirical investigation.
You claim that there is some "capacity to choose", but whence comes this idea of a capacity to choose? I fear that you just have an intuition that "you make your own choices", and then you take this intuition too far, saying that no choice can be "real" if it has a causal or physical link to the rest of reality.
I say that choice-making is the process of converting information into actions. You are the one who says it requires some special capacity to choose. You are the one who claims there is anything to talk about there, at all.
Undetermined choices are more momentous... it is nothing special... you are nothing special... only the BB [Big Bang] is special...
Libertarian free will... [not] merely caused... doom [not] inevitable... [sense of control] determinism robs them of.
It doesn't matter if an undetermined choice is more "momentous"; that doesn't make it more true. I don't even grant that undetermined choices are more momentous than determined choices. Our future depends on our choices, whether they are deterministic or not.
I do grant, though, that it would be more important to study choice-making if choice-making was more momentous.
It's like the difference between a car and a train. The train goes somewhere but it can't jump off the tracks
And both are controlled by people.
I agree that there are varying levels of ownership we can assign to entities that make choices. The self-tuned PID controller has more ownership over its choices than the human-tuned PID controller, but less than a human has over his/her own choices. The error you are making is in only accepting two extremes: (1) Choices are completely deterministic, owned completely by the universe as a whole, and (2) choices follow from "free will", owned completely by the agent with free will.
Also, I think it is important to distinguish modelling the world from social responsibility. Whether someone is to be held responsible for their choices is a social problem, not a fact of the universe. We can model a computer as making choices without holding it responsible, and instead holding its programmer responsible.
a simulated universe might seem just like a real universe...but it isnt real.
When you run a computer program, is it less "real" if it is written in an interpreted language than if it is written in a compiled language? (An interpreted language works by simulating a virtual computer which executes higher level instructions using the base computer, while a compiled language directly outputs instructions to be used on the base machine. This is, of course, a great simplification of how interpreters work.) Does "real" to you just mean that you have reached the limit of your understanding? I wonder what you think a physicist would say if you asked him whether an electron is "real". He, after all, understands that an electron is a theoretical model, not a hard fact of existence.
Instead of seeking to apply labels like "real" or "unreal", I think you would be better served by looking at what is actually happening.
Before I begin my response, I would like to point out that I really don’t appreciate your commenting style. I spent a while working on my post, and you didn’t even bother to engage with my first paragraph, where I say that capacity to choose otherwise is asking the wrong question.
Funnily enough, I could say something similar. I think you missed that I am talking about naturalistic models of free will, and how they can coincide with mechanistic accounts of choice... and that I am not saying that any particular model is really true.
Turning back to the first paragraph of your previous comment:
People often use free will to explain how we make choices, but have great difficulty explaining how free will itself works. Philosophers gesture towards ideas like “the capacity to choose” or “the freedom to do otherwise”, but these concepts just raise the same question to me: What are “capacity” and “freedom”? I suspect that the reason free will is so hard to explain is because it is not actually clarifying anything. It’s a fake explanation, like the physics textbook that says everything runs on energy.[1] “What makes the bicycle move?” Energy! “How do we make choices?” Free will! It doesn’t really answer anything.
That's not unconditionally true; it's only true if FW is put forward as a black box where the explanation stops. But it's possible to have a white-box model of FW, one with moving parts. You can have both things -- the moving parts/comprehensibility, and the Freedom to do Otherwise.
Free will isn't always a Mysterious Answer; it depends on the philosopher. Naturalistic libertarians like Robert Kane and Tony Doyle have proposed mechanisms.
If you think it is the right question to be talking about, you should argue for that, not just assume it is something we can agree on. When I talk about choosing, I am talking about modelling the world, not some magical and mysterious capacity that non-choosers lack.
I am not talking about anything mysterious or magical. What I said was:
"That leaves naturalistic libertarian free will as the controversial case"
Note the emphasis on naturalistic!
A chooser is defined simply to be an entity that takes in information and outputs actions. Choosers are a useful abstraction because such things exist: Many entities perform more optimal actions by using information they possess. But there is no need to hypothesize an elan volonté for how this all works.
I'm not.
But do you know that? Surely establishing how the capacity for choice actually works requires empirical investigation.
You appear not to have answered that question.
You claim that there is some “capacity to choose”, but whence comes this idea of a capacity to choose? I fear that you just have an intuition that “you make your own choices”, and then you take this intuition too far, saying that no choice can be “real” if it has a causal or physical link to the rest of reality.
No. I haven't defined libertarian free will that way.
It is defined as not being fully determined. It is not defined as total disconnection from reality. Why would anyone believe it if it were?
I say that choice-making is the process of converting information into actions.
So do I. One of my points is that mechanistic choice mechanisms can coincide with naturalistic, unmysterious free will.
You are the one who says it requires some special capacity to choose.
Am I, though? What does special even mean? I am putting forward a theoretical model that is naturalistic, explicable, and testable. The only difference from your model is the inclusion of indeterminism.
You are the one who claims there is anything to talk about there, at all. Undetermined choices are more momentous… it is nothing special… you are nothing special… only the BB [Big Bang] is special.
Special, meaning it's the only uncaused cause.
It doesn’t matter if an undetermined choice is more “momentous”;
Momentous things matter more than trivial things by definition. Of course, it helps if they are real as well.
that doesn’t make it more true. I don’t even grant that undetermined choices are more momentous than determined choices. Our future depends on our choices, whether they are deterministic or not.
But not to the same extent, as I argued at length.
I agree that there are varying levels of ownership we can assign to entities that make choices. The self-tuned PID controller has more ownership over its choices than the human-tuned PID controller, but less than a human has over his/her own choices. The error you are making is in only accepting two extremes: (1) Choices are completely deterministic, owned completely by the universe as a whole, and (2) choices follow from “free will”, owned completely by the agent with free will.
I don't believe in your 2, and never said anything along those lines.
Also, I think it is important to distinguish modelling the world from social responsibility.
Both ways! An inward-firing account of choice, the brain mechanisms that enable it, does not answer all the questions, because some of them are outward-firing questions about how people interact and how societies work.
Whether someone is to be held responsible for their choices is a social problem, not a fact of the universe.
It is of course both. You have no moral responsibility alone on a desert island; but also none if you live in a society but don't have the appropriate brain functions.
We can model a computer as making choices without holding it responsible,
Yes, but you can also have a richer model as well.
Taking the wider, humanistic concerns into account doesn't have to place you in "woo" territory.
and instead holding its programmer responsible. a simulated universe might seem just like a real universe... but it isn't real. When you run a computer program, is it less “real” if it is written in an interpreted language than if it is written in a compiled language?
Potentially, yes, because an interpreter can include a sandbox that prevents the code from doing what it thinks it's doing.
(An interpreted language works by simulating a virtual computer which executes higher level instructions using the base computer, while a compiled language directly outputs instructions to be used on the base machine. This is, of course, a great simplification of how interpreters work.)
Well, I think I understand that, because I wrote my first interpreted program 50 years ago and my first compiled program 40 years ago.
Does “real” to you just mean that you have reached the limit of your understanding?
No, I didn't say anything like that.
I wonder what you think a physicist would say if you asked him whether an electron is “real”. He, after all, understands that an electron is a theoretical model, not a hard fact of existence.
Few physicists would agree.
Instead of seeking to apply labels like “real” or “unreal”, I think you would be better served by looking at what is actually happening.
I actually believe in empiricism.
(Me: "But do you know that? Surely establishing how the capacity for choice actually works requires empirical investigation")
And explanation.
But I don't have a fully equipped neurology lab, so I can't test my testable model of free will.
I assume you're in the same position.
What were you expecting me to do, that I'm in a position to do, and yet not doing? And, are you doing it yourself?
However, I think this is the wrong model to have for the possibility of choosing otherwise. Instead, you should imagine that a different choice function in your place might choose a different action
But do you know that? Surely establishing how the capacity for choice actually works requires empirical investigation.
You appear not to have answered that question.
I was using the word "model" to mean "picture in your head", not "theorized laws which the capacity for choice follows". I was arguing that it was an error to define the capacity for choice in a split-brain way, and that you should instead define it in a substitution-brain way. I didn't answer your question because it wasn't really engaging with what I had said. It's probably my fault for being sloppy in my word choice.
I agree that establishing how choosing actually works in brains requires empirical investigation.
The only difference from your model is the inclusion of indeterminism
My model was agnostic about indeterminism, and still is.
Special, meaning it's the only uncaused cause.
You're sure using special to mean more than that later on!
Momentous things matter more than trivial things by definition.
If you read the second part of my sentence after the semicolon, and the sentence after that paragraph, it is clear that I am just saying that being more momentous doesn't affect the truth value of a proposition, not that momentous things matter less.
But not to the same extent , as I argued at length.
Not really. You mostly just repeated "more momentous" in various ways, such as more real, more special, and so on. If you want me to care more about choices in the world where choices are non-deterministic than the world where choices are deterministic, I think you would have to tell me something about how that would change the world. For example, perhaps if choices are non-deterministic, we can reduce crime rates by studying psychology, whereas if choices are deterministic, crime rates will never go down no matter how much psychology we as a civilization learn. Therefore, in the first world, it's useful to study psychology, while in the second world it is not. I don't believe that is the case, though. But that's the kind of argument that would get me to care one way or the other.
You do mention something about the world being doomed/not necessarily doomed, but again, I don't think that actually is a matter of determinism or not. At some point in the future, the world will either be doomed or not doomed, and our actions will decide which is the case. We should therefore take actions to avoid the doomed world, and our brains are configured to follow this precept in either world. Adding true randomness back in just makes it so that there is now aleatoric uncertainty about the future, not simply epistemic uncertainty.
But you seem to be arguing for something more than just "a brain which uses a truly random source of entropy to run a random algorithm". I don't want to comment on it, because clearly I don't understand.
I don't believe in your 2, and never said anything along those lines.
Sorry.
An inward firing account of choice, the brain mechanisms that enable it, does not answer all the questions, because some of them are outward firing questions about how people interact and societies work
I feel like these are separate areas that can be solved individually. The first area is a question of modelling the brain: How does the brain learn? How does it map information to better-than-random actions? The second area is a question of social responsibility: As a society, when should we hold people responsible so that we can accomplish X? What system of rewards and punishments helps us develop a healthy society?
The first area is about psychology, neuroscience, and AI. The second area is about ethics, economics, and game theory. It's true that answers in the first area can inform us about answers in the second area, but I don't see the case for the other direction.
It is of course both. You have no moral responsibility alone on a desert island; but also none if you live in a society but don't have the appropriate brain functions.
I disagree. I think it is not both. Who we choose to hold responsible and for what actions is solely a social problem, though of course we should use material facts and mathematical laws to inform us of how to do that well.
Yes, but you can also have a richer model as well.
Richer than what? I haven't proposed any model of social responsibility.
Potentially, yes, because an interpreter can include a sandbox that prevents the code from doing what it thinks it's doing.
This is an interesting perspective. I personally take the perspective that two things are identical if you can't distinguish between them, so code running in a sandbox is just as real to the program as when it is not in a sandbox, as long as the sandbox faithfully simulates interacting with the outside world. Nevertheless, to an outside observer, these situations do not look identical, so there is a sense in which the program running in the sandbox is less real to the outside observer. But I think this stretches the meanings of "real" and "unreal" beyond their typical use-case of distinguishing between ideas and the material world or between truth and fabrication. It's probably more useful to coin a new term like "root" or "base" to distinguish between a simulated universe and a universe not running in a simulation.
I actually believe in empiricism.
I wasn't trying to say you don't believe in empiricism! I was trying to argue that it would be more pragmatic to simply describe the world instead of worrying about whether the label "real" or "unreal" applies.
But I don't have a fully equipped neurology lab, so I can't test my testable model of free will.
You mentioned libertarian free will, which, according to Google, means that human beings can make choices undetermined by prior causes, physical laws, or divine predetermination. Would brains taking advantage of true randomness in the universe count, provided it exists? Or does there need to be some special substance which is only used in choice-making? Or am I just completely missing the mark?
I don't really find the distinction between "a brain uses a random algorithm which takes advantage of true randomness" and "a brain uses a random algorithm which uses chaos for its 'entropy'" very important for understanding how choice-making works.
I would be very interested in hearing a test you could perform to validate/invalidate your model of free will. I'm still not really sure what you mean by free will, see, and I think that would help a lot.
I assume you're in the same position.
Pretty much.
What were you expecting me to do, that I'm in a position to do, and yet not doing? And, are you doing it yourself?
Right now I would mostly appreciate you explaining what your model of free will is.
People often use free will to explain how we make choices, but have great difficulty explaining how free will itself works. Philosophers gesture towards ideas like "the capacity to choose" or "the freedom to do otherwise", but these concepts just raise the same question to me: What are "capacity" and "freedom"? I suspect that the reason free will is so hard to explain is because it is not actually clarifying anything. It's a fake explanation, like the physics textbook that says everything runs on energy.[1] "What makes the bicycle move?" Energy! "How do we make choices?" Free will! It doesn't really answer anything.
However, most people do feel like they make choices. I think making choices is a real phenomenon, just that the answer isn't "free will". When I want to answer the question of how we make choices, I look at the process by which it occurs. For me, that consists of gathering information to determine the best action, and then performing that action. If you look at it from a purely mechanistic lens, choice-making is simply following some function from information to actions.
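As a minimal sketch of that mechanistic picture (all names and types here are illustrative, not from the original essay), a choice function just maps whatever information is available to an action:

```python
# Illustrative stand-ins: any representation of information and actions would do.
Information = dict
Action = str

def choice_function(info: Information) -> Action:
    # Gather the available options, score each one from the information
    # at hand, and pick the highest-scoring action.
    candidates = info.get("options", [])
    scores = {a: info.get("scores", {}).get(a, 0) for a in candidates}
    return max(scores, key=scores.get) if scores else "do nothing"

print(choice_function({"options": ["left", "right"],
                       "scores": {"left": 1, "right": 3}}))  # prints "right"
```

Nothing in this sketch is mysterious: the "choice" is just the output of the mapping, however that mapping happens to be implemented.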
From this perspective, many things make choices that we might not traditionally consider to have free will. For example, my computer chooses which threads to schedule on which processors.
Nevertheless, there is definitely a difference between a computer and myself. Someone programmed the computer to make its choices, but I make my own choices. What explains the difference between us?
I think it's simply a matter of self-reference and bootstrapping. The computer doesn't write its own code (at least, not yet), but we modify our own choice functions. Namely, given certain kinds of information—such as that an action led to a suboptimal outcome—our choice functions output actions to modify themselves. When the choice function is sufficiently clever, it can even realize that these updates may be suboptimal, and bootstrap itself to an even cleverer choice function.
Eventually, the choice function is determined mostly by the modifications it has made to itself, and it starts to make sense to say the choice function is responsible for its own choices. I think this is the answer to the mystery behind the sense of ownership we have of our actions.
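The self-modification step above can be sketched in toy form (a hypothetical illustration, not the essay's actual proposal): a choice function that, on feedback about a suboptimal outcome, adjusts the very mapping that produced its choice.

```python
class ChoiceFunction:
    """A toy self-modifying choice function.

    Weights over actions stand in for the mapping from information
    to actions; feedback triggers the function to rewrite itself.
    """

    def __init__(self, actions):
        self.weights = {a: 1.0 for a in actions}

    def choose(self):
        # Pick the currently highest-weighted action (ties go to the
        # first action listed).
        return max(self.weights, key=self.weights.get)

    def update(self, action, outcome_was_good):
        # Self-modification: the function adjusts the mapping that
        # produced its own choice. A sufficiently clever function could
        # go further and rewrite this update rule itself, which is the
        # "bootstrapping" step the text describes.
        self.weights[action] *= 1.5 if outcome_was_good else 0.5

cf = ChoiceFunction(["left", "right"])
first = cf.choose()                        # "left" (tie, first listed)
cf.update(first, outcome_was_good=False)   # feedback: that went badly
second = cf.choose()                       # "right", after the penalty
```

This is also the contrast with the PID controller discussed above: the controller tunes its weights but cannot touch its tuning algorithm, whereas in principle the `update` method here is itself something the function could choose to modify.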
Volition doesn't have to be mysterious.
"Judging Books by Their Covers," Surely You're Joking, Mr. Feynman!