I'd like to better understand how compatibilists conceive of free will.[1] LW is a known hotbed of compatibilism, so here's my question:

Suppose that determinism is true. When I face a binary choice,[2] there are two relevantly-different states of the world I could be in:[3]

State A: Past events HA have happened, current state of the world is A, I will choose CA, future FA will happen.

State B: Past events HB have happened, current state of the world is B, I will choose CB, future FB will happen.

When I make my choice (CA or CB), I'm choosing/revealing which of those two states of the world is (my) reality. They're package deals: CA follows from HA just as surely as it leads to FA, and the same holds for state B.

Which seems to give me just as much control[4] over the past as I have over the future. In whatever sense I 'exercise free will' to make CA real and bring about FA, I also make it the case that HA is the true history.
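The 'package deal' structure can be restated as a toy sketch (my own illustration, not anything from the post; the labels are just the ones used above): in a deterministic world the choice is a fixed function of the history, and the future is fixed by the choice, so observing any one element pins down all three.

```python
# Toy deterministic world: two possible trajectories, each a package deal.
# The "choice" is a fixed function of the past; the future follows from
# the choice. This argues nothing for or against compatibilism -- it just
# restates the setup in executable form.

def choice(history):
    # In this toy world, the decision is fully determined by the past.
    return "CA" if history == "HA" else "CB"

def future(history):
    # The future follows deterministically via the choice.
    return "FA" if choice(history) == "CA" else "FB"

for h in ("HA", "HB"):
    print(h, "->", choice(h), "->", future(h))
# HA -> CA -> FA
# HB -> CB -> FB
# A given choice is consistent with exactly one (past, future) pair,
# which is the sense in which making it "reveals" the whole package.
```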

My question is: Does this bother you at all, and if not, why not?[5]

  1. ^

    Yes, I've done my own reading, though admittedly it's been a while. I never found a satisfying (to me) answer to this question, and to the best of my recollection I rarely saw it clearly addressed in a form I recognised. If you want to link me to a pre-existing answer, please do, but please be specific: less 'read Dennett' and more 'read this passage of this work'.

  2. ^

    Maybe no real choice is truly binary, but for the sake of simplicity let's say this one is. I don't think that changes anything important.

  3. ^

    For simplicity I'm taking the physical laws as a given. I don't think that matters unless free will involves in some sense choosing which set of physical laws holds in reality.

  4. ^

    Not necessarily in every sense in which you might want to use the word 'control'; you might define that word such that it only applies to causal influence forward in time. But yes in the sense that whatever I can do to make my world the one with FA in it, I can do to make my world the one with HA in it.

  5. ^

    If your answer involves the MWI or something like it, I would appreciate if you explained (the relevant bits of) how you conceive of personal identity and consciousness within that framework.


9 Answers


May 05, 2023


It does not bother me at all, since it doesn't actually address any of the factors that are relevant to my compatibilist position on free will.

The first part to understand is that I see the term "free will" as having a whole range of different shades of meaning. Most of these involve questions of corrigibility, adaptability, predictability, moral responsibility, and so on. Many of these shades of meaning are related to each other. Most of them are compatible with determinism, which is why I would describe my position as mostly compatibilist.

The description given in this post doesn't appear to be related to any of these; it deals only with mere physical correlation in a toy universe simplified beyond the point of recognizability or relevance. Further questions would need to be answered in order to even begin to consider whether the agent in this post's question has "free will" in any of the relevant senses. For example:

  • To what extent does the agent know the relation between the H's, C's and F's?
  • Would the deciding agent perceive HA and HB as being identical up to the point of decision?
  • Is it the same agent making the decision in universes HA and HB?
  • What basis for judgement is used for the preceding answer?

In a fairly "central" example, my expectation would be:

  • That the agent does not know these relations;
  • That the agent does perceive HA and HB as being identical;
  • That in most important respects the agents are considered to be "the same", by some sort of criterion such as:
      • They themselves would recognize each other's memories, personalities, and past decisions as being essentially "their own". (They may diverge in future.)

In this case I would say that this agent (singular, due to the third answer) has free will in most important respects (mostly due to answer 2 but also somewhat due to answer 1), can be said to choose CA or CB, influences FA or FB but does not choose them, and likewise does not choose HA or HB.

If you have different answers to those questions, my answers and the reasons behind them may change.


In what sense do they influence the inevitable future (FA or FB)?

Thanks. One clarifying question: When you say that the agent "can be said to choose CA or CB, influences FA or FB but does not choose them, and likewise does not choose HA or HB", do you mean that they influence but do not choose HA or HB, or that they neither influence nor choose HA or HB? (My guess is the latter, because you would restrict 'influence' to forward-in-time causation, but I want to make sure I'm not misunderstanding.)

I think the reason my little scenario seems irrelevant to you is related to disagreement over this:

I see the term "free will"

…
I would say that they neither choose nor influence HA and HB, assuming that the universe in question follows some sort of temporal-causal model. Non-causal universes, or those in which causality does not follow a temporal ordering, are much more annoying to deal with, and most people don't have them in mind when talking about free will, so I wouldn't include them in an exploration of a more 'central' meaning. However, there is some literature in which the concept of free will in universes with other types of determinism is discussed.

I distinguish between "influence" and "choice" since answer 1 posited that the relationship between the various parts of the universe wasn't known to the agent. The agent does not know that future Fx follows choice Cx, nor that Cx follows from past Hx, and by answer 2 does not even know the difference between HA and HB. If FA includes some particular outcome OA that causally follows from CA but isn't in FB, and the agent choosing CA does not know that, then I would not say that the agent chose OA. They chose CA, which influenced OA.

There are lots of different ways to address different forms of "ability to do otherwise", each of which is useful and relevant to different questions about free will, and so they all lead to different shades of meaning for "free will", even including nothing more than what you've just said. However, different people communicate different explicit and implicit assumptions about what "free will" means in their communication, and so necessarily mean somewhat different things by the term.

Each of the aspects I mentioned in my post comes from multiple respected writers on the subject of free will. So no, it's not a redefinition. It's a recognition that the meaning of the term in practice varies with person and context, and that it doesn't so much have a single meaning as a collection of related meanings. From long experience, proposing a much more specific definition is one of the surest ways to end up squabbling point


May 06, 2023


I believe that Eliezer's analysis of "free will" answers your question. Free will (he says) is neither outside of an otherwise lawful universe, nor incompatible with a lawful universe, nor merely compatible with a lawful universe, but requires a lawful universe. He dubs this position "requiredism".

I find this not merely convincing, but obviously right. What do you think?

This is somewhat dated in the sense that LW-style decision theory later converged on treating agents-that-make-decisions as abstract algorithms rather than as their instances embedded in the world, see discussion of "algorithm" axis of classifying decision theories in this post.

With TAG, I don't see what their decision theory has to do with the matter. Whatever their decision theory, it is impotent to achieve anything unless their physical instances embedded in the world are able to physically act in the world to achieve their aims, which is the thing that incompatibilists deny.
The point is about the frame of Yudkowsky's explanation, where "you are physics" instead of "you are an algorithm". The latter seems convergently more useful for decision theory of embedded agents, which can be predicted by other agents or can have multiple copies. So this doesn't concern some prior meaning of "free will"; it motivates caring about a notion of free will that has to do with abstract computations of an agent's decisions rather than agent instances embedded in the world.
You are an algorithm embedded in physics. You are not any of the other people executing this algorithm; you are this one. Conducting yourself according to these decision theories still causes the physical actions only of this one, and is only acausally connected to the others of which these theories speak. Deciding as if deciding for all is different from causally deciding for all.
There is an algorithm and the person executing the algorithm: different entities. Being the algorithm, you are not the person executing it. The algorithm is channeled by the person instantiating it concretely (in full detail), as well as by other people who might be channeling approximations to it, for example only getting to know that the algorithm's computation satisfies some specification instead of knowing everything that goes on in its computation.

The use of the "you are the algorithm" frame is noticing that other instances and predictors of the same algorithm have the same claim to the consequences of its behaviors; there is no preferred instance. The actions of the other instances and of the predictors, if they take place in the world, are equally "physical" as those of the putative "primary" instance. As an algorithm, you are acausally connected to all instances, including the "primary" instance, in the same sense, by their reasoning about you-the-algorithm. I don't know what "causally deciding" means for algorithms.

Deciding as if deciding for all is actually an interesting detail: it's possible to consider variants where you are only deciding for some, and that stipulation creates different decision problems depending on the collection of instances that are to be controlled by a given decision (a subset of all instances that could be controlled). This can be used to set up coalitions of interventions that the algorithm coordinates. The algorithm instances that are left out of such decision problems are left with no guidance, which is analogous to them failing to compute the specification (predict/compute/prove an action-relevant property of the algorithm's behavior), a normal occurrence. It also illustrates that the instances should be ready to pick up the slack when the algorithm becomes unobservable.
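The 'no preferred instance' point can be sketched as a toy program (my own construction, with made-up names; not anything from the comment): an algorithm is a pure function, so an embedded instance and a predictor running the same computation necessarily agree.

```python
# Toy sketch: one abstract decision algorithm, several "instances".
# A predictor that runs the same computation gets the same answer,
# so no copy is privileged over the others.

def decide(observation):
    # The abstract algorithm: a pure function of its inputs.
    return "cooperate" if observation == "mirror-match" else "defect"

# The instance embedded in the world:
agents_move = decide("mirror-match")

# A predictor elsewhere, running the very same computation:
predicted_move = decide("mirror-match")

assert agents_move == predicted_move  # the outputs necessarily agree
print(agents_move)  # prints "cooperate"
```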
You still haven't said why it motivates that. Even if you are not talking about a prior definition of free will, why does your novel definition have to do with algorithms?
Why would decision theory determine the nature of free will? I would have thought it was the other way round: free will has implications for what decisions are.

Requiredism holds that determinism is an advantage to free will because the connection between a decision and the resulting action is deterministic. Randomness, or at least too much randomness in the wrong place, would prevent me from acting reliably on my decisions. Of course, determinism also removes the elbow room, the ability to have decided differently, that is of such concern to libertarians. Determinism is only an overall advantage to free will if elbow room is unimportant or impossible, so requiredism needs compatibilism as a starting point.

I don't…

If 'lawful' just means 'not completely random' then I agree. But I've never been convinced that there's no conceivable third option beside 'random' and 'deterministic'. Setting aside whether there's a non-negligible chance that it's true, do you think the idea that consciousness plays some mysterious-to-us causal role -- one that isn't accurately captured by our concepts of randomness or determinism -- is definitely incoherent?

Consciousness does play a mysterious-to-us-today causal role. It is mysterious in that no-one has yet explained how there can possibly be such a thing as subjective experience, yet there it is. Perhaps someone might explain it in the future, but no-one has done so today. And it must be causal, not epiphenomenal, because the doctrine of epiphenomenalism just adds another layer of mysteriousness on top of that one, explaining the obscure by the more obscure. Epiphenomenalism is no more coherent an idea than p-zombies.

Randomness vs. determinism is a red herring. The universe has to be lawful for us to be able to direct it into desired configurations. Randomness, such as some current theories of quantum mechanics say is physically irreducible to determinism, is an obstacle to doing that, but has no more significance than that. That goes for chaos as well, which some put forward as a "third alternative" to randomness and determinism. But none of these matter for this view of what "free will" is.

I recommend that people do click through to the article by Eliezer that I linked before, if they haven't already. It's not very long, and any précis I could write would just be a repetition of it. Epiphenomenalism, btw, is described by the first diagram in that article.
I don't follow this. Adding another layer of mysteriousness might not make for a satisfying explanation, but why must it be false? (I also think the p-zombie is a perfectly coherent idea, for whatever that's worth.)
When I say "must" I'm rounding to zero probabilities so negligible that they should not even come to my attention. Epiphenomenalism has consciousness be a real thing (that is what it is a theory of) but which has only a one-way connection to the rest of the universe, like a redundant gear in a clock that is not part of the train that drives the hands. Nowhere else do we see such a thing; in fact, by definition, we could not. The hypothesis is doing no work. And I see p-zombies as another incoherent idea.
I think the second clause implies that our not seeing it anywhere else provides no evidence. (Just for the obvious Bayesian reason.) I'm not sure why it has to. The 'consciousness is real' part isn't a hypothesis; it's the one thing we can safely take as an axiom. And the 'consciousness doesn't affect anything else' part is as reasonable a candidate for the null hypothesis as any other, as far as I can tell. Where does your prior against redundant gears come from?
What would legitimately draw the hypothesis to our attention? One of the things that we have experience of is being able to act in the world. Epiphenomenalism says that we do not act in the world, we are merely passengers without the power to so much as twitch our little fingers. This is so plainly absurd that only a philosopher could take it seriously, but as Cicero remarked more than two thousand years ago, no statement is too absurd for some philosophers to make.
I've been trying to articulate why I find it hard to reconcile this with your endorsing Eliezer's requiredism, and this is the best I can do: I don't think I see a meaningful difference between epiphenomenalism (i.e. brain causes qualia, qualia don't cause anything) and a non-eliminative materialism that says 'qualia and brain are not separate things; there's just matter, and sometimes matter has feelings, but the matter that has feelings still obeys ordinary physical laws'. In both cases, qualia are real but physics is causally closed and there's no room for libertarian free will.  If the quoted passage referred to that kind of materialism rather than to epiphenomenalism, it would be an argument for libertarianism. And I know that's not what you intended, but I don't fully understand what you do mean by it, given that it must not conflict with requiredism (which is basically 'compatibilism but more so').
Libertarian FW isn't ruled out by the causal closure of the physical, it's ruled out by determinism (physical or not). Causal closure would rule out something like interactionist dualism, but that's fairly orthogonal to LFW...it could even be deterministic.
I don't see a difference between that argument and saying that a jumbo jet doesn't cause anything, only its atoms do.
Happy to leave this here if you've had enough, but if you do want a response I'll need more than that to go on. I've been struggling to understand how your position fits together, and that doesn't really help. (I'm not even sure exactly what you're referring to as 'that argument'. Admittedly I am tired; I'll take a break now.)
The fact that subjective experience exists and we haven't been able to figure out any causal role that it plays, other than that which seems to be explicable by ordinary physics (and with reference only to its ordinary physical correlates).
We have also not figured out how the physical brain does the things that we do.
Agreed, and that's part of why I see mysterious libertarian free will as not having been ruled out.

clone of saturn

May 06, 2023


It doesn't bother me, because I'm me, with the propensity to make the choices I'm determined to make. If I had chosen otherwise, I would not be me.

Suppose I love chocolate ice cream and hate vanilla ice cream. When I choose to eat chocolate ice cream, it's an expression of the fact that I prefer chocolate ice cream. I have free will in the sense that if I preferred vanilla instead, I could have chosen vanilla, but in fact I prefer chocolate so I won't choose vanilla.

Seth Herd

May 06, 2023


I think what you may be seeing on LW is a reluctance to use the term "free will". I hope it is, since I think it's a terribly confusing term. I don't think "free will" is a coherent concept in an intuitive definition of the phrase. What would such a thing mean, and would you want what you've defined?

I think what people are usually thinking of as "free will" is better called self-determination; the ability to determine one's own future according to one's preferences. (This might include changing one's preferences, if one prefers to do that when finding certain types of new evidence.) This is the only type of "free will" I've ever thought or heard of that's worth wanting (see Dennett's book of the same name).

If we assume that I know about HA or HB, my choice of FA or FB is self-determination. If HA and HB differ in whether the person I'm dealing with has stolen money in the past, and FA and FB are me choosing whether or not to do business with them, I want my beliefs about how to treat people to be the determining cause of my actions.

I'd say that I do have control of the future, because my brain, and specifically the parts that implement my beliefs about ethics and game theory, is what links the past HA to the future FA, just as I prefer to see such states linked.

I wouldn't say this is necessarily a compatibilist position; it's more of a position of "Are you sure you know what you mean by free will? You say it like it's something worth wanting, but I can't see how it would be if it's not compatible with determinism".

Like most philosophical questions, it boils down to defining the question. If you say exactly what you mean by free will, you'll have your answer.

Or at least an approximate answer, with details to be filled in by empirical observations. I actually disagree with Dennett that we have "all of the free will worth wanting". I think our cognitive biases prevent us from acting based on our beliefs an awful lot of the time. I'd say we have something like 50% of the self-determination worth wanting.

Ape in the coat

May 05, 2023


No, I don't think it bothers me and I'm not sure why it should. 

When I'm making a choice CA, I indeed reveal that I'm in a universe where I'm choosing CA and where HA, which led to this, has happened.

If I lived in a universe with an omniscient God who knew my every choice, then when I make a choice, I determine the knowledge of such God.

Maybe I'm missing something. Could you explain why it bothers you?

From the responses I'm getting, I think I failed to communicate anything that doesn't quickly boil down to the usual crux(es) between compatibilists and incompatibilists. But to try to answer your question: 

I think 'free will' in its usual sense requires some capacity to influence the future via choice-making. I thought that one of the standard compatibilist positions was that we do influence the future via our choices; both may be fully determined by initial conditions and physical laws, but when the chain of causation between past state X and future…

There is no substantial difference between controlling the future and controlling the past, just fewer opportunities for controlling the past, since that requires predictors of your decisions located in the past. If they are not already there, you can't place them there from the future without controlling the past.
Ape in the coat
Why not? For me it seems exactly enough.

When we have a model of only logical connections between events, where HA, CA and FA are connected, we can't distinguish between affecting the past and affecting the future. But if we add the knowledge of the direction of causality from HA to CA to FA, then we immediately can. Now it's clear that it's HA influencing CA influencing FA, and not vice versa.

Of course, in your mind, you can still feel as if you choose your past to be HA while making choice CA. But this is a map-territory confusion. Making the choice CA reveals to you that you have the past HA, but the past HA has already been there whether you know about it or not. Notice that the same can't be said about the future FA, because it's not there yet.
What's the direction of causality? If there is a single inevitable future, then the future is symmetric to the past.
Ape in the coat
If there weren't a one-directional vector of causality, then it would be symmetrical. But as there is, it's not.
All microphysical laws are time symmetric. You are (mis)taking a macroscopic asymmetry in time for a fundamental asymmetry in causality.
Thanks for explaining. I don't have an answer to 'why not?' that will satisfy you; ultimately it'll just be another way of saying that compatibilist free will doesn't match the concept of free will that I have and that I think people tend to have pre-theoretically. (And it's different enough that I see it as a redefinition rather than a refinement.)
Ape in the coat
Hmm. It seems to fit the requirement you previously wrote. But I understand that this may not be enough for some people, even though I struggle to understand libertarian free will as a coherent concept.

Could you explain, then, why branching would feel like enough for you? Is it because, if there are branches of the future that you can select between, while there are no branches of the past that you can select between, there is an important difference between the way you interact with the future and with the past while making a choice?
Yes -- I didn't mean to deny that a causal link between my choice and future events counts as 'some capacity to influence the future via choice-making'. But I also didn't mean to suggest that it was a sufficient condition for (my concept of) free will, in cases where the choice is fully determined. To me, free will means something like 'ability to choose between different possible futures'. And if there's no forward-in-time branching, there's only one possible future. (I admit that 'branching' is very under-defined here, and so is 'possible' -- but I think you get what I'm gesturing at, even if you doubt that it could ever be fully specified in a coherent way without ending up as plain old randomness.) Whereas backward-in-time branching seems like it would cash out as 'different possible pasts lead to the same future', or in other words, some information loss as time progresses. So I would say that free will requires forward-in-time branching, but not the absence of backward-in-time branching.
Ape in the coat
Thank you for your answer. Do you feel that without possible futures it's not actually a choice?

Like, imagine a piece of rock. There can be events E1, E2, E3 that happen to it at different moments of time. Being part of the causal universe, the rock partially causes these events. But it doesn't choose anything. Is it similar to how you feel about human choice under determinism?

As an intuition pump, imagine also a rock in a non-deterministic universe where either E3 or E3' happens after E2. And also imagine a human in a deterministic one. Would the indeterministic rock be more free than the deterministic one? Would it be more free than a human in a deterministic universe? Where does this extra freedom come from?

There is this intuitive, vague feeling that freedom of will has something to do with the possibility of alternatives. People feel it, but do not have an actual model of how this all works together. And the thing is, this intuition is true. Just, as it happens, not in the way people initially think it is.

Here is a neat compatibilist model, according to which you (and not a rock) have an ability to select between different outcomes in a deterministic universe, and which explicitly specifies what 'possible' means: possibility is in the mind, and so is the branching of futures. When you are executing your decision-making algorithm, you mark some outcomes as 'possible' and backpropagate from them to the current choice you are making. Thus, your mental map of the reality has branches of possible futures between which you are choosing. By design, the algorithm doesn't allow you to choose an outcome you deem impossible. If you already know for certain what you will choose, then you've already chosen.

So the initial intuition is kind of true. You do need 'possible futures' to exist so that you can have free will: perform your decision-making ability which separates you from the rock. But the possibility and branching futures do not need to exist separately of you. They can
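The decision-making model described above can be sketched as a toy program (my own rendering, with made-up option names and scores; a sketch of the idea, not a definitive formalization): the branching of 'possible futures' happens inside the agent's map, while the computation as a whole is deterministic.

```python
# Toy compatibilist decision algorithm: mark imagined outcomes as
# "possible", score them with an internal world-model, and choose the
# best. The whole run is deterministic, yet it genuinely weighs
# alternatives -- which is what separates the agent from the rock.

def predict(action):
    # The agent's internal model of what each action leads to.
    utilities = {"eat_chocolate": 10, "eat_vanilla": -5}
    return utilities[action]

def decide(options):
    # Branching lives in the map: each option's imagined outcome is
    # marked "possible", then the choice backpropagates from the scores.
    possible_futures = {a: predict(a) for a in options}
    return max(possible_futures, key=possible_futures.get)

print(decide(["eat_chocolate", "eat_vanilla"]))  # prints "eat_chocolate"
```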
Not physically, but platonic objects that serve as semantics for formal syntax make sense, and only syntax straightforwardly exists in the mind, not semantics it admits. So these are the parts of decision making that exist outside of your mind, in the same sense as mathematical objects exist outside of a mathematician's mind.
Ape in the coat
Good point. I'm equalizing between logical existence and existence in one's mind in this post, but if we don't do that then indeed we can say that possible futures exist platonically just as mathematical objects.
But then territory is in the mind? The distinction is mind's blindness to most of the details of the platonic objects it reasons about, thus they are separate existence only partially observed.
I should clarify that I'm not arguing for libertarianism here, just trying to understand the appeal of (and sometimes arguing against) compatibilism.

(FWIW, I don't think libertarian free will is definitely incoherent or impossible, and combined with my incompatibilism that makes me in practice a libertarian-by-default: if I'm free to choose which stance to take, libertarianism is the correct one. Not that that helps much in resolving any of the difficult downstream questions, e.g. about when and to what extent people are morally responsible for their choices.)

I'm sorry to give a repetitive response to a thoughtful comment, but my reaction to this is the predictable one: I don't think I'm failing to understand you, but what you're describing as free will is what I would describe as the illusion of free will.

Aside from the semantic question, I suspect a crux is that you are confident that libertarian free will is 'not even wrong', i.e. almost meaninglessly vague in its original form and incoherent if specified more precisely? So the only way to rescue the concept is to define free will in such a way that we only need to explain why we feel like we have the thing we vaguely gesture at when we talk about libertarian free will.

If so, I disagree: I admit that I don't have a good model of libertarian free will, but I haven't seen sufficient reason to completely rule it out. So I prefer to keep the phrase 'free will' for something that fits with my (and I think many other people's) instinctive libertarianism, rather than repurpose it for something else.
Why do you think LFW is real? The only naturalistic frameworks that I've seen that support LFW are ones like Penrose's Orch-OR, which postulate that 'decisions' are quantum (any process that is caused by the collapse of the quantum states of the brain). But it seems unlikely that the brain behaves as a coherent quantum state. If the brain is classical, decisions are macroscopic and they are determined, even in Copenhagen. And in this sense, what you have is some inherent randomness within the decision-making algorithms of the brain; there's no special capability of the self to 'freely' choose while at the same time not being determined by their circumstances, there's just a truly random factor in the decision-making process.
I'm not saying it's real -- just that I'm not convinced it's incoherent or impossible. This might get me thrown into LW jail for posting under the influence of mysterianism, but:

I'm not convinced that there can't be a third option alongside ordinary physical determinism and mere randomness. There's a gaping hole in our (otherwise amazingly successful and seemingly on the way to being comprehensive) physical picture of reality: what the heck is subjective experience? From the objective, physical perspective there's no reason anything should be accompanied by feelings; but each of us knows from direct experience that at least some things are. To me, the Hard Problem is real but probably completely intractable.

Likewise, there are some metaphysical questions that I think are irresolvably mysterious -- Why is there anything? Why this in particular? -- and they point to the fact that our existing concepts, and I suspect our brains, are inadequate to the full description or explanation of reality.

This is of course not a good excuse for an anything-goes embrace of baseless speculation or wishful thinking; but the link between free will and consciousness, combined with the baffling mystery of consciousness (in the qualia sense), leaves me open to the possibility that free will is something weird and different from anything we currently understand and maybe even inexplicable.
Ape in the coat
The major appeal of compatibilism for me is that there is an actual model describing how freedom of will works: how it depends on the notion of possibility, allows us to distinguish between entities that have free will and entities that do not, corresponds to the layman intuitions and usage of the term, and adds up to normality while solving practical matters such as questions of personal responsibility. I've yet to see anything with a similar level of clarity from any other perspective on the matter.

I don't think the explanation I've given you can be said to be just about the feeling of free will. That's part of it, but it also explains the actual decision-making algorithm corresponding to these feelings. This algorithm is executed in reality. And having this algorithm executed on your brain gives you new abilities compared to not having one (back to the person-and-rock example). Nor is this algorithm just about your beliefs. At this point, calling it "an illusion" seems very semantically weird to me, especially when there isn't a proper model of what the non-illusion is supposed to be.

Could you help me understand why your choice of definitions is like that? Do you call it an illusion because the outcomes you deem possible are not meta-possible: only one will be the output of your decision-making algorithm, and so only one can really happen? But isn't it the same with indeterminism? Or is it because the possible futures in your mind do not correspond to something outside of it?
I agree that your model is clearer and probably more useful than any libertarian model I'm aware of (with the possible exception, when it comes to clarity, of some simple models that are technically libertarian but not very interesting). Something like that. The SEP says "For most newcomers to the problem of free will, it will seem obvious that an action is up to an agent only if she had the freedom to do otherwise.", and basically I a) have not let go of that naive conception of free will, and b) reject the analyses of 'freedom to do otherwise' that are consistent with complete physical determinism.  I know it seems like the alternatives are worse; I remember getting excited about reading a bunch of Serious Philosophy about free will, only to find that the libertarian models that weren't completely mysterious were all like 'mostly determinism, but maybe some randomness happens inside the brain at a crucial moment, and then everything downstream of that counts as free will for some reason'. But basically I think there's enough of a crack in our understanding of the world to allow for the possibility that either a) a brilliant theory of libertarian free will will emerge and receive some support from, or at least remain consistent with, developments in physics; or b) libertarian free will is real but just inherently baffling, like consciousness (qualia) or some of the impossible ontological questions.
Ape in the coat · 1y
But what's the difference between determinist and indeterminist universes here? In either case we have a decision-making algorithm, and in either case there will be only one actual output of it. The only difference I see is something that could be called "unpredictability in principle" or "decision instability": if we run the exact same decision-making algorithm in the exact same context multiple times, in a determinist universe we get the exact same output every time, while in an indeterminist universe the outputs will differ. So it leads us to this completely unsatisfying perspective.

Notice also that even if it's impossible to actually run the same decision-making algorithm in the same context from inside this determinist universe, this will still not satisfy your intuition. Because what if someone outside the universe is recreating a whole simulation of our universe in exact detail, and is thus completely able to predict my decisions? It doesn't even matter whether these beings outside the universe with their simulation exist; it's just the principle of the thing.

And the thing is, the intuition of requiring "decision instability" isn't that obvious to a newcomer to the problem of free will. It's a specific and weird bullet to swallow. How do people arrive at it? I suspect it goes something like this: when we imagine multiple exact replications of our decision-making algorithm always coming to the same conclusion, it feels that we are not free to come to the other conclusion, and thus our decision-making isn't free in the first place.

I think this is a very subtle goalpost shift. Originally we do not demand from the concept of freedom of will the ability to retroactively change our decisions. When you made a choice five minutes ago, you do not claim to lack free will unless you can time-travel back and make a different choice. We cannot change the choice we've already made, but that doesn't mean the choice wasn't free. The situation with recreating
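The determinist/indeterminist contrast described here can be made concrete with a toy sketch. (All names are hypothetical illustrations; `deterministic_choice` and `indeterministic_choice` are stand-ins for a decision process, not a model of one.)

```python
import random

def deterministic_choice(context: str) -> str:
    """Toy decision algorithm: the output is a fixed function of the context."""
    return "A" if sum(map(ord, context)) % 2 == 0 else "B"

def indeterministic_choice(context: str, rng: random.Random) -> str:
    """Same interface, but the output also depends on an injected random bit."""
    return "A" if (sum(map(ord, context)) + rng.getrandbits(1)) % 2 == 0 else "B"

# Re-running the deterministic algorithm in the exact same context
# always yields the exact same output ("decision stability"):
assert len({deterministic_choice("same context") for _ in range(100)}) == 1

# The indeterministic version can give different outputs on reruns of the
# same context ("decision instability"):
rng = random.Random(0)  # seeded only so the demo is reproducible
outputs = {indeterministic_choice("same context", rng) for _ in range(100)}
assert outputs == {"A", "B"}
```

Either way, only one output actually occurs on any given run; the difference between the two universes shows up only in whether exact reruns would agree.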
Note that there is no fact that decision-making actually is an algorithm: that's just an assumption rationalists favour. Note also that everyone subjectively experiences some amount of "decision instability" -- you might be unable to make a decision, or immediately regret one. So the territory is much more in favour of decision instability than your favoured map. Some libertarians already have mechanistic (up to indeterminism) theories, e.g. Robert Kane.
I.e., it doesn't. Compatibilism has to manage expectations. Libertarians can say that free agency is the execution of an algorithm, too; it's just that it would be an indeterministic algorithm. (Incidentally, no one has put forward any reason that any algorithm should feel like anything.) No: an indeterministic coin-flip has two really possible outcomes.
Ape in the coat · 1y
A libertarian algorithmic explanation would have to be quite different from the compatibilist one. At the least, it needs to account for the source of the connection between the possible futures in your mind and the 'real' possible futures, and for the nature of this 'realness'; it needs its own way of reducing 'couldness' and 'possibility' to being, a model of what happens to all the alternative future branches, of how previously undetermined events become determined by actually happening in the present, and of how the combination of determinable and indeterminable events produces free will. If you think these are answered questions, please make a separate post about it.

Not really relevant, but here is a reason for you: feeling X is having a representation of X in your model of self. Some things are encoded to have a representation in it and some are not, depending on whether this information is deemed important for the central planning agent by evolution. Global decision-making is extremely important, and maybe even the reason why a central planning agent exists in the first place, so the steps of this algorithm are encoded in the model of the self.

Call them real as much as you want; it's still either heads or tails when you actually flip the coin, not both.

Sigh. We've had multiple opportunities to discuss these issues before, and sadly you haven't managed to explain anything about libertarianism to my satisfaction and kept talking past me. I'm not sure whether it's more your fault or mine, but in any case I'd like to discuss these questions with someone who I have more hope will understand my position and explain theirs. So this is my last reply to you in this thread. I repeat my request to write your own post on the matter if you think you have something to say. Frankly, I find the fact that you write replies in a thread addressed to compatibilists a bit gauche.
Of course: they have to explain more. Of course, but that's just a special case of accurate map-making, not some completely unique problem. Determinism is a special case of indeterminism. Indeterminism is tautologically equivalent to real possibilities; since determinism is the special case, it is more in need of defense than the general case. I explained that in my PM of 1st July 2022, which you never replied to.

No libertarian makes the claim that undeterminable events become determined. Undetermined future events eventually happen... which does not make them causally determined in retrospect. (Once they have happened, we can determine their values, but that is a different sense of "determine".) I have already explained that in my July 1st reply, quoting previous explanations I had already given.

No one has put forward a reason why having a representation of X should feel like anything.

You are saying what...? That there cannot have been two possibilities, because there is only one actuality? But that there can be is the whole point of the word "possibility", even for in-the-mind possibilities.

You ignored my long message of July 1st. It's not that I am not trying to communicate.


May 06, 2023


It seems to me that your confusion is contending there are two past/present states (HA+A / HB+B) when in fact reality is simply H -> S -> C. There is one history, one state, and one choice that you will end up making. The idea that there is a HA and HB and so on is wrong, since that history H has already happened and produced state S.

Further, C is simply the output of your decision algorithm, which result we don't know until the algorithm is run. Your choice could perhaps be said to reveal something previously not known about H and S, but it doesn't distinguish between two histories or states, only your state of information about the single history/state that already existed. (It also doesn't determine anything about H and S that isn't "this decision algorithm outputs C under condition S".)

Indeed, even presenting it as if there are actually a CA and CB from which you will choose is itself inaccurate: you're already going to choose whatever you're going to choose, and that output is already determined even if you have yet to run the algorithm that will let you find out what that choice is. The alternatives CA and CB never actually exist either -- they are simulations you create in your mind as part of the decision algorithm.

Or to put it another way, since the future state C is a complex mix of your choice and other events taking place in the world, it will not actually match whatever simulated option you thought about. So the entire A/B disjunction throughout is about distinctions that only exist in your mental map, not in the territory outside your head.

So, the real world is H->S->C, and in your mind, you consider simulated or hypothetical A's and B's. Your decision process resolves which of A and B you feel accurately reflects H/S/C, but cannot affect anything but C. (And even there, the output was already determinable-in-principle before you started -- you just didn't know what the output was going to be.)
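The H -> S -> C picture above can be written down directly. (A minimal sketch under loud assumptions: `evolve` and `decide` are arbitrary stand-in pure functions, not models of physics or brains.)

```python
def evolve(history: tuple) -> tuple:
    """Stand-in physics: history H deterministically produces current state S."""
    return tuple(sorted(history))

def decide(state: tuple) -> str:
    """Stand-in decision algorithm: the choice C is a fixed function of S."""
    return "A" if len(state) % 2 == 0 else "B"

H = ("event1", "event2", "event3")   # the single history that actually happened
S = evolve(H)                        # the single current state it produced
C = decide(S)                        # the single choice that will be made

# Running the decision algorithm changes neither H nor S; it only reveals C,
# which was determinable-in-principle before the algorithm was run:
assert evolve(H) == S
assert decide(S) == C
assert C == "B"   # len(S) == 3, so this toy rule outputs "B"
```

The point of the sketch: there is exactly one H, one S, and one C; the "A-world vs. B-world" disjunction never appears anywhere in the territory, only inside whatever simulations `decide` might run internally.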

It seems to me that your confusion is contending there are two past/present states (HA+A / HB+B) when in fact reality is simply H -> S -> C. There is one history, one state, and one choice that you will end up making. The idea that there is a HA and HB and so on is wrong, since that history H has already happened and produced state S.

I guess I invited this interpretation with the phrasing "there are two relevantly-different states of the world I could be in". But what I meant could be rephrased as "either the propositions 'HA happened, A is the curre... (read more)

Well, it makes the confusion more obvious, because now it's clearer that HA/A and HB/B are complete balderdash. This will be apparent if you try to unpack exactly what the difference between them is, other than your choice. (Specifically, the algorithm used to compute your choice.)

Let's say I give you a read-only SD card containing some data. You will insert this card into a device that will run some algorithm and output "A" or "B". The data on the card will not change as a result of the device's output, nor will the device's output retroactively cause different data to have been entered on the card! All that will be revealed is the device's interpretation of that data. To the extent there is any uncertainty about the entire process, it's simply that the device is a black box - we don't know what algorithm it uses to make the decision.

So, tl;dr: the choice you make does not reveal anything about the state or history of the world (SD card), except for the part that is your decision algorithm's implementation. If we draw a box around "the parts of your brain that are involved in this decision", then you could say that the output choice tells you something about the state and history of those parts of your brain. But even there, there's no backward causality -- it's again simply resolving your uncertainty about the box, not doing anything to the actual contents, except to the extent that running the decision procedure makes changes to the device's state.

As other people have mentioned, rationalists don't typically think in those terms. There isn't actually any difference between those two ideas, and there's really nothing to "defend". As with a myriad other philosophical questions, the question itself is just map-territory confusion or a problem with word definitions. Human brains have lots of places where it's easy to slip on logical levels and end up with things that feel like questions or paradoxes when in fact what's going on is really simple once you put bac
This is hard to respond to, in part because I don't recognise my views in your descriptions of them, and most of what you wrote doesn't have a very obvious-to-me connection to what I wrote. I suspect you'll take this as further evidence of my confusion, but I think you must have misunderstood me.

No I'm not. But I don't know how to clarify this, because I don't understand why you think I am. I do think we can narrow down a 'moment of decision' if we want to, meaning e.g. the point in time where the agent becomes conscious of which action they will take, or when something that looks to us like a point of no return is reached. But obviously the decision process is a process, and I don't get why you think I don't understand or have failed to account for this.

I'm fully aware of that; as far as I know it's an accurate description of every version of compatibilism, not just 'LW compatibilism'.

How is 'revealing something about the past' retrocausal?

There is a difference: the meaning of the words 'free will', or in other words the content of the concept 'free will'. From one angle it's pure semantics, sure -- but it's not completely boring and pointless, because we're not in a situation where we all have the exact same set of concepts and are just arguing about which labels to apply to them.

This and other passages make me think you're still interpreting me as saying that the two possible choices 'exist' in reality somewhere, as something other than ideas in brains. But I'm not. They exist in a) my description of two versions of reality that hypothetically (and mutually exclusively) could exist, and b) the thoughts of the chooser, to whom they feel like open possibilities until the choice process is complete. At the beginning of my scenario description I stipulated determinism, so what else could I mean?

Even with the context of the rest of your comment, I don't understand what you mean by 'HA/A and HB/B are complete balderdash'. If there's something incoherent or
Direct quotes: And the footnote:

This is only trivially true in the sense of saying "whatever I can do to arrive at McDonalds, I can do to make my world the one where I walked in the direction of McDonalds". This is ordinary reality and nothing to be "bothered" by -- which obviates the original question's apparent presupposition that something weird is going on.

It's fine so long as HA/A and HB/B are understood to be the events and states during the actual decision-making process, and not referencing anything before that point, i.e.:

* H -> S -> (HA -> A) -> CA -> FA
* H -> S -> (HB -> B) -> CB -> FB

Think of H as events happening in the world, then written onto a read-only SD card labeled "S". At this moment, the contents of S are already fixed. S is then fed into a device which will then operate upon the data and reveal its interpretation of the data by outputting the text "A" or "B". The history of events occurring inside the device will be different according to whatever the content of the SD card was, but the content of the card isn't "revealed" or "chosen" or "controlled" by this process.

It isn't; but neither is it actually revealing anything about the past that couldn't have been ascertained prior to executing the decision procedure or in parallel with it. The decision procedure can only "reveal" the process and results of the decision procedure itself, since that process and result were not present in the history and state of the world before the procedure began.

Here is the relevant text from your original post: These definitions clearly state "I will choose" -- i.e., the decision process has not yet begun. But if the decision process hasn't yet begun, then there is only one world-state, and thus it is meaningless to give that single state two names (HA/A and HB/B). Before you choose, you can literally examine any aspect of the current world-state that you like and confirm it to your heart's content. You already know which events have happened and


May 06, 2023


Let's look at the mechanism closer:

"My future is FA, because my current state is A." This is standard causality: A causes FA by a sequence of steps that follow the laws of physics.

"My history was HA, because my current state is A." This is anthropic reasoning: technically, it was HA causing A by a sequence of steps, but if we ask "given that I am currently A, how does this limit my possible histories?" the answer might be that only such history is HA.

These two are not the same, but an exact explanation would require explaining exactly what is the difference between the past and the future, how the arrow of time works, etc., which I am not really sure myself how it works, and would probably involve making statements about quantum physics and other complicated things.

It might also work differently in different universes. For example, imagine a deterministic universe of the Game of Life, assuming that it can contain intelligent beings similar to us. For a current state A, there is only one future FA. But there could have been multiple different histories HA that resulted in A. (Or perhaps there was no such history, and the universe was created just now.)

The short version is that for practical purposes, the future and the past, causality and anthropic reasoning, seem to work differently.
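The Game-of-Life observation above — each state has exactly one future, but possibly many histories — is just the statement that a deterministic update rule need not be injective. A one-line toy rule (`step` is an arbitrary illustration, not Life itself) shows the asymmetry:

```python
def step(x: int) -> int:
    """A deterministic but non-injective update rule: every state has exactly
    one successor, but distinct states can share a successor."""
    return x // 2

# Forward in time: the future of a state is unique.
assert step(6) == 3
assert step(7) == 3

# Backward in time: state 3 has two distinct possible histories.
predecessors = [x for x in range(100) if step(x) == 3]
assert predecessors == [6, 7]
```

Forward reasoning ("given state 3, what happens next?") has a single answer; backward reasoning ("given that I am in state 3, what histories are compatible with that?") is a constraint over a set, which is the anthropic flavour described above.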

the gears to ascension

May 05, 2023


Nobody, including me, can know for sure what the choice is until I make it, and the choice depends on chaos. Even if it's technically deterministic, it depends on how I resolve the noise that is emitted from chaos. If there's true randomness in the world then that additionally helps me be the origin of the choice, rather than deterministic noise, but even with only noise from chaos rather than randomness, the rest of the universe cannot possibly know my choice until I stabilize on it, because sensitive dependence on initial conditions means that the details that determine how my brain will wiggle around through neural consensus space are unobservable to any other system no matter how superintelligent, and the choice gets to depend on input from my entire brain. In this sense, my brain is still the causal bottleneck through which my choices flow, and my entire brain is that bottleneck; noise from chaos means that if I might have chosen a way that mismatches my full network of preferences, my neurons get a chance to discuss it before settling. Biases and shortcut reasoning bypass this partially, of course.

As a result, even if technically my choice is strictly a logical consequence of my brain state, that logical consequence is not written to the universe until I resolve it, and the chaos means that every physical system besides my brain must retain logical uncertainty about my choice until it is resolved which way my neurons discuss and settle. In a fully deterministic universe, free will is logical hyperstition.

Some interesting resources on the topic. I have watched the videos, but I only skimmed the search results. Bulleted summaries via kagi.com's universal summarizer in 'key moments' mode.

  • The critical brain hypothesis suggests that the brain operates near a critical point, similar to a second order phase transition.
  • Second order phase transitions are characterized by a continuous change in properties, rather than a sudden jump.
  • The Ising model is a simple system that undergoes a second order phase transition and exhibits scale-free behavior.
  • Neuronal avalanches, or cascades of activity in networks of neurons, also exhibit scale-free behavior and are thought to be neural analogues to the Ising model.
  • The balance between excitation and inhibition is a key factor in determining whether a neural network operates in a subcritical, critical, or supercritical state.
  • The branch ratio, or the average number of neurons activated by a single upstream neuron, is a control parameter that governs the transition from decaying to amplifying activity in neural networks.
  • The critical point is where the balance between excitation and inhibition is optimal, allowing for efficient information processing in the brain.
  • Some links related to this summary on metaphor.systems - the ones I opened and skimmed:
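The branch-ratio bullet above can be illustrated with a minimal Galton-Watson-style avalanche simulation. (An assumption-laden toy, not the models from the talks: Poisson-distributed offspring with mean `m` as the branch ratio, one seed neuron, and a hard cap standing in for runaway activity.)

```python
import math
import random

def poisson(lam: float, rng: random.Random) -> int:
    """Knuth's algorithm for sampling a Poisson(lam) variate."""
    threshold = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= threshold:
            return k
        k += 1

def avalanche_size(m: float, rng: random.Random, cap: int = 10_000) -> int:
    """Total activity triggered by one seed neuron when each active neuron
    activates a Poisson(m) number of downstream neurons (m = branch ratio)."""
    active, total = 1, 1
    while active and total < cap:
        active = sum(poisson(m, rng) for _ in range(active))
        total += active
    return total

rng = random.Random(42)
# Subcritical (m < 1): avalanches die out; mean total size is 1/(1-m) in theory.
subcritical = [avalanche_size(0.8, rng) for _ in range(2000)]
# Supercritical (m > 1): a sizable fraction of avalanches run away to the cap.
supercritical = [avalanche_size(1.2, rng) for _ in range(200)]

assert sum(subcritical) / len(subcritical) < 10
assert max(supercritical) >= 10_000
```

At m = 1 exactly, avalanche sizes become scale-free (a power-law distribution), which is the critical point the bullets describe.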

If it's new to you, I'd also suggest an overview of chaos theory:

Or if 10 minutes is a bit long, here's a 1-minute animation showing divergence among chaotic trajectories that start out coherent. There's a moment at :26 where the pendulums lose sync, briefly all at the same edge of stability; however, this is not a chaotic system which seeks the edge of stability, and the pendulums quickly fall in different directions off the saddle point. In contrast, a system at the edge of chaos is on a saddle point at almost all times!
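The sensitive-dependence point can be demonstrated in a few lines with the logistic map at r = 4, a standard textbook chaotic system (the specific initial values below are arbitrary): the dynamics are perfectly deterministic, yet two near-identical starting points decorrelate within a few dozen steps.

```python
def logistic(x: float, r: float = 4.0) -> float:
    """Logistic map at r = 4: fully chaotic, yet perfectly deterministic."""
    return r * x * (1 - x)

def trajectory(x0: float, n: int = 60) -> list:
    xs = []
    for _ in range(n):
        x0 = logistic(x0)
        xs.append(x0)
    return xs

# Deterministic: the same initial condition reproduces the trajectory exactly.
assert trajectory(0.123456789) == trajectory(0.123456789)

# Chaotic: a 1e-9 difference in initial conditions grows to macroscopic scale.
a = trajectory(0.123456789)
b = trajectory(0.123456790)
gaps = [abs(x - y) for x, y in zip(a, b)]
assert gaps[0] < 1e-6      # initially indistinguishable...
assert max(gaps) > 0.1     # ...later, order-one divergence
```

So even granting strict determinism, an outside observer with imperfectly measured initial conditions cannot track the trajectory for long -- the "unpredictability without indeterminism" being described here.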


May 05, 2023


What compatibilists standardly mean by a free choice is a choice that is not forced or hindered. Neither of your choices is clearly free in that sense.

Which seems to give me just as much control[4] over the past as I have over the future.

Ok, but that could be zero, in both cases. Controlling the future, in the sense of being able to steer towards different possible futures, is specifically what's missing from compatibilist as opposed to libertarian free will.

I think what most people are trying to point at when they talk about free will is something along the lines of ‘ability to do otherwise’ in the sense that, when looking at a choice in retrospect, we would say a person ‘had the ability to do otherwise’ than they actually did.

Compatibilism is only able to account for CHDO ('could have done otherwise') in the weak sense that you weren't being forced to do one particular thing by another agent. Nonetheless, only one decision was ever possible, given determinism.

What compatibilists standardly mean by a free choice is a choice that is not forced or hindered. Neither of your choices is clearly free in that sense.

To clarify: I meant to refer to a choice that is free from the kinds of hindrance or coercive influence that would render it 'unfree' in the compatibilist sense.

Ok, but that could be zero, in both cases. Controlling the future, in the sense of being able to steer towards different possible futures, is specifically what's missing from compatibilist as opposed to libertarian free will.

Are you a compatibilist y... (read more)

I'm not a compatibilist, and I reject compatibilism because it can't explain that kind of issue. There's a standard and often-repeated response made by the compatibilists here, along the lines of "the future depends on your decisions, because it won't happen without you making the decision". Under determinism, events still need to be caused, and your (determined) actions can be part of the cause of a future state that is itself determined, that has probability 1.0. Determinism allows you to cause the future, but it doesn't allow you to control the future in any sense other than causing it. It allows, in a purely theoretical sense, "if I had made choice b instead of choice a, then future B would have happened instead of future A"... but without the ability to have actually chosen b.
What is missing here is a definition of 'people', to determine how we are effective causes of anything. When you adopt a compatibilist view, you are already implicitly accepting a deflationary view of free will. There's no interesting sense in which people cause things to happen 'fundamentally' (or rather, non-arbitrarily; it's a matter of setting boundaries); the idea of compatibilism is just to lay down a foundation for moral responsibility. They are talking past each other, in a way: it becomes a discussion about semantics. The different deflationary conceptions of free will are mostly just trying to repurpose the expression 'free will' to fit the needs of our society and our 'naive' understanding of people's behavior. Sure, our predispositions bias the distribution of possible actions we're going to take such that, counterfactually, if we had different predispositions, we would have acted differently. That's all there is to it. What a mechanistic explanation of choice-making in our brains looks like is a different question, but compatibilism is largely agnostic about that.

LW is a known hotbed of compatibilism, so here's my question:

That's not been my impression. I would have summarized it more as "LW (a) agrees that LFW doesn't exist and (b) understands that debating compatibilism doesn't make sense because it's just a matter of definition"

Personally, I certainly don't consider myself a compatibilist (though this is really just a matter of preference since there are no factual disagreements). My brief answer to "does free will exist" is "no". The longer answer is the within-physics stick figure drawing.

Perhaps what's really going on to give me that impression is:

  • LW is confident that libertarian free will is incoherent or at least non-existent
  • so when people here talk (explicitly or implicitly) about exercising free will, they usually mean it in the compatibilist sense, and often treat that as the only possible sense

Which doesn't actually imply that a high proportion would identify themselves as compatibilists. (I thought there would be survey results to clear this up, but all I could find with a quick search was a very old one with only hard-to-decode raw data accessible.)

I am not a compatibilist, so not my answer, but Sean Carroll says, in his usual fashion, that free will is an emergent phenomenon, akin to Dennett's intentional stance. This AMA has an in-depth discussion https://www.preposterousuniverse.com/podcast/2021/05/13/ama-may-2021/. I bolded his definition at the very end.

whether you’re a compatibilist or an incompatibilist has nothing at all to do with whether the laws of physics are deterministic. I cannot possibly emphasize this enough. What matters is that there are laws. Whether those laws are deterministic or not makes zero difference to whether you’re a compatibilist or an incompatibilist. You both believe that there are laws, okay.

Whether you believe the fundamental laws are a pilot wave theory of quantum mechanics or a spontaneous collapse theory of quantum mechanics, one of which is deterministic and one of which is not, who cares? That doesn’t affect whether you’re a compatibilist or not, so don’t label yourself a hard determinist, that is not the point. You would still be anti-free will if you’re an incompatibilist, even if the GRW theory of quantum mechanics or Penrose’s theory turns out to be correct, even if determinism is not there. It’s the fact that there are laws that matters.

The compatibilist, what are you compatible about? You’re saying that the belief in free will, that describing human beings as agents that make decisions, that make choices, is compatible with human beings being made of either neurons or elementary particles or whatever you want that obey the laws of physics. A compatibilist says you can describe the world in different ways that are compatible with each other, even though they use very different vocabularies.

One way of describing the world is sort of at the microscopic level where you’re made of a bunch of particles. They’re obeying the laws of physics, whatever those laws, are deterministic or indeterministic, and there’s no free will there. Free will does not enter the Lagrangian for the standard model of particle physics, okay. No one thinks it does. And then there’s another level, there’s a biological level, and then there’s finally a human level where you have people, okay, and the compatibilist says, people make choices, this doesn’t seem like a very controversial thing to say, but apparently it is.

So here’s one way of thinking about it. Alice and Bob are in a car. Alice is driving, Bob is navigating with his Google Maps and they’re looking for a restaurant, and Bob says, oh, turn left up here at this intersection, the restaurant will be right there once we turn left. Alice turns left and there’s no restaurant there. And Alice says, what’s going on? You told me to turn left. I turned left because you told me to turn left. And Bob says, yeah, no, I knew the restaurant wasn’t there, but the laws of physics said that that would be what I would say, and that’s what you would do, so I’m not really to blame for anything that happened.

Nobody in their right mind talks that way. Everyone in the world who is not crazy says Alice made a choice to turn left, why? Because Bob told her to turn left and she trusted that Bob was going to give her the correct information, right? Bob made a choice to tell Alice something, why? Well, I don’t know, there’s something perverse in Bob’s mind that made him play a little game or something like that. Literally nobody refuses to talk that way in the real world. Now, there are people who pretend to not talk that way, they will say, oh no, Alice and Bob didn’t make any choices, but when it comes right down to it, these people are constantly trying to convince you to choose to not believe in free will.

So you can’t act that way, you can’t live in the world, because it’s not a good description of the world at the human level to act as if human beings don’t make choices. The reason why compatibilists think that it’s sensible to talk about human beings as agents that make choices is because that’s the best theory we have of people, and it literally is everyone’s theory. There’s no one who doesn’t have that theory, because it works, it’s true. And I did the podcast with Sam Harris a long time ago, so to be, that’s a little frustration that comes out from me, and I will vent my frustration here, and it’s not just Sam Harris, it’s many other people.

I don’t think I have ever met an incompatibilist who could correctly describe to me what compatibilists think. There’s… Only straw compatibilists live in the mind of incompatibilists, and the incompatibilists seem to think that if they really… If you really believe in the laws of physics, they can construct a logical cage to get you to admit that we are made of particles that obey the laws of physics. But I admit that, and the discussion with Sam was incredibly frustrating, ’cause that’s what he was trying to do, he was trying to say like, okay, if I knew all of the laws in all of the particles and I was Laplace’s demon, and he’s pushing against an open door. I admit all that.

If you describe the universe as microscopically in the laws of physics, then it obeys the laws of physics, and there’s no free will there, that’s just not the point from the point of view of a compatibilist. And this is why it’s very important to realize that Laplace’s demon doesn’t exist, and none of us is anywhere close to Laplace’s demon. Now, there are interesting questions to talk about that are not that question. The interesting questions are, and this gets into some of the questions here, what about the edge cases where it becomes less and less useful to describe people as agents making choices based on good reasons? Like what if you are a drug addict or you have some brain damage or something like that, and you’re just… You’re under a compulsion that forces you to do something.

And then I would say, indeed, it becomes less and less useful to describe that person as an agent making rational choices. And we do… We don’t describe those people as robustly as agents making rational choices, so that discussion, the practical level discussion about how to treat people who suffer in different ways from an inability to be a completely rational agent, which we all do, none of us are completely rational, there are degrees of it, so how do you deal with that in the real world? That’s an interesting discussion to have.

But this whether or not to label it free will discussion is to me the most boring thing in the world, because there aren’t people who don’t talk about other people as choice makers in my mind. And if you are someone who believes in the laws of physics deep down, but you say, but I will, of course, in my everyday life, I will talk about people making choices, then there’s a word for what you are, it’s called compatibilism. That’s what you are.


Whether you’re a compatibilist or an incompatibilist has nothing at all to do with whether the laws of physics are deterministic.

Yes. It's a conceptual issue to do with what "free will" means ... and a physicist would have no special insight into that.

You’re saying that the belief in free will, that describing human beings as agents that make decisions, that make choices,

"Making choices" is setting the bar very low indeed. I don't think Carroll understands libertarians too well.

There are a number of main concerns about free will:

  1. Concerns about conscious volition, whether your actions are decided consciously or unconsciously.

  2. Concerns about moral responsibility, punishment and reward.

  3. Concerns about "elbow room": the ability to "have done otherwise", regret about the past, whether and in what sense it is possible to change the future.

And this is why it’s very important to realize that Laplace’s demon doesn’t exist, and none of us is anywhere close to Laplace’s demon.

Which has no bearing at all on the existence of determinism, or free will.

Determinism also needs to be distinguished from predictability. A universe that unfolds deterministically is a universe that could be predicted by an omniscient being which can both capture a snapshot of all the causally relevant events and have perfect knowledge of the laws of physics.

The existence of such a predictor, known as Laplace's demon, is not a prerequisite for the actual existence of determinism; it is just a way of explaining the concept. It is not contradictory to assert that the universe is deterministic but unpredictable.

If you are unable to make predictions in a deterministic universe, it is still deterministic, and you still lack the ability to have done otherwise in the libertarian sense. So the existence of free will still depends on whether that ability is conceptually important, which can't be settled by predictability. Predictability does not matter in itself; it matters only insofar as it relates to determinism.

Even if you are not a compatibilist, there are certainly some non-free choices (made by non-humans, or by whatever fails your criteria), and they would exhibit the same problem.

Could you give an example? (I'm not trying to be a smartarse, just trying to make sure I understand the point you're making.)

For example, there is a rubber ball, and the world could be in two states:

State A: Past events HA have happened, current state of the world is A, the ball will fly up, future FA will happen.

State B: Past events HB have happened, current state of the world is B, the ball will fall down, future FB will happen.

When the ball moves, it chooses/reveals which of those two states of the world is reality. Which seems to give the ball just as much control over the past as it has over the future.

The confusion is resolved if you realize that both A and B here are mental simulations. When you observe the ball moving, it allows you to discard some of your simulations, but this doesn't affect the past or future, which already were whatever they were.

To view the ball as affecting the past is to confuse the territory (which already was in some definite state) with your map (which was in a state of uncertainty re: the territory).

Thanks. I get what you mean now, and while I instinctively want to respond that it's a bit beside the point, when I think it through it probably does cut to the core of why a compatibilist would be unbothered by this.