After a while, you are effectively learning the real skills in the simulation, whether or not that was the intention.
Why the real skills, rather than whatever is at the intersection of 'feasible' and 'fun/addictive'? Even if the consumer wants realism (or thinks that they do), they are unlikely to be great at distinguishing real realism from fantasy realism.
FWIW, the two main online chess sites forbid the use of engines in correspondence games. But both do allow the use of opening databases.
(https://www.chess.com/terms/correspondence-chess#problems, https://lichess.org/faq#correspondence)
I agree that your model is clearer and probably more useful than any libertarian model I'm aware of (with the possible exception, when it comes to clarity, of some simple models that are technically libertarian but not very interesting).
Do you call it illusion because the outcomes you deem possible are not meta-possible: only one will be the output of your decision making algorithm and so only one can really happen?
Something like that. The SEP says "For most newcomers to the problem of free will, it will seem obvious that an action is up to an agent only if she had the freedom to do otherwise.", and basically I a) have not let go of that naive conception of free will, and b) reject the analyses of 'freedom to do otherwise' that are consistent with complete physical determinism.
I know it seems like the alternatives are worse; I remember getting excited about reading a bunch of Serious Philosophy about free will, only to find that the libertarian models that weren't completely mysterious were all like 'mostly determinism, but maybe some randomness happens inside the brain at a crucial moment, and then everything downstream of that counts as free will for some reason'.
But basically I think there's enough of a crack in our understanding of the world to allow for the possibility that either a) a brilliant theory of libertarian free will will emerge and receive some support from, or at least remain consistent with, developments in physics; or b) libertarian free will is real but just inherently baffling, like consciousness (qualia) or some of the impossible ontological questions.
Why do you think LFW is real?
I'm not saying it's real -- just that I'm not convinced it's incoherent or impossible.
And in this sense, what you have is some inherent randomness within the decision-making algorithms of the brain
This might get me thrown into LW jail for posting under the influence of mysterianism, but:
I'm not convinced that there can't be a third option alongside ordinary physical determinism and mere randomness. There's a gaping hole in our (otherwise amazingly successful and seemingly on the way to being comprehensive) physical picture of reality: what the heck is subjective experience? From the objective, physical perspective there's no reason anything should be accompanied by feelings; but each of us knows from direct experience that at least some things are.

To me, the Hard Problem is real but probably completely intractable. Likewise, there are some metaphysical questions that I think are irresolvably mysterious -- Why is there anything? Why this in particular? -- and they point to the fact that our existing concepts, and I suspect our brains, are inadequate to the full description or explanation of reality.

This is of course not a good excuse for an anything-goes embrace of baseless speculation or wishful thinking; but the link between free will and consciousness, combined with the baffling mystery of consciousness (in the qualia sense), leaves me open to the possibility that free will is something weird and different from anything we currently understand and maybe even inexplicable.
This is hard to respond to, in part because I don't recognise my views in your descriptions of them, and most of what you wrote doesn't have a very obvious-to-me connection to what I wrote. I suspect you'll take this as further evidence of my confusion, but I think you must have misunderstood me.
The confusion in your original post is that you're not treating "choice" as a process with steps that produce an output, but rather as something mysterious that happens instantaneously while somehow being outside of reality.
No I'm not. But I don't know how to clarify this, because I don't understand why you think I am. I do think we can narrow down a 'moment of decision' if we want to, meaning e.g. the point in time where the agent becomes conscious of which action they will take, or when something that looks to us like a point of no return is reached. But obviously the decision process is a process, and I don't get why you think I don't understand or have failed to account for this.
LW compatibilism isn't believing that choice magically happens outside of spacetime while everything else happens deterministically, but rather including your decision procedure as part of "things happening deterministically".
I'm fully aware of that; as far as I know it's an accurate description of every version of compatibilism, not just 'LW compatibilism'.
retrocausal, in the sense of "revealing" or "choosing" anything about the past
How is 'revealing something about the past' retrocausal?
As other people have mentioned, rationalists don't typically think in those terms. There isn't actually any difference between those two ideas, and there's really nothing to "defend".
There is a difference: the meaning of the words 'free will', or in other words the content of the concept 'free will'. From one angle it's pure semantics, sure -- but it's not completely boring and pointless, because we're not in a situation where we all have the exact same set of concepts and are just arguing about which labels to apply to them.
the only place where hypothetical alternative choices exist is in the decider's brain
This and other passages make me think you're still interpreting me as saying that the two possible choices 'exist' in reality somewhere, as something other than ideas in brains. But I'm not. They exist in a) my description of two versions of reality that hypothetically (and mutually exclusively) could exist, and b) the thoughts of the chooser, to whom they feel like open possibilities until the choice process is complete. At the beginning of my scenario description I stipulated determinism, so what else could I mean?
Well, it makes the confusion more obvious, because now it's clearer that HA/A and HB/B are complete balderdash.
Even with the context of the rest of your comment, I don't understand what you mean by 'HA/A and HB/B are complete balderdash'. If there's something incoherent or contradictory about "either the propositions 'HA happened, A is the current state, I will choose CA, FA will happen' are all true, or the propositions 'HB happened, B is the current state, I will choose CB, FB will happen' are all true; the ones that aren't all true are all false", can you be specific about what it is? Or if the error is somewhere else in my little hypothetical, can you identify it with direct quotes?
I should clarify that I'm not arguing for libertarianism here, just trying to understand the appeal of (and sometimes arguing against) compatibilism.
(FWIW, I don't think libertarian free will is definitely incoherent or impossible, and combined with my incompatibilism that makes me in practice a libertarian-by-default: if I'm free to choose which stance to take, libertarianism is the correct one. Not that that helps much in resolving any of the difficult downstream questions, e.g. about when and to what extent people are morally responsible for their choices.)
Here is a neat compatibilist model, according to which you (and not a rock) have the ability to select between different outcomes in a deterministic universe, and which explicitly specifies what 'possible' means: possibility is in the mind, and so is the branching of futures. When you are executing your decision-making algorithm, you mark some outcomes as 'possible' and backpropagate from them to the current choice you are making. Thus, your mental map of reality has branches of possible futures between which you are choosing. By design, the algorithm doesn't allow you to choose an outcome you deem impossible. If you already know for certain what you will choose, then you've already chosen. So the initial intuition is kind of true. You do need 'possible futures' to exist so that you can have free will, i.e. to perform the decision-making that separates you from the rock. But the possibility and the branching futures do not need to exist separately from you. They can just be part of your mind.
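The model above can be sketched as a toy program (everything here -- the function names, the scoring rule, the example world -- is invented for illustration, not anything from the thread): the 'possible futures' exist only as entries in the agent's internal map, and the choice is the deterministic output of evaluating them.

```python
# Toy sketch of the compatibilist model: 'possible' futures live only
# inside the agent's model of the world, and the choice is the
# deterministic result of evaluating them. All names/values are invented.

def choose(current_state, actions, predict, score):
    """Deterministically pick an action by simulating imagined futures."""
    # The branching futures exist only in this dict -- the agent's map,
    # not the territory.
    imagined_futures = {a: predict(current_state, a) for a in actions}
    # By design, only futures the agent deems possible are considered
    # (predict returns None for outcomes it marks impossible).
    possible = {a: f for a, f in imagined_futures.items() if f is not None}
    # 'Backpropagate' from imagined outcomes to the current choice.
    return max(possible, key=lambda a: score(possible[a]))

# Invented one-step world: each action adds its value to a counter,
# and the agent prefers futures whose counter is close to 10.
predict = lambda s, a: s + a
score = lambda future: -abs(future - 10)

print(choose(3, [1, 5, 7, 9], predict, score))  # -> 7, since 3 + 7 = 10
```

The point of the sketch is that nothing indeterministic happens anywhere: the `imagined_futures` dict is the only place the alternatives 'exist', yet the selection among them is still a real computation that a rock does not perform.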
I'm sorry to give a repetitive response to a thoughtful comment, but my reaction to this is the predictable one: I don't think I'm failing to understand you, but what you're describing as free will is what I would describe as the illusion of free will.
Aside from the semantic question, I suspect a crux is that you are confident that libertarian free will is 'not even wrong', i.e. almost meaninglessly vague in its original form and incoherent if specified more precisely? So the only way to rescue the concept is to define free will in such a way that we only need to explain why we feel like we have the thing we vaguely gesture at when we talk about libertarian free will.
If so, I disagree: I admit that I don't have a good model of libertarian free will, but I haven't seen sufficient reason to completely rule it out. So I prefer to keep the phrase 'free will' for something that fits with my (and I think many other people's) instinctive libertarianism, rather than repurpose it for something else.
It seems to me that your confusion is contending there are two past/present states (HA+A / HB+B) when in fact reality is simply H -> S -> C. There is one history, one state, and one choice that you will end up making. The idea that there is a HA and HB and so on is wrong, since that history H has already happened and produced state S.
I guess I invited this interpretation with the phrasing "there are two relevantly-different states of the world I could be in". But what I meant could be rephrased as "either the propositions 'HA happened, A is the current state, I will choose CA, FA will happen' are all true, or the propositions 'HB happened, B is the current state, I will choose CB, FB will happen' are all true; the ones that aren't all true are all false".
I'm not sure how much that rephrasing would change the rest of your answer, so I won't spend too much time trying to engage with it until you tell me, but broadly I'm not sure whether you are defending compatibilism or hard determinism. (From context I was expecting the former, but from the text itself I'm not so sure.)
I don't see a difference between that argument and saying that a jumbo jet doesn't cause anything, only its atoms do.
Happy to leave this here if you've had enough, but if you do want a response I'll need more than that to go on. I've been struggling to understand how your position fits together, and that doesn't really help. (I'm not even sure exactly what you're referring to as 'that argument'. Admittedly I am tired; I'll take a break now.)
Agreed, and that's part of why I see mysterious libertarian free will as not having been ruled out.
I don't think they lack internal coherence; you haven't identified a contradiction in them. But one point of imagining them is to highlight the conceptual distinction between, on the one hand, all of the (in principle) externally observable features or signs of consciousness, and, on the other hand, qualia. The fact that we can imagine these coming completely apart, and that the only 'contradiction' in the idea of zombie world is that it seems weird and unlikely, shows that these are distinct (even if closely related) concepts.
This conceptual distinction is relevant to questions such as whether a purely physical theory could ever 'explain' qualia, and whether the existence of qualia is compatible with a strictly materialist metaphysics. I think that's the angle from which Yudkowsky was approaching it (i.e. he was trying to defend materialism against qualia-based challenges). My reading of the current conversation is that Signer is trying to get Carl to acknowledge the conceptual distinction, while Carl is saying that while he believes the distinction makes sense to some people, it really doesn't to him, and his best explanation for this is that some people have qualia and some don't.