I decided LessWrong would be more impressive to me with a particular, well-defined change. I decided I would be the change I want to see in the world, and see what happens. As a bonus, if someone is already doing this and I just haven't noticed (due to insufficient diligence, or due to constructive laziness), or I'm wasting both our times for some other reason, I understand there's a downvote button.

 

 

First, I should define the explanandum. I feel like I'm free. I have a direct sensation of basic freedom-ness. Most evidence seems to suggest my actions are pre-determined, which feels entirely different to believe.

Distinguishing all other differently-feeling things is easy - there's always an actual objective (if sometimes illusory) difference that the different sensations correspond to, and causing that property to converge causes the different corresponding sensations to converge.

For example, food prepared by my hated enemy is not something I want to eat. Food cooked by someone I somewhat dislike I prefer to avoid eating. I'm fine with food made by someone I don't know is my hated enemy. These dishes might be atom-for-atom identical, and this might be an irrational feeling, but even so, there is an objective property that my feeling is a function of. Similarly, if I mistakenly believe the chef is my hated enemy, simply informing me otherwise will repair my perception of the comestibles. There are experiments you can do to safely predict my reaction.

Based on this long-standing, unviolated pattern, I concluded that, obviously, there must be some experiment that can identify what causes me to like freedom and dislike determinism. It may be a rationally irrelevant feature. I may care about it simply because I care about it, pre-rationally.

I cannot find any such experiment. I can find one feature, but it only shows I shouldn't be able to tell the difference.

 

 

Like most children, I was naive. I can prove I have free will by doing something against my self-interest, right? Ha ha, silly mini-Alrenous! This only shows that I value proving my freedom more than whatever I'm sacrificing, no matter what I'm sacrificing. I cannot act against my self-interest. (You can argue with me about psychological egoism, but I won't change my mind, 99.5% ± 0.3%, by observation.) Similarly, if you find a decision I apparently cannot make, it doesn't prove I lack free will, it just proves I care more about not doing that thing than about your opinion.

This line of logic generalizes tremendously, so I tried turning the question around. What can't I physically do without free will? This question is very easy to answer in the age of computers, and the answer is nothing. Anything I can do, you can program a computer to copy. Heck, most if not all of it can happen by pure chance. (If you can think of a counter-example, please let me know.)

Perhaps, I asked myself, I can think things - make calculations - that are not possible for a computer? And therefore, while I wouldn't have access to different actions, I would choose better ones.

So, what, I can be illogical? Either I'm concluding what the evidence shows, or I'm not. And, again, no matter how advanced my epistemology, once I come up with it, you can simply extract the rules and teach them to a computer. If I'm concluding something the evidence doesn't show (but is truer) ... well, I'm just not, free will isn't clairvoyance. (Though it amuses me to imagine it. Learn things you can't know, with Free Will™!) Note this conclusion is recursive. If you can copy my epistemology, you can also copy my method for learning about or inventing epistemologies.

 

This conclusion is bolstered by a second line of evidence. What are the consequences of assuming free will vs. determinism? In a stochastic universe, there are no consequences at all. (In a non-random universe, decisions would be obviously acausal, barring certain special conditions.)

For example, naive mini-Alrenous, like most, thought that determinism invalidates the legal system and the idea of responsibility. (Experiments I'm too lazy to reference show that believing in determinism increases asocial behaviour, which I interpret to mean they think they can get away with it or there's no reason not to.) This is true in the sense that it invalidates classical responsibility; however, it immediately replaces it with a bit-for-bit identical consequence. Instead of punishing crime to encourage the decision against further crime, you punish crime to deterministically lower the odds of it happening again. Instead of punishing crime so that criminals see a bad future in crime and decide not to, you punish crime to alter the incentives which cause criminals to perpetrate. (Is it hard for you to tell these descriptions apart, or just me? My perspective, having concluded they're the same, is clouding my judgment.) In both cases, I can transform the 'responsible' party from the one who is physically responsible for the undesired outcome into the party who needs to be punished if you want to safely expect less crime in the future. Under free will, it is the decision-maker. Under determinism, it's usually the biological entity who instantiated the act - in other words, the exact same entity.

(Constructive criticism request: Did I beat that one into the ground? Did I not explain it enough? Did I make the common programmer mistake of minutely describing the obvious and glossing over the difficult bit? Should I have asked for constructive criticism elsewhere?)

 

I feel like doing another example. Free will apparently gives me the option to choose, out of any possible future world I have the power to reach, the one I value most. Determinism will cause me to choose the possible future I most value.

 

There are many further examples, if this isn't enough for you. I didn't stop looking once I'd disproven my hypothesis; for my own use I intentionally beat it into the ground.

 

When informed that it wasn't cooked by my hated enemy, and was not only atom-for-atom identical (or close enough, as far as I can measure) to a meal I'd like to eat, but had the same history as a meal I'd like to eat, my perception of the hospitality changed to be bit-for-bit identical to that of a meal I'd like to eat.

When informed that determinism is identical to free will, the needle didn't even quiver. Free will is great and boo to determinism, and screw you if you try to change my mind.

 

 

That's not all.

Free will is still different from determinism. Actually, I was correct, if in a very, very limited sense.

If an electron has free will, it can violate statistics. Suppose that, instead of (insert your account of stochasticity) determining whether I will measure it as spin up or spin down, it decides. If it wants, it can pick spin down ten times, a hundred, a thousand - however many in a row it wants. It can, according to my statistics, be arbitrarily unlikely.

This is because my so-called statistics for a free will electron were bunk. I can collate all the decisions it made in the past, add them up and divide [down] by [total], but it doesn't mean anything. A free will electron doesn't have a probability. It is not likely or unlikely to pick either. There is not only no fact of the matter about which it will pick, there's no probabilistic fact of the matter. Again, as demonstrated by the fact that no matter what distribution you measure or derive, the electron has the power to prove you arbitrarily wrong.
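To put a number on 'arbitrarily wrong', here's a minimal arithmetic sketch (the 50% figure and the function name are mine, purely illustrative): under any fixed distribution I derive, a long enough run of identical picks can be made as improbable as the electron likes.

```python
# Minimal sketch: under any fixed spin-down probability p that my collated
# statistics produce, a run of n identical picks has probability p**n,
# which can be driven below any threshold. The numbers are illustrative.

def run_probability(p: float, n: int) -> float:
    """Probability, under a fixed distribution, of n spin-downs in a row."""
    return p ** n

p_down = 0.5  # say my past data gives a 50/50 split
for n in (10, 100, 1000):
    print(f"{n} downs in a row: {run_probability(p_down, n):.2e}")
# 10:   9.77e-04
# 100:  7.89e-31
# 1000: 9.33e-302
# Whatever 'arbitrarily unlikely' threshold you pick, some finite run beats it.
```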

This is another reason I earlier needed the disclaimer, 'in a stochastic universe.' In principle, proving that humans have free will in this sense is straightforward, if perhaps somewhat expensive. In practice, humans are a chaotic system embedded in a chaotic environment with true-random inputs and it is impossible to distinguish that from a human with free will. Similarly, you could put humans in an environment where most chaos cancels out, but even a good-faith critic can always say there's too much chaos.

Assume Foundation. Hari Seldon. Psychohistory. I accurately predict the behaviour of large chunks of humans for a reasonable amount of time, and my model isn't just overtuned or something. Then, suddenly, the humans deviate. Did they just make a free will decision, and my statistics were bogus? Did I simply not have enough decimal points to forecast out that far? Or were my statistics simply unable to cope with the accumulated weight of the random walk amplified by the chaotic system?

But let's assume I'm wrong about this conclusion too. Let's say I get all the decimal points. Yes, yes, a universe cannot simulate itself. Actually, it can - it cannot predict itself, but it can retrodict, by simulating one half of itself and then simulating the other half and then combining the simulations. So, I use parts of human technology to retrodict the rest of human existence. If my audited retrodiction, starting from the actual initial conditions, successfully replicates the events, then I have successfully modelled humanity and can safely say I understand it, and it is deterministic.

Only, in a stochastic universe, this is a lot like a cloaked singularity. Technically speaking, in a world idealized to within an inch of its life, I can retrodict humanity. In practice, the combinatorics grow...fast. Exponentially? Hyper-exponentially? Whatever the exact order, it is correctly summed up as 'fucking fast.' However much computing power I throw at the problem, it will become negligible in short order. How far can I retrodict with a computer on the verge of collapsing into a black hole, before civilization collapses catastrophically enough to destroy the machine? That far, plus a femtosecond further, will take many orders of magnitude longer to compute.
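A toy calculation of what that shape implies, on my own simplifying assumption (not argued for here) that retrodiction cost grows exponentially with the span covered; the budget numbers are invented, the point is the scaling.

```python
# Toy illustration, not the actual combinatorics: if the cost of retrodicting
# grows exponentially with simulated time, then throwing vastly more computer
# at the problem buys almost no additional reach. All numbers are invented.

import math

cost_factor = 2.0   # assumed cost multiplier per femtosecond retrodicted
budget = 1e50       # arbitrary units of compute

reach = math.log(budget, cost_factor)              # femtoseconds affordable
reach_bigger = math.log(budget * 1000, cost_factor)

print(f"reach with budget:       {reach:.1f} fs")
print(f"reach with 1000x budget: {reach_bigger:.1f} fs")
# ~166.1 fs vs ~176.1 fs: a thousand times more computer, ten more femtoseconds.
```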

Actually doing even part of this computation is way beyond the possible, even assuming all the decimal places.

 

This time, while free will and determinism aren't actually identical, they're still indistinguishable. The evidence may exist, but I can't gather it.

 

 

Here's a further problem. I know a (pretty abstract) blueprint for a free-will machine. (I estimate you can play with a bank of these for about $100 000, peanuts for serious physics labs.)

The goal is to make statistics meaningless, with the test that, no matter how much data you accumulate, the machine will be able to prove you arbitrarily wrong.

Imagine a spontaneous event. A statue of liberty made of butter appears in near-Earth orbit.

Okay, say scientists. That was pretty unexpected. But, we can say that, if, for some unknown reason, a statue of liberty made of butter (solmb) appears, 100% of the time it will appear in near Earth orbit. We don't have much data so we're not very confident of this conclusion, but it's the best we can do right now. (You see where this is going?)

The next solmb appears in far Earth orbit. Okay, 50% near, 50% far. The next, Alpha Centauri. The next, mediumish orbit. Perhaps next, there's a cluster around a particular planet in Alpha Centauri, scientists stop panicking, start building some distribution...but this is a truly spontaneous event. Solmbs can appear anywhere. No matter what description or distribution you make up, in an infinite universe, the solmbs can and will appear in places infinitely unlikely. (Even if you can't detect them right now, the true distribution changes.) Solmbs keep popping up at random intervals at random places, making a total mockery of both science and the law of conservation of energy.

To do this for real, with no conservation violation, hook up a quantum particle to a measurement device, and hook that in double-recursion to a computation device. The instrument measures the particle's random collapse from superposition, and feeds it in bit form to the computer, which produces some output, which is then used to A) set the probabilities on the various possible quantum outcomes and B) re-program itself, so it computes something different next time.
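Here's a minimal toy sketch of how I picture that loop, with an ordinary pseudo-random number standing in for the quantum collapse; the class name, state sizes, and update rules are placeholders of mine, not part of the blueprint.

```python
# Toy sketch of the double-recursion loop. random.random() stands in for the
# true-random quantum measurement; the arithmetic is an arbitrary placeholder.

import random

class FreeWillMachine:
    def __init__(self):
        self.p_up = 0.5   # (A) probability that the next measurement reads 1
        self.program = 1  # (B) stand-in for the self-modifying computation

    def step(self) -> int:
        # measure the particle: collapses to 1 with the current probability
        bit = 1 if random.random() < self.p_up else 0
        # the computer digests the bit and produces some output...
        output = (self.program * 31 + bit) % 257
        # (A) ...which sets the odds on the next quantum outcome...
        self.p_up = (output + 1) / 258
        # (B) ...and re-programs the computer, so it computes differently next time
        self.program = output
        return bit

machine = FreeWillMachine()
print([machine.step() for _ in range(20)])
# Because every output feeds back into both the odds and the program, the
# distribution of each future bit depends on the entire past bit stream.
```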

This machine can random-walk to any output bit stream (which is necessary - it needs an infinite possibility space because I'm going to divide by it), and because of the double-recursion, the probability of all future states depends on all past states, directly contradicting the fundamental law of probability.

Technically it needs an infinite number of transistors, but you don't actually have to run it until it starts bumping up into physical limits. You can add more transistors, or simply reset the device, whatever you want. If it could run infinitely, the probability of any particular state would be infinitesimal, which is physically equivalent to zero. Doesn't matter, you say? Cloaked singularity? The machine can't know how long it has been computing for; that would be serious nonlocality. It might have reached the current state by traversing every possible state, using infinite transistors, and then dropping most of them. Or you might have just switched it on. It is atom-for-atom the same state, with the same output, bit-for-bit.

If its probability is meaningless at some future time, it must be meaningless right now.

Free will in a can.

Four notes.

All bits of my machine are fully deterministic, at least in the stochastic sense. If you watched the full run, you can calculate the exact probability of this machine being in its current state; at the same time, the machine itself can't possibly have a meaningful probability. Just as the solmb has a well-defined historical distribution right now, even though by definition it doesn't have a probability. It has free will, and you can't ever measure it as having it.

It is not only likely, but practically certain, that such a machine exists in human brains. The components are all there; the only question is whether they're hooked up right, and that will happen by accident if it doesn't happen on purpose.

I should say and not just imply that the machine can be arbitrarily arbitrary; getting arbitrarily small probabilities in finite time is just a matter of using more states in the true-random bit generator.

I humbly suggest probability is quantized like everything else, or at least has a smallest meaningful delta, so if you insist on having actually-zero probabilities, it's just a matter of 1 / [probability quanta] < rand_measure_states^iterations, and therefore time to probability singularity = iterations*iteration_time, at which point the device will 'wake up.'
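To put illustrative numbers on that inequality (the probability quantum and the iteration time are invented values of mine, purely to show the scale):

```python
# Worked arithmetic for 1 / [probability quantum] < states**iterations.
# Both the probability quantum and the iteration time are assumed values.

import math

prob_quantum = 1e-120    # assumed smallest meaningful probability
states = 2               # true-random bits measured per iteration
iteration_time = 1e-6    # assumed seconds per loop

iterations = math.ceil(math.log(1 / prob_quantum, states))
time_to_singularity = iterations * iteration_time

print(iterations, "iterations")        # 399
print(time_to_singularity, "seconds")  # ~0.0004
# Even granting an absurdly small probability quantum, the device hits the
# 'probability singularity' in well under a millisecond.
```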

 

 

On the evidence available to me, I cannot distinguish free will from determinism, and yet I can still show that they're different. My feeling that free will is different from determinism is totally justified and totally meaningless. Does this legitimately break the pattern of correlation between sensations and sensed properties? I don't know; I can't even begin to guess. If I completely accept the compatibilist position, it immediately reduces to determinism. I can try to say, 'wrong question,' to say I'm somehow misunderstanding that these are properties the world can have, but then I have to explain the appearance of an apparently useless yet instinctive construct that, apparently, has strong positive adaptive/selective value. Equivalently, how my brain knows it is believing in free will when it is identical to determinism. Equivalently, how it can tell they should imply different reactions. Concluding that determinism and free will might be the same, or an impossible adjective like up vs. down green, does not in fact address the explanandum, at least not anything like directly.

I can only respond, dear Reality, what the fucking fuck. Could you stop messing with my head? It hurts. Ow.

Comments
gjm:

I decided LessWrong would be more impressive to me with a particular, well-defined change.

So, um, what was that change?


Your description of the "free-will machine" doesn't make much sense to me. What does it mean to "set the probabilities on the various possible quantum outcomes"? What is this "fundamental law of probability" of which you speak? In what possible sense does the output of your machine "not have a probability"? I think what you're getting at may be: it's a machine that looks at all possible computable probability distributions over its outputs, and arranges by some kind of diagonalization procedure to behave in a way that doesn't fit any of them. But then the fact that this thing isn't (according to the laws of physics as currently understood) actually implementable is a fundamental obstacle to what you're trying to do. Actual things you can make in the actual world produce (so far as we know) behaviour with computable probabilities. Impossible things might not; who cares? Finite truncations of your "free-will machine" simply fail to have the probability-defying behaviour you're ascribing to the infinite version.

For any concept, ask first "what problem am I trying to solve with this concept?"

A free choice is one which we hold a person morally accountable for. It's about apportioning blame or credit. That's the use. All the squawk about causality and determinism is entirely derivative, and usually just confused because people are confused about causality.

Does it follow, then, that if I hold people morally accountable for falling to the ground when dropped out of an airplane, and you don't, falling to the ground is both a free choice and not a free choice?

This seems like a bizarre way to talk.

It seems most people would say that you are correct to not hold such people to account, because falling is not a free choice, and I am incorrect to do so, because falling is not a free choice... that is, that you have causality reversed here.

I agree that the squawk about causality and determinism is confused, though.

Note the construction of your hypothetical. "If I hold someone morally accountable in a ludicrous way, I end up feeling something ludicrous has happened". Yes, you do. I think that's an observation in favor of my general approach, that the moral evaluations come first, and the analytic rationalizations later.

It seems most people would say that you are correct to not hold such people to account, because falling is not a free choice

No, because their moral intuitions tell them not to hold people accountable in such a situation. Part of that intuition in algorithmic terms no doubt includes calculations about locus of control, but it's hardly limited to it. Free will is part of the organic whole of moral evaluation.

(People a little more current on their Korzybski could probably identify the principle involved. Non-elementalism?)

Put in another way, free will is a concept that functions in a moral context, and divorcing it from that context and speaking only in terms of power, control, knowledge, and causality will quickly lead to concepts of free will ill suited for the function they play in morality.

This is a general principle for me. The world does not come labelled. There is no astral tether from labels to the infinite concepts they might refer to. You properly define your concepts by properly identifying the problem you're trying to solve and determining what concepts work to solve them.

This applies well to Jaynes. Axiomatic probability theory defines labels, concepts, and relationships, and then says "maybe these will be useful to you". Maybe they will, and maybe they won't. Jaynes started with the problem of building a robot that could learn and represent its degree of belief with a number. He started with a problem, so that the concepts he built were relevant to his problem and you knew how they were relevant.

It can, according to my statistics, be arbitrarily unlikely.

It can do that without free will. It's not likely, but it's also not likely with free will. It's a consequence of the fact that, if you add the probabilities of all choices together, the result is 100%.

It may be hard to model, in which case you might be able to intuitively make predictions that your explicit models cannot. For example, that the probability of 100 successive up spins is more than 2^-100. This is a consequence of the system being complicated. More precisely, it's a consequence of it being complicated to model explicitly, but still somewhat intuitive to humans. This is notably an attribute that applies to humans.

Are there specific patterns you expect it to follow if it has free will? Why?

I've found this post hard to follow. However, I will add that, if you are consistently unable to predict a phenomenon, the probability that reductionism is false does not increase faster than the probability that you are in The Matrix.

[anonymous]:

I have a direct sensation of basic freedom-ness. Most evidence seems to suggest my actions are pre-determined, which feels entirely different to believe.

Your decisions are free to determine your actions. You are free to make decisions based on whatever considerations you decide to employ.

(See free will on the wiki and related posts and pages.)

[This comment is no longer endorsed by its author]

It's not a useless construct. Entities that think they have free will and think that other entities have free will are better able to cooperate in a society where punishment and reward are necessary mechanisms.

...insofar as such entities make punishment and reward contingent on the belief in free will. If they instead punish what they don't want and reward what they do, independent of their belief in free will (e.g., punish people for entering a forbidden location even if they were forced there by forces outside their control, like storm winds, or their own brains, or whatever), that works OK too.

It doesn't need to be contingent deductively or whatever, as long as it changes the way people feel about things for purposes of prosocial enforcement.

Surely not having free will and knowing that other entities don't have free will can lead to more cooperation in society. Knowing that others are programmed to cooperate and that they have no free will to decide otherwise can lead to maximum cooperation, while actually having free will, on the other hand, would bring uncertainty and noise.

A credible belief in free will is an approximation that is more achievable by evolution so far than perfect self- and other-knowledge. Similarly: emotions are noisier and less certain than people just saying exactly what they want, but we still have them and animals also seem to.
