For a very long time, philosophy has presented us with two straw men in combat with one another and we are expected to take sides. Both straw men appear to have been proved true and also proved false. The straw men are Determinism and Free Will. I believe that both, in any useful sense, are false. Let me tell a little story.

Mary's story

Mary is walking down the street, just for a walk, without a firm destination. She comes to a T-junction where she must go left or right; she looks down each street and finds them about the same. She decides to go left. She feels she has, like a free little birdie, exercised her will without constraint. As she crosses the next intersection she is struck by a car and suffers serious injury.

Now she spends much time thinking about how she could have avoided being exactly where she was, when she was. She believes that things have causes, and she tries to figure out where a different decision would have given a different outcome and how she could have known to make the alternative decision. 'If only...' ideas crowd into her thoughts. She believes simultaneously that her actions have causes and that there are valid alternatives to her actions. She is using both deterministic logic and free-will logic; neither alone leads to 'if only...' scenarios – it takes both. If only she had noticed that the next intersection to the right had traffic lights while the one to the left didn't. If only she had not noticed the shoe store on the left. What is more, she is doing this in order to change some aspect of her decision making so that it will be less likely to put her in hospital; again, this is not in keeping with either logic alone. But really, both forms of logic are deeply flawed. What Mary is actually attempting is maintenance on her decision-making processes, so that they can learn whatever is available to be learned from her unfortunate experience.

What is useless about determinism

There is a big difference between being 'in principle' determined and being determined in any useful way. If I accept that all is caused by the laws of physics (and we know these laws – a big if), this does not accomplish much. I still cannot predict events except trivially: in general outline but not in full detail, in simple but not complex situations, only extremely short spans into the future rather than longer term, and so on. To predict anything really sizable, like for instance, how the earth came to be as it is, or even how little-old-me became what I am, or even why I did a particular thing a moment ago, would take more resources and time than can be found in the life of our universe. Being determined does not mean being predictable. It does not help us to know that our decisions are determined because we still have to actually make the decisions. We cannot just predict what the outcomes of our decisions will be; we really, really have to go through the whole process of making them. We cannot even pretend that decisions are determined until after we have finished making them.
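To make 'determined but not predictable' concrete, here is a toy sketch in Python (an illustration only, not part of the original argument): an elementary cellular automaton, Rule 110, is fully deterministic, yet in general the only known way to learn its state after N steps is to compute all N steps.

    RULE = 110  # update rule for an elementary cellular automaton

    def step(cells):
        # Each cell's next value depends on its left neighbour, itself, and
        # its right neighbour; the rule number encodes all 8 cases as bits.
        n = len(cells)
        return [
            (RULE >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
            for i in range(n)
        ]

    cells = [0] * 40
    cells[20] = 1            # start from a single live cell
    for _ in range(20):      # 'predicting' step 20 means computing steps 1..20
        cells = step(cells)
    print("".join(".#"[c] for c in cells))

Nothing here is random, yet there is no known shortcut formula; running the process is the prediction.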

What is useless about free will

The freedom of 'free will' is very different from legal, political, human-rights freedom. To be free from particular, named restraints is something we all understand. But the 'free' in 'free will' is freedom from the cause and effect of the material world. This sort of freedom has to be magical, supernatural, spiritual or the like. That in itself is not a problem for a belief system. It is the idea that something non-material can act on the material world that is problematic. Unless you hold that everything is spiritual or everything is material, you have the problem of interaction: what is the 'lever' that the non-material uses to move the material, or vice versa? It is practically impossible to explain how free will of this sort can affect the brain and body. If you say God does it, you have raised a personal problem to a cosmic one, but the problem remains – how can the non-physical interact with the physical? Free will is of little use in explaining our decision process. We make our decisions rather than having them dictated to us, but it is physical processes in the brain that really do the decision making, not magic. And we want our decisions to be relevant, effective and in contact with the physical world, not ineffective. We actually want a 'lever' on the material world. Decisions taken in some sort of causal vacuum are of no use to us.

The question we want answered

Just because philosophers pose questions and argue various answers does not mean that they are finding answers. Rather, they make clear the logical ramifications of each question and each answer. This is a useful function and not to be undervalued, but it is not a process that gives robust answers. As an example, we have Zeno's paradox of the arrow that can never land because its distance to landing can always be divided in half, set against the knowledge that it does actually land. Philosophers used to argue about how to treat this paradox, but they never solved it. It lost its power when mathematics developed the concept of the sum of an infinite series. When the distance is cut in half, so is the time. When the infinite series of remaining distance reaches zero, so does the series of remaining time. We do not know how to enumerate an infinite series term by term, but we know where it ends and when it ends – on the ground, the moment the arrow hits it. The sum of an infinite series can still be considered somewhat paradoxical, but as an obscure mathematical question. Generally, philosophers are no longer very interested in Zeno's paradox, certainly not in its answer. Philosophy is useful, but not because it supplies consensus answers. Mathematics, science and their cousins, like history, supply answers. Philosophy has set up a dichotomy between free will and determinism and explored each idea to exhaustion, but without any consensus about which is correct. That is not the point of philosophy. Science has to rephrase the problem as: 'how exactly are decisions made?' That is the question we need an answer to – a robust, consensus answer.
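Concretely, if the arrow takes a total time $T$ to land, the halved stages form a geometric series:

\[
\frac{T}{2} + \frac{T}{4} + \frac{T}{8} + \cdots = \sum_{n=1}^{\infty} \frac{T}{2^n} = T
\]

Infinitely many stages, but a finite total: the arrow lands at exactly time $T$.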

But here is the rub

This move to a scientific answer is disturbing to many people because the answer is assumed to affect our notions of morals, responsibility and identity. Civilization as we know it may fall apart. Studying exactly how we make decisions, without reference to determinism or free will, seems harmless enough; but if the answer robs us of morals, responsibility or identity, then it is definitely not OK. Some people have the notion that we should simply pretend to have free will while knowing that our actions are determined. To me this is silly: believe two incompatible and flawed ideas at the same time rather than believe a better, single idea. It reminds me of the solution proposed to deal with Copernicus – use the new calculations while continuing to believe that the earth does not move. Of course, we do not yet have the scientific answer (far from it), although we think we can see its general gist. So we cannot say how it will affect society. I personally feel that it will not affect us negatively, but that is just a personal opinion. Neuroscience will continue to grow, and we will soon have a very good idea of how we actually make decisions, whether this knowledge is welcomed or not. It is time we stopped worrying about determinism and free will and started preparing ourselves to live with ourselves and others in a new framework.

Identity, Responsibility, Morals

We need to start thinking of ourselves as whole beings, one entity from head to toe: brain and body, past and future, from birth to death. Forget the ancient religious idea of a mind imprisoned in a body. We have to stop separating me from my body, me from my brain. Me has to be all my parts together, working together. Me cannot equate to consciousness alone.

Of course I am responsible for absolutely everything I do, including something I do while sleepwalking. Further, a rock that falls from a cliff is responsible for blocking the road. It is what we do about responsibility that differs. We remove the rock, but we do not blame or punish it. We try to help the sleepwalker overcome the dangers of sleepwalking to himself and others. But if I, as a normal person, hit someone in the face, my responsibility is no greater than the rock's or the sleepwalker's, yet my treatment will be much, much different. I am expected to maintain my decision-making apparatus in good working order. The way the legal system works might become a little different from now, but not much. People will still be expected to know and follow the rules of society.

I think of moral questions as those for which there is no good answer. In a moral question, all courses of action and of inaction are bad – often because the possible answers pit the good of the individual against the good of the group, but also because they pit different groups and their interests against each other. No matter what we believe about how decisions are made, we are still forced to make them, and that includes moral ones. The more we know about decisions, the more likely we are to make moral decisions we are proud of (or at least less guilty or ashamed of), but there is no guarantee. There is still a likelihood that we will just muddle along, trying to find the lesser of two evils, with no more success than at present.

Why should we believe that being closer to the truth, or having a more accurate understanding, is going to make things worse rather than better? Shouldn't we welcome having a map that is closer to the territory? It is time to be open to ideas outside the artificial determinism/free-will dichotomy.

Comments (163)

Um.

Some time ago I posted to decision-theory-workshop an idea that may be relevant here. Hopefully it can shed some light on the "solution to free will" generally accepted on LW, which I agree with.

Imagine the following setting for decision theory: a subprogram that wants to "control" the output of a bigger program containing it. So we have a function world() that makes calls to a function agent() (and maybe other logically equivalent copies of it), and agent() can see the source code of everything including itself. We want to write an implementation of agent(), without foreknowledge of what world() looks like, so that it "forces" any world() to return the biggest "possible" answer (scare quotes are intentional).

For example, Newcomb's Problem:

def world():
    box1 = 1000
    box2 = 0 if agent() == 2 else 1000000
    return box2 + (box1 if agent() == 2 else 0)

Then a possible algorithm for agent() may go as follows. Look for machine-checkable mathematical proofs (up to a specified max length) of theorems of the form "agent()==A implies world()==U" for varying values of A and U. Then, after searching for some time, take the biggest found val... (read more)
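A minimal runnable sketch of that search loop, with the hard part stubbed out: provable here is a hypothetical stand-in for bounded machine-checkable proof search, hard-coding the two implications a proof checker could verify for the Newcomb world above.

    def provable(action, utility):
        # Hypothetical stand-in for bounded proof search: these are the two
        # theorems "agent() == A implies world() == U" that hold in the
        # Newcomb world above (one-boxing yields 1000000, two-boxing 1000).
        return (action, utility) in {(1, 1000000), (2, 1000)}

    def agent():
        best_action, best_utility = None, None
        for action in (1, 2):  # 1 = one-box, 2 = two-box
            for utility in (0, 1000, 1000000, 1001000):
                if provable(action, utility) and (best_utility is None or utility > best_utility):
                    best_action, best_utility = action, utility
        return best_action  # returns 1: the provably best action is one-boxing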

0JanetK14y
Is there any way that this applies to me or you making a decision? If it does, can you give an indication of how? Thanks.
0Blueberry14y
This is brilliant. This needs to be a top-level post.
0cousin_it14y
Done. I'm skeptical that it will get many upvotes, though.
0Wei Dai14y
You seem to be either pathologically under-confident (considering that the comment your post was based on was voted up to 9, and people were explicitly asking you to make a top post out of it), or just begging for votes. :)
0cousin_it14y
It's a little bit of both, I guess.
0[anonymous]14y
I'm nervous about reposting stuff from the workshop list as top-level posts on LW. I'm a pretty minor figure there and it might be seen as grabbing credit for a communal achievement. Yeah, this specific formalization is my idea, which builds on Nesov's idea (ambient control), which builds on Wei Dai's idea (UDT), which builds on Eliezer's idea (TDT). If the others aren't reposting for whatever reason, I don't want to go against the implied norm. (The recent post about Löbian cooperation wasn't intended for the workshop, but for some reason the discussion there was way more intelligent than here on LW. So I kinda moved there with my math exercises.)
1jimrandomh14y
It is much more likely that people aren't posting because they haven't thought of it or can't be bothered. I too would like to see top-level posts on this topic. And I wouldn't worry about grabbing credit; as long as you put attributions or links in the expected places, you're fine.
0cousin_it14y
Sorry for deleting my comment. I still have some unarticulated doubts, will think more.
2Vladimir_Nesov14y
For a bit of background regarding priority from my point of view: the whole idea of ADT was "controlling the logical consequences by deciding which premise to make true", which I then saw to also have been the idea behind UDT (maybe implicitly, Wei never commented on that). Later in the summer I shifted towards thinking about general logical theories, instead of specifically equivalence of programs, as in UDT.

However, as of July, there were two outstanding problems. First, it was unclear what kinds of things are possible to prove from a premise that the agent does X, and so how feasible brute force theories of consequences were as a model of this sort of decision algorithm. Your post showed that in a certain situation it is indeed possible to prove enough to make decisions using only this "let's try to prove what follows" principle.

Second, maybe more importantly, it was very much unclear in what way one should state (the axioms of) a possible decision. There were three candidates to my mind: (1) try to state a possible decision in a weaker way, so that the possible decisions that aren't actual don't produce inconsistent theories; (2) try to ground the concept (theory) of a possible decision in the concept of reality, where the agent was built in the first place, which would serve as a specific guideline for fulfilling (1); and (3) try to live with inconsistency. The last option seemed less and less doable, the first option depended on rather arbitrary choices, and the second is frustratingly hairy.

However, in a thread on decision-theory-workshop, your comments prompted me to make the observation that consequences always appear consistent, that one can't prove absurdity from any possible action, even though consequences are actually inconsistent (which you've reposted in the comment above). This raises the chances for option (3), dealing with inconsistency, although it's still unclear what's going on. Thus, your input substantially helped with both problems.
0[anonymous]14y
Thanks, but after my last post I don't think there's enough informed interest here for this kind of stuff. Pretty much everyone who could take the ideas further is already participating in the workshop. Besides, even though this particular formalization may belong to me, UDT is Wei Dai's idea and I leave it up to him to report our progress elsewhere.
-1Peterdjones13y
It is not news that, with ingenuity, (apparent) Alternative Possibilities can be accommodated within determinism. It is even less news that Alternative Possibilities can be accommodated (without the need for ingenuity) within indeterminism. The question is why the determinism-based approach is seen around here as "the" solution, when the evidence for the actual existence of (in)determinism remains unclear.
4cousin_it13y
Indeterminism can accommodate "alternate possibilities", but it cannot accommodate meaningful choice between them. As Eliezer said: Also, starting from "extreme determinism" has been very intellectually fruitful for me. As far as I know, the mathematical part of my comment above (esp. the second to last paragraph) is new - no philosopher had generated it before. If I'm mistaken and your words about it being "not news" have any substance, please give a reference.
-2Peterdjones13y
"Some patches of determinsim" is perfectly compatible with "some patches of indeterminism". We need more-or-less determinism to carry out decisions, but that does not mean it is required to make them. The second part of EY;s comment is too vague. If I am being controlled by "physics" outside my body, I am un-free. I am not unconditionally free just because I am physical.
1cousin_it13y
That sounds inconsistent. What's the relevant difference between the two activities? They look like the same sort of activity to me. Both require making certain things correlate with other things, which is what determinism does. (Carrying out a course of action introduces a correlation between your decision and the outside world; choosing a course of action introduces a correlation between your prior values and your decision.)
-2Peterdjones13y
The difference is that if we tried to carry out decisions indeterministically, we wouldn't get the results we wanted; and if we made decisions deterministically, there would be no real choice. It's a two-stage model.
0cousin_it13y
I don't understand this statement. Isn't it drawing factual conclusions about the universe based on what sort of choice some philosophers wish to have? Or do you trust the subjective feeling that you have "real choice" without examining it? Both options seem unsatisfactory...
-2Peterdjones13y
Determinism does not enforce rationality. There are more choices than choices about what to believe. Since naive realism is false, we need to freely and creatively generate hypotheses before testing them.
1cousin_it13y
The part of your mind that generates hypotheses is no less deterministic than the part that tests them. (It's not as if they used different types of neurons!) The only difference is that you don't have conscious access to the process that generates hypotheses, so it looks mysterious and you complete the pattern that mysterious=indeterministic. But even though you can't introspect that part of yourself, you can still influence what options it will offer you, e.g. by priming.
-1Peterdjones13y
Maybe the two stages are in a time domain, not a space domain. The "it only seems indeterministic" story is one of a number of stories. It is not a fact. My central point is that to arrive at The Answer, all alternatives have to be considered.
3cousin_it13y
I was mostly trying to argue against the point that human minds need indeterminism to work as they do. Do you now agree that's wrong?
-2Peterdjones13y
It's not wrong, and it's not intended as a mirror-image of the LW official dogma. It's a suggestion. I cannot possibly say it is The Answer, since, for one thing, I don't know if indeterminism is actually the case. So my central point remains: the solution space remains unexplored, and what I put forward is an example of a neglected possibility.
2AlephNeil13y
This is equally far from being news: If physics randomly decides whether an agent in state S at time t will evolve into state A or state B at time t+dt, then the cause of "A rather than B" cannot be the agent's preferences and values, or else these would already have been different at time t. The agent could not be held morally accountable for "A rather than B" (assuming S were known to the judge). Indeterminism being present in the 'cogs and gears' of the agent is more like an erosion of personal autonomy than a foundation for it. If the 'problem of free will' has a solution (resp. dissolution) at all, then it can be solved (resp. dissolved) under the assumption of physical determinism.
-2Peterdjones13y
"Physics chooses" is vague. An agents physical state will evolve under the laws of physics whether they are deterministic or not. If an agents state never contained the slightest inkling of committing murder, for instance, then they will not choose to do that --deterministically or not. A choice, random or not, can only be made from the options available, and will depend on their values or preferences. That FW can be dissolved under determinism does not mean it should be disolved under determinism or disolved at all. A case has to be made for dissolution over solution.
0AlephNeil13y
It means nothing other than "a Laplacean superbeing, given complete knowledge of the prior state and of the laws of physics, would calculate that at time t+dt, the state of the system will either be A with probability p or B with probability 1-p". (You can see why I tried not to write all of that out! Although this may have been unwise given that you've now made me do just that.) Complete knowledge of the prior state includes complete knowledge of the agent. Hence, there is no property of the agent which explains why A rather than B happens. The Laplacean superbeing has already taken all of the agent's reasons for preferring A (or B) into account in computing its probabilities, so given that those were the probabilities, whatever ultimately happens has nothing to do with the agent's reasons. You should read chapter VII of Nagel's book The View From Nowhere. He explains very clearly how the problem of free will arises from the tension between the 'internal, subjective' and 'external, objective' views of a decision. From the 'external, objective' view, freedom in the sense you want inevitably disappears regardless of whether physics is deterministic.
-2Peterdjones13y
The explanation about the Laplacean superbeing does not take into account the fact that the very varied pre-existing states of people's minds/brains have a major influence on their choices. Physics cannot make them choose something they never had in mind. Their choices evolve out of their dispositions under both determinism and indeterminism. If the choice between A and B is indeterministic, it is indeterministic, but the particular values of A and B come from the particular agent. Whatever happens has a huge amount to do with those reasons, since your personified "physics" cannot implant brand new reasons ex nihilo. I am quite capable of arguing my case against Nagel or anybody else.
0AlephNeil13y
Imagine a 'coarse-grained' view of the agent, where we don't ask what's inside the agent's head. Then the agent has a huge spectrum of possible actions - our uncertainty about the action taken is massive. Finding out what's inside the agent's head resolves either 'most' or 'all' of the uncertainty, according as physics is indeterministic or deterministic respectively. If physics is indeterministic then some uncertainty remains, and the resolution of this uncertainty cannot be explained by reference to the agent's preferences, and cannot serve as a meaningful basis for freedom. The point is: that extra bit of uncertainty on the end, which you only get with indeterministic physics, doesn't give any extra scope whatsoever for 'free will' or 'moral responsibility'. I heartily agree with you that I can't figure out why you're making disagreement noises while putting forward the same exact view as mine!
-2Peterdjones13y
Some irresoluble uncertainty about what an agent will do is the only meaningful basis for freedom. (Other solutions are in fact dissolutions.) The point is how an agent can have that freedom without complete disconnection of their actions from their character, values, etc. The answer is to pay attention to quantifiers. Some indeterminism does not mean complete indeterminism, and so does not mean complete disconnection.
0AlephNeil13y
Sorry but I think that's confused, for reasons I've already explained. Honestly, you'd enjoy reading Nagel. If it helps, he's an anti-reductionist just like you, who doesn't think in terms of 'dissolving' philosophical problems.
-2Peterdjones13y
I didn't say I was anti reductionist. I find this us-and-them stuff rather annoying.
0AlephNeil13y
OK. Replace the word "who" with "in that he" in my previous comment.
-2Peterdjones13y
I don't mind dissolving problems if all else fails. But you cannot reduce everything to nothing.

Please try to write posts that show an awareness of the existing literature on the subject from within Less Wrong.

0kpreid14y
Yes; I think this post has explanations where it ought to have hyperlinks.

I generally like this post, and am unsure why it was voted down. However, I think that you need to separate "not useful" from "not true"- while it may or may not be true that neither is particularly useful in real life, under the definitions accepted by LW, both are almost certainly true.

1JanetK14y
I meant to add - thanks for the advice to separate 'not useful' from 'not true'.
1JanetK14y
Do LW people generally think freewill is true? I had thought that we were generally materialist and didn't believe in magic mind stuff. Am I wrong?
4Randaly14y
I believe that, as far as there is a consensus, it's that compatibilism is correct. Free will is defined as "your ability to make free choices unconstrained by external agencies." "You" has traditionally been defined as a supernatural "soul;" when it was demonstrated that "you" couldn't have any effect on the world, and probably didn't exist, many people concluded that "you" therefore had no ability to make choices, forgetting that soul-"you" didn't actually exist. Compatibilists take a different path, by redefining you as a physical object, in which case free will becomes true.
1JanetK14y
If you have defined 'freewill' as being ordinary everyday freedom to make choices without constraint, then it is not the philosopher's straw man that I was talking about in the post. It does not imply dualism. This then becomes a semantic rather than a philosophical difference. I want to get rid of the word and you want to redefine it so that it is useful. But you don't need the word. You could just say 'I was free to make a choice.' Most people would think you meant 'free from external constraint'. I believe I said in the post that I was not talking about ordinary freedom from constraint but freedom from the causality of the material world. That was the definition I was using for freewill. If there are people (you may or may not be one) who cling to the word 'freewill' and redefine it so that they can cling to it, there cannot be too many, because the replies to this post are the first time I have encountered this new definition with any frequency. Of course, I may not have noticed that someone was using the word in a different way from the usual meaning.

This is like the redefinition of God to be something like 'the whole universe' or 'the original cause' in order not to have to admit that one doesn't actually believe in God. I suppose that many of the people who say they believe in God would not prompt me to find out how vague their concept was.
3Oligopsony14y
I don't know how common the "free will is freedom from external constraint" view - it's called compatibilism* - is among the general population. It is, however, the dominant view among professional philosophers. If you've never so much as heard of compatibilism, I have to question why you wrote an article on the subject of free will. It would be like writing on meta-ethics and pleading ignorance of non-cognitivism or error theory. In the future, consider at least reading the relevant SEP entry! *Technically, many compatibilists believe that there are conditions other than freedom from external constraint that are necessary for free will. Definitionally none of them would say that indeterminism is one of them, though.
0JanetK14y
I am confused by the depth of feeling against my fairly mild posting which I thought many LWers would value. One of the first postings that I read on LW was How an Algorithm Feels from Inside and another was Wrong Questions. I was so impressed that I began reading the blog regularly. What I noticed was that many of the contributors seemed to have a very different idea of what thought was than I had or that I felt those two great postings had. In particular I had trouble with two recurring areas: what is consciousness? and how are decisions made?. I have attempted a post on both. The reception has been equally hostile to both. It appears that I misjudged the group and that there is very little interest in a more scientific approach to these questions. Consider the post 'dead in the water'.
2WrongBot14y
The big problem with your post is that it spends most of its words discussing free will and metaethics without making reference to the substantial material on those topics already posted and discussed on this site. As others have pointed out, not discussing compatibilism has weakened the post as well. Ultimately, if you were trying to answer the question of how decisions are made, you should have done so. Too much of the post covered material that wasn't directly related to what you wanted to get at, and this would have been a problem even without the points mentioned above. On a related note, you didn't include any links in your post. Linking to a definition, discussion or explanation of a concept you're using as a foundation is much better than reinventing the wheel. All that said, please reconsider abandoning posting on LW. Your comments are frequently worth reading, and your reasoning (if not yet your writing) is usually pretty solid. I'm probably not the best person to make the offer, but I'd be happy to comment on drafts of future posts if you felt that might be useful.
1JanetK14y
Thank you, and if I ever do post, I will take you up on your offer.
0Randaly14y
I would be happy to comment as well. (Though I'm almost certainly a far worse choice.)
1JanetK14y
Thank you, and if I ever do post, I will take you up on your offer.
0NancyLebovitz14y
I voted this up before reading it carefully. As is usual, admission of having made a mistake should get an upvote-- if I'd read to the end first, I'd have seen the undefined claim that you're using a more scientific approach. Unfortunately, I don't seem to be able to cancel my upvote, but knocking the comment down to -1 seems too harsh.
3JanetK14y
The post in question was a plea to look at and follow the neuroscience of decision making. That was the point. Don't worry about the straw men - just follow the science. I am actually not that interested in freewill and want to get past that to something interesting. When I carefully define how I am using a word (like freewill, or like consciousness in the last post) I don't expect to be told that I cannot use the word that way. I was taken aback by the reaction, that is all. Here are a bunch of reasonable, rational, intelligent people that I should be able to converse with, and they appear to avoid being sensible about neuroscience. Too bad - I can still gain from following the discussions, but I cannot give anything to the group except the odd comment, now and then. Don't worry about the upvote - I can avoid ever using it.
3Richard_Kennaway14y
Clicking the "Vote up" link again should remove the vote.
0thomblake14y
The problem here is that you're using "free will" in a weird way. While lots of people who haven't thought about the question think libertarian free will makes sense, and lots of religious philosophers think libertarian free will makes sense, it's definitely not a prevailing view amongst non-religious people who've thought about free will to any great extent. You're ignoring the philosophical literature (about two thousand years worth, in fact), the various posts made on Less Wrong about the subject, and the general consensus of professional philosophers (at least non-religious ones) (who may or may not be a relevant reference class). Two straw men indeed. It's as though you've made a post arguing that "Calcium" doesn't exist since obviously it refers to its linguistic roots in alchemy, and scientists should get right on finding out what Calcium really is, and you don't know why anyone thinks that's a silly suggestion. I don't think anyone here thinks the neuroscience of decision-making is not a fruitful path of research, but this post did nothing of the sort. If you have interesting results to share from your work in that field, please do so - I'm sure there are several other readers who work in the same sort of field who would like to compare notes.
3XiXiDu14y
I haven't read it yet but "this impossible question is fully and completely dissolved on Less Wrong". I do believe that free will is true, or rather a useful terminology, given my own definition. 'Free will' is often defined as want free FROM cause. But why shouldn't 'free will' be defined as want free TO cause? Any measure of 'free will' must be based on the effectiveness and feasibility of conscious volition opposed to the strength of the environmental influence. We have to fathom the extent of active adaptation of the environment by a system opposed to passive adaptation of a system by the environment. The specific effectiveness and order of transformation by which the self-defined system (you) shapes the outside environment, in which it is embedded, must trump the environmental influence on the defined system. What is essential is that the system has to be goal-oriented and have the ability to differentiate itself within the environment in which it is embedded.

What I mean is very simple. If I could get what I want, I have had free will. In retrospect, the degree of freedom of want is measured by the extent to which I had to adapt my will to environmental circumstances opposed to changing the environment to suit my goals. And basically this is what I mean by 'free will'. To extend this notion of free will you can 'measure' the extent to which one changed his will deliberately, that is consciously, i.e. from within (nonlinear). By nonlinear here I mean a system whose output is not proportional to its input. This is opposed to the 'persuasion' of a child by an adult or the contrary affection of one's will by unwanted, non-self-regulated influence of any kind. (Edit Note: I'm not the usual highly educated LW reader. This might be a lot of garbage indeed. Ask me about it in a few years again.)
1JanetK14y
I am sorry. I honestly find it very hard to understand what you are trying to say and more importantly why. Honestly, my fault but I don't get it.
0XiXiDu14y
I'm saying that to talk about free will we first have to define what we mean by 'free will'. Further I give a definition of what I mean, how I define the term 'free will'. I define 'free will' as something universal that gradually exists on different levels. I define 'free will' as a measure of goal realization. That is, free will of a child < adult < superhuman artificial intelligence. Except if you are jailed, you might have less free will than a kid. I believe that our feeling of being free agents represents the extrapolated and retrospective perception of goal realization and not what is talked about in metaphysics, that our intentions are free from cause. It's rather that our ability to cause, to realize our intentions can be and is gradually perceived to be free.
2wedrifid14y
For my part I think any philosopher (or teacher of philosophy) that trains themselves or their students into considering the truthfulness of freewill deserves a spanking. I'm not sure what the official name for that position is.
2Jayson_Virissimo14y
Metaphysicians call that view "libertarianism" (what a confusing name huh?). Basically, libertarianism, is the view that free will and determinism are incompatible, but we have free will, so materialism is false.
3Oligopsony14y
Not all libertarians reject materialism - there is the view (not mine; I'm a compatibilist) that indeterminism in physical laws is sufficient for libertarian free will.
0Jayson_Virissimo14y
You are right. My last sentence should have read: Basically, libertarianism, is the view that free will and determinism are incompatible, but we have free will, so determinism is false.
2wedrifid14y
(I don't think we are talking about the same thing. My view is, approximately, "contemptuous compatibilism".)
0[anonymous]14y
Why do people always fall back to philosophy when talking about free will? It doesn't need to be a metaphysical concept. It is pretty much a human trait, an attribute of human psychology. We all know we have free will, period. — Steven Landsburg paraphrasing Robert Nozick in The Big Questions. This is not a bias, it's part of our subjective definition of being agents that are able to change their environment as it suits them. Taking an outside view, I absolutely agree: there is no free will, no reasonable definition will fit those two words in succession. But from an inside view, it makes sense to talk about being free to choose. Anyone who's not sure what I mean, I recommend reading this post:
0wedrifid14y
Are you being serious or sarcastic here? I'm confused.
0[anonymous]14y
Now that was an unsettling reply. I like to believe in a timeless universe. So I guess I'm not completely serious. But all this is quickly leaving the intention of this community. There's probably not much practical value to be found in such musings and beliefs besides a poetic appeal and the fun of thinking and dreaming about nonfactual possibilities. I really have to think more, and especially not publicly claim something when I'm too tired. I might consider a tattoo on the back of my hands: Think first! What I rather wanted to say is, it makes sense to talk about being able to realize your goals. Choice doesn't exist; I contradicted myself there. I should quit now and for some time stop participating on LW. I have to continue with my studies. I was only drawn here by the deletion incident. Replies, and the fact that it is fun to argue, have made me babble too much in the past few days. Back to being a lurker. Thanks.
1wedrifid14y
It certainly does.
2thomblake14y
What XiXiDu said - "free will" is assigned as a problem for aspiring rationalists to solve, and I really don't see a problem with trying to do so in a top-level post, so I voted up this post in hopes of seeing it out of the negatives. I think the general view around here is vaguely compatibilist, but only in the sense of realizing that the free will question is asking the wrong question, and I'd rather not give away more than that if you haven't worked it out.
1JanetK14y
I agree that the question is the wrong question. And I assume that my saying forget the straw men of determinism and freewill and get on with the real question of how we actually make decisions was fairly clear. My emotional reaction to compatibilism is that it is a bit of a cop-out. It attacks neither determinism nor freewill, and it does not ask the science-based question, which is the one that can in future be answered. But I certainly think it is an improvement over the old, old argument.

I would like to see reference to "dissolving the question".

1JanetK14y
Thanks

To predict anything really sizable, like for instance, how the earth came to be as it is, or even how little-old-me became what I am, or even why I did a particular thing a moment ago, would take more resources and time than can be found in the life of our universe.

Surely if the universe is deterministic, "the resources and time [that] can be found in the life of the universe" provides an upper bound on what you need to predict anything. Since, after all, the universe is 'predicting' it just fine without exceeding those requirements.

1JanetK14y
Yes, you're right. I should have said 'close to the life of the universe'.

There are plenty of posts to be written on this topic, but this one needs a bit more work. Referencing the sequences, especially the Reductionism and Mind Projection Fallacy sections, and stating what parts you disagree with or are expanding upon would be a good idea.

More specific problems with the post:

  • I don't think "a robust consensus answer" is what I'm hoping for - I'd rather have actual truth (though I appreciate when truth and consensus converge, it can take a while).

  • I like the rock analogy a lot, but you don't go far enough. We remove the ro

... (read more)
1JanetK14y
I read Reductionism and Mind Projection Fallacy some time ago. I liked them, and I don't think that what I am saying here disagrees with or expands on those pieces. I will read them again to see if I now feel differently. Perhaps I need to make it clearer, but the section on responsibility, morality and identity is not meant to say much about these issues other than that there is little reason to think our society is going to be damaged by what science may say in the future about decision making.

The illusion of free will is an artifact of the mind's incomplete knowledge of the brain. It is not practical for an organism to evolve a brain that is aware of its own functioning at the physical level of decision making. An accurate simulation of a mind need not rely on a brain at all. We hope.

While we can say that actions are a result of purely physical processes, it is necessary to create abstract models of other people's projected actions in order to influence them. In recent years we have developed electromagnetic methods of o... (read more)

1JanetK14y
I assume that neurology and psychology will converge or one of them will be forgotten by the way side. I don't see how bodies and brains can be temporary vehicles for consciousness. If you mean that the state of our brains can be transferred to a machine in some sort of readout and that machine is capable of consciousness - then that machine becomes our brain/body replacement. A disembodied consciousness is something I cannot imagine.
2wedrifid14y
Could psychology not be found to be still useful when considering human behaviors at a different level of abstraction (or, indeed, with different forms of experimentation)?
1JanetK14y
Yes, of course, psychology and neurology could exist together using different levels of abstraction and different methods - like physics and chemistry. But if they disagree on fundamentals and cannot converge then I don't think they can stay that way for long.
0KrisC14y
Agreed concerning the need for a processing platform. Not so sure about the convergence of psych and neuro, for the same reason. If the same psych rules can apply to a consciousness regardless of platform, then neurology is not applicable in that case.

@ orthonormal

You said: "I agree. But I think that there is actually some feature of the (deterministic) act of choosing which leads a person to falsely believe that their choice is nondeterministic, and that by analyzing this we learn something interesting and important about cognition."

Very true. So what do you make of reconciling the two? Do we castigate them both in hopes of finding something out that is hiding in the shadows? The nexus of the matter is 'belief', and in order to have a sound belief one should know as many facts about the subject... (read more)

4ata14y
Please post your comments as replies (click "Reply" on the comment you're responding to) instead of posting them as top-level comments to the post.
0orthonormal14y
Hey, I actually meant for the conversation to move to the post you were quoting earlier. Here's my reply to you.

@Thomblake sorry about the message thing. I'm still getting used to how this site works.

You substantiate analogies with proof. Basically, I'm saying that your analogies don't hold water; perhaps I'm using confusing vernacular.

Let me say one thing before moving on. I hate debating just to debate; for me, when I involve myself in a debate it is to gain more insight. So I am totally open to your point of view if it sheds some light on this subject. The bottom line is, if someone has a solid angle that I'm missing, then I welcome it.

OK, that being said, it sounds ... (read more)

2ata14y
Under your definition of free will, then what observations, if true, would be evidence for its existence? That is, what would free will (as you understand it) actually imply about empirical reality, and what would its absence imply?
0sereboi14y
Any tangible research showing that an agent can, with little effort, manipulate and control their subconscious mind. Its presence would imply a host of things, starting with complete agent responsibility in all areas of life. Its absence would imply not only severed liability but also complete meaninglessness. Most branches of existential philosophy solve meaninglessness by stating that one has control over one's choices and thereby creates meaning; if one is stripped of that control, then meaninglessness truly abounds. Of course, that is unless one believes in God.
2wedrifid14y
It's that "little effort" part that makes this an entirely different question. I don't use the term myself but "Free Will" is not always used to imply that things are easy.
0sereboi14y
The reason I said "little effort" is to clarify that one could possibly, with much concentration, have an effect on the subconscious. However, the kind of effect I'm concerned with is the act of everyday choices that happen in nanoseconds. I would welcome some data on "much effort" effects as well.
2wedrifid14y
I understand what you are trying to do, and suspect I even approximately agree with you regarding predictions about just how relevant our conscious thought is to our decision making. I just note that this is a different question to the one you were arguing against. People sign themselves up for rehab. Occasionally it works.

OK, I finally get the etiquette thing of this system. :)

Sorry, I am a straight shooter. I will work on my wording; however, I still stand by my claims of conjecture vs. facts.

I think of moral questions as those for which there is no good answer. All courses of action and of inaction are bad in a moral question. Often because the possible answers pit the good of the individual against the good of the group, but also pit different groups and their interests against each other.

What do groups have to do with anything? They don't make decisions, people do. If a particular individual is a consequentialist, they should take whichever action is expected to produce the most utility. The truth or usefulness of determinism and free will might influence how we think about assigning moral blame or praise, but they don't tell us what we should do.

There are two false assumptions in the above: 1) that the universe runs on physical laws does not mean it necessarily runs on deterministic laws.

2) Following from that, since laws are not necessarily deterministic, libertarian free will does not necessarily involve overriding them. Libertarian free will could be found within an indeterministic (but otherwise thoroughly physical and material) universe.

4CuSithBell13y
My understanding is that the standard dilemma for libertarian free will is that your decisions seem to have to ground out in randomness or determinism, so I don't think indeterministic laws save the concept.
-2Peterdjones13y
That is the standard objection and I (unusually) think it can be resisted. To say the least, if you are going to claim to have "the" answer, you have to thoroughly consider all the alternatives.
2CuSithBell13y
I'd think that, given that's the standard objection, and it includes the case of indeterminism, you'd want to say more than just that indeterminism saves libertarian free will. More to the point - would you mind giving a definition of what it is that you mean by 'libertarian free will'? I've never heard it coherently stated.
-3Peterdjones13y
"Free Will is defined as "the power or ability to rationally choose and consciously perform actions, at least some of which are not brought about necessarily and inevitably by external circumstances".
2CuSithBell13y
Oh. Well, that's fine then. I usually think of libertarian free will as including internal circumstances as well.
1Manfred13y
The Copernican principle, "humans are not the center of the universe," does contradict 2, though, if you agree that ordinary randomness, e.g. measuring an electron, does not have free will. And the Copernican principle is just a restatement of Occam's razor when the competing explanations are "there is a universal physical law" and "there is a law that specifically targets humans."
-2Peterdjones13y
I do not see what you mean by the Copernican Principle. Perhaps you imagine that someone has said only humans have FW. I have not. A naturalistic libertarian can concede that indeterministic electrons don't have free will, just as a compatibilist can concede that deterministic electrons don't have FW. Neither thinks (in)determinism is a sufficient condition of FW.
2Manfred13y
True, but I am saying that if randomness is not enough to have free will (does a nondeterministic Chinese room have free will?), then you would either need to replicate a compatibilist argument for how humans have free will, or have some extra laws that specify high-level concepts like free will (a.k.a. "magic").
-2Peterdjones13y
No. I need an incompatibilist argument. I need randomness plus something to be necessary for FW, and I need the something extra to be naturalistic. And I have them, too. A non deterministic CR, or other AI, could have FW, if programmed correctly. That's a consequence of naturalism.
0Manfred13y
Huh, I accidentally posted this. I thought I'd deleted it as true but irrelevant.
0Manfred13y
Ah, yeah, I was wrong.

We cannot just predict what the outcomes of our decisions will be; we really, really have to go through the whole process of making them. We cannot even pretend that decisions are determined until after we have finished making them.

And what about an AI that can predict its own decisions (because it knows its source code)?

Also, are you a compatibilist?

2JanetK14y
I believe that a compatibilist can accept both freewill and determinism at the same time. I reject them both as not useful to understanding decisions. I think there is a difference between believing both A and B and believing neither A nor B. It seems to me unlikely that an AI could predict its own decisions by examining its source code but not running the code. But I am not sure it is completely impossible just because I cannot see how it would be done. If it were possible, I would be extremely surprised if it was faster or easier than just running the code.
1Unknowns14y
As I've stated before, no AI can predict its own decisions in that sense (i.e. in detail, before it has made them.) Knowing its source code doesn't help; it has to run the code in order to know what result it gets.
6wedrifid14y
I suggest that it can but it is totally pointless for it to do so. Things can be proved from source code without running it. This applies to any source code, including that of oneself. Again, it doesn't seem a particularly useful thing to do in most cases.
6Emile14y
I'm wondering why this got downvoted - it's true! For example, if the top-level decision function of an AI is:

def DecideWhatToDo(self, environment):
    if environment.IsUnderWater():
        return actions.SELF_DESTRUCT
    else:
        return self.EmergentComplexStochasticDecisionFunction(environment)

... and the AI doesn't self-modify, then it can predict that it will decide to self-destruct if it falls in the water, only by analysing the code, without running it (also assuming, of course, that it is good enough at code analysis). Of course, you can imagine AIs that can't predict any of their decisions, and as wedrifid says, in most non-trivial cases, most probably wouldn't be able to. (This may be important, because having provable decisions in certain situations could be key to cooperation in prisoner's-dilemma-type situations.)
0Unknowns14y
Of course that is predictable, but that code wouldn't exist in any intelligent program, or at least it isn't an intelligent action; predicting it is like predicting that I'll die if my brain is crushed.
5JoshuaZ14y
Unknowns, we've been over this issue before. You don't need to engage in perfect prediction in order to be able to usefully predict. Moreover, even if you can't predict everything you can still examine and improve specific modules. For example, if an AI has a module for factoring integers using a naive, brute-force factoring algorithm, it could examine that and decide to replace it with a quicker, more efficient module for factoring (that maybe used the number field sieve for example). It can do that even though it can't predict the precise behavior of the module without running it.
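A toy sketch of that module swap (hypothetical routines, for illustration only): the agent replaces a slow factoring module with a faster one wholesale, without simulating either module's behaviour on any particular input.

    def naive_factor(n):
        # Brute-force trial division: try every candidate factor.
        f = 2
        while f * f <= n:
            if n % f == 0:
                return f
            f += 1
        return n  # n is prime

    def improved_factor(n):
        # Stand-in for a smarter method; here, trial division that
        # skips even candidates after 2.
        if n % 2 == 0:
            return 2
        f = 3
        while f * f <= n:
            if n % f == 0:
                return f
            f += 2
        return n

    factor = naive_factor     # the module currently in use
    factor = improved_factor  # the 'self-modification': swap, don't simulate
    print(factor(91))         # 7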
0Unknowns14y
I certainly agree that an AI can predict some aspects of its behavior.
3Emile14y
That's also because this is a simplified example, merely intended to provide a counter-example to your original assertion. Agreed, it isn't an intelligent action, but if you start saying intelligent agents can only take intelligent decisions, then you're playing No True Scotsman. I can imagine plenty of situations where someone might want to design an agent that takes certain unintelligent decisions in certain circumstances, or an agent that self-modifies in that way. If an agent can not only make promises, but also formally prove by showing its own source code that those promises are binding and that it can't change them - then it may be at an advantage for negotiations and cooperation over an agent that can't do that. So "stupid" decisions that can be predicted by reading one's own source code aren't a feature I consider unlikely in the design-space of AIs.
-2Unknowns14y
I would agree with that. But I would just say that the AI would experience doing those things (for example keeping such promises) as we experience reflex actions, not as decisions.
0thomblake14y
Why not? In what way is it like that, and how is that relevant to the question?
0Unknowns14y
It's like that precisely because it is easily predictable; as I said in another reply, an AI will experience its decisions as indeterminate, so anything it knows in advance in such a determinate way, will not be understood as a decision, just as I don't decide to die if my brain is crushed, but I know that will happen. In the same way the AI will merely know that it will self-destruct if it is placed under water.
0thomblake14y
From this, it seems like your argument for why this will not appear in its decision algorithm, is simply that you have a specific definition for "decision" that requires the AI to "understand it as a decision". I don't know why the AI has to experience its decisions as indeterminate (indeed, that seems like a flawed design if its decisions are actually determined!). Rather, any code that leads from inputs to a decision should be called part of the AI's 'decision algorithm' regardless of how it 'feels'. I don't have a problem with an AI 'merely knowing' that it will make a certain decision. (and be careful - 'merely' is an imprecise weasel word)
0Unknowns14y
It isn't a flawed design because when you start running the program, it has to analyze the results of different possible actions. Yes, it is determined objectively, but it has to consider several options as possible actions nonetheless.
1thomblake14y
This is false for some algorithms, and so I imagine it would be false for the entirety of the AI's source code. For example (ANSI C):

int i;
for (i = 0; i < 5; i++)
    ;

I know that i is equal to 5 after this code is executed, and I know that without executing the code in any sense.
-1MatthewB14y
Now, I am not certain about this, but we have to examine that code before we know its outcome. While this isn't "running" the code in the traditional sense of computation as we are familiar with it today, it does seem that the code is sort of run by our brains as a simulation as we scan it. A sort of meta-process, if you will... I could be so wrong about that though... eh... Also, that code is useless really, except maybe as a wait function... It doesn't really do anything (not sure why Unknowns gets voted up in the first post above, and down below)... Also, leaping from some code to the entirety of an AI's source code seems to be a rather large leap.
0thomblake14y
"some code" is part of "the entirety of an AI's source code" - if it doesn't need to execute some part of the code, then it doesn't need to execute the entirety of the code.
-2Unknowns14y
That isn't an algorithm for making decisions.
3wedrifid14y
No, but note the text. It is, incidentally, trivial to alter the code to an algorithm for making decisions, and also simple to make it an algorithm that can predict its decision before making it.

do_self_analysis();
unsigned long i;
unsigned long j;
for (i = 0; i < ULONG_MAX - 1; i++)
    for (j = 0; j < ULONG_MAX - 1; j++)
        ;
if (i > 2)
    return ACTION_DEFECT;
return ACTION_COOPERATE;

The do_self_analysis method (do they call them methods or functions? Too long since I've used C) can browse the entire source code of the AI, determine that the above piece of code is the algorithm for making the relevant decision, prove that do_self_analysis doesn't change anything or perform any output and does return in finite time, and then go on to predict that the AI will behave like a really inefficient defection rock. Quite a while later it will actually make the decision to defect. All rather pointless but the concept is proved.
Unknowns (score -4, 14y):
When the AI runs the code for predicting it's action, it will have the subjective experience of making the decision. Later "it will actually make the decision to defect" only in the sense that the external result will come at that time. If you ask it when it made it's decision, it will point to the time when it analyzed the code.
wedrifid (score 3, 14y):
You are mistaken. I consider the explanations given thus far by myself and others sufficient. (No disrespect intended beyond that implicit in the fact of disagreement itself and I did not vote on the parent.)
Unknowns (score 0, 14y):
The explanations given say nothing about the AI's subjective experience, so they can't be sufficient to refute my claim about that.
wedrifid (score 4, 14y):
Consider my reply to be to the claim: "If you ask the AI when it made its decision, it will either point to the time after the analysis or it will be wrong."

I avoided commenting on the 'subjective experience' side of things because I thought it was embodying a whole different kind of confusion. It assumes that the AI executes some kind of 'subjective experience' reasoning that is similar to that of humans (or some subset thereof). This quirk relies on lacking any strong boundaries between thought processes: people usually can't predict their decisions without making them. For both the general case and the specific case of the code I gave, a correctly implemented module that could be given the label 'subjective experience' would see the difference between prediction and analysis.

I upvoted the parent for the use of it's. I usually force myself to write its in that context but cringe while doing so. The syntax of the English language is annoying.
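One way to picture the 'strong boundaries' claim is to give the routine an explicit mode flag, so that running the deliberation as a self-model query is distinguishable from running it as the live decision. This is only a hedged sketch; the enum labels, function name, and print strings are all invented for illustration:

    #include <stdio.h>

    enum mode { PREDICTING, DECIDING };  /* invented labels for illustration */

    /* The same routine computes the same answer either way; the flag only
       records which process is asking, so a prediction pass need not be
       mistaken for the act of deciding. */
    static int choose(enum mode m)
    {
        int result = 1;  /* stand-in for whatever the deliberation computes */
        if (m == PREDICTING)
            printf("self-model query: would choose %d\n", result);
        else
            printf("live decision: choosing %d\n", result);
        return result;
    }

    int main(void)
    {
        choose(PREDICTING);  /* analysis: no action follows */
        choose(DECIDING);    /* the actual decision */
        return 0;
    }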
thomblake (score 2, 14y):
Really? Do you also cringe when using theirs, yours, ours, mine, and thine?
wedrifid (score 2, 14y):
Mine and thine? They don't belong in the category. The flaw isn't that all words about possession should have an apostrophe. The awkwardness is that the pattern of adding the "s" to the end to indicate ownership is the same from "Fred's" to "its" but arbitrarily not punctuated in the same way. The (somewhat obsolete) "ine" is a distinct mechanism of creating a possessive pronoun which, while adding complexity, at least doesn't add inconsistency.

As for "theirs, yours and ours", they prompt cringes in decreasing order of strength (in fact, it may not be a coincidence that you asked in that order). Prepend "hers" to the list and append "his". "Hers" and "theirs" feel more cringe-worthy, as best as I can judge, because they are closer in usage to "Fred's", while "ours" is at least a step or two away. "His" is a special case in as much as it is a whole different word. It isn't a different mechanism like "thine" or "thy", but it isn't "hes" either. I have never accidentally typed "hi's".
thomblake (score 0, 14y):
You're just reading the wrong pattern. There are simple, consistent rules:

1. When making a noun possessive, use the appropriate possessive form with an apostrophe (EDIT: this originally read "add 's").
2. When making a pronoun possessive, use the appropriate possessive pronoun (none of which have an apostrophe).

EDIT: Leaving out "Jesus'" for the moment...
wedrifid (score 2, 14y):
No, I'm not reading the wrong pattern. I'm criticising the pattern in terms of the objective and emotional-subjective criteria that I use for evaluating elements of languages and communication patterns in general. I am aware of the rules in question and more than capable of implementing them, along with the hundreds of other rules that go into making our language.

The undesirable aspect of this part of the language is this: it is not even remotely coincidental that we add the "ss" sound to the end of a noun to make it possessive and that most modern possessive pronouns are just the pronoun with a "ss" sound at the end. Nevertheless, the rule is "use the appropriate possessive pronoun"... that's a bleeding lookup table! A lookup table for something that is nearly always an algorithmic modification is not something I like in a language design. More importantly, when it comes to the spoken word, the rule for making *nouns possessive is "almost always add 'ss'". 'Always' is better than 'almost always' (but too much to ask). Given 'almost always', the same kind of rule for converting them all to written form would be far superior.

According to my subjectively-objective criteria, this feature of English sucks. If nothing else, it would be fair to say that my 'subjective' is at least not entirely arbitrary, whether or not you share the same values with respect to language.
thomblake (score 1, 14y):
Yes, this is definitely a difference in how we perceive the language. I don't see any inherent problem with a lookup table in the language, given that most of the language is already lookup tables in the same sense (what distinguishes 'couch' from 'chair', for instance). And it would not occur to me to have a single rule for "*nouns" rather than the actual separate rules for nouns and pronouns. Note also that pronouns have possessive adjective and possessive pronoun forms, while nouns do not; they're an entirely different sort of animal. So I would not think to write "It's brand is whichever brand is it's" instead of "its brand is whichever brand is its", any more than I would think to write "me's brand is whichever brand is me's" (or whatever) instead of "my brand is whichever brand is mine".
wedrifid (score 0, 14y):
I suspect the difference extends down to the nature of our thought processes. Let me see... using Myers-Briggs terminology and from just this conversation I'm going to guess ?STJ.
thomblake (score 0, 14y):
I tend to test as INTP/INTJ depending, I think, on whether I've been doing ethics lately. But then, I'm pretty sure it's been shown that inasmuch as that model has any predictive power, it needs to be evaluated in context... so who knows about today.
NancyLebovitz (score 0, 14y):
There's one more rule: if the noun you're making possessive ends with an s (this applies to both singular and plural nouns), just add an apostrophe.
thomblake (score 0, 14y):
That's not exactly true, and I didn't think it had much bearing on my point, since we're talking about pronouns, but I'll amend the parent.
dclayh (score 2, 14y):
Indeed, and while we're on the subject of idiolects: my preference is for the spelling to follow the pronunciation. Hence either "Charles's tie" or "Charles' tie" is correct, depending on how you want it to be pronounced (in this case I usually prefer the latter option, but the meter of the sentence may sometimes make the other a better choice).
Unknowns (score -2, 14y):
"If you ask the AI when it made its decision it will either point to the time after the analysis or it will be wrong." I use "decision" precisely to refer to the experience that we have when we make a decision, and this experience has no mathematical definition. So you may believe yourself right about this, but you don't have (and can't have) any mathematical proof of it. (I corrected this comment so that it says "mathematical proof" instead of proof in general.)
Emile (score 5, 14y):
I think most people on LessWrong are using "decision" in the sense used in Decision Theory. Making a claim, and then, when given counter-arguments, claiming that one was using an exotic definition seems close to logical rudeness to me.
wedrifid (score 5, 14y):
It also does his initial position a disservice. Rereading the original claim with the professed intended meaning changes it from "not quite technically true" to, basically, nonsense (at least inasmuch as it claims to pertain to AIs).
Unknowns (score -4, 14y):
I don't think my definition is either exotic or inconsistent with the sense used in decision theory.
wedrifid (score 5, 14y):
You defined decision as a mathematically undefinable experience and suggested that it cannot be subject to proofs. That isn't even remotely compatible with the sense used in decision theory.
thomblake (score 4, 14y):
If you believe that we can't have any proof of it, then you're wasting our time with arguments.
Unknowns (score -4, 14y):
You might have a proof of it, but not a mathematical proof. Also note that your comment that I would be "wasting our time" implies that you think that you couldn't be wrong.
wedrifid (score 1, 14y):
How many legs does an animal have if I call a tail a leg and believe all animals are quadrupeds?
[anonymous] (score -1, 14y):
How many legs does a dog have if I call a tail a leg?
thomblake (score 1, 14y):
No, but surely some chunks of similarly-transparent code would appear in an algorithm for making decisions. And since I can read that code and know what it outputs without executing it, surely a superintelligence could read more complex code and know what it outputs without executing it. So it is patently false that in principle the AI will not be able to know the output of the algorithm without executing it.
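A human-scale version of the same claim: the output of the loop below can be known by analysis alone, from the closed form n(n-1)/2 for the sum 0 + 1 + ... + (n-1), rather than by execution. The code is written purely for illustration:

    #include <stdio.h>

    int main(void)
    {
        unsigned long n = 1000;
        unsigned long sum = 0;
        unsigned long i;

        for (i = 0; i < n; i++)  /* finding the answer by executing... */
            sum += i;

        /* ...versus knowing it without execution, from the closed form. */
        printf("executed: %lu  predicted: %lu\n", sum, n * (n - 1) / 2);
        return 0;
    }

Both numbers come out to 499500, which is the sense in which a reader of the code "knows what it outputs" without running it.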
Unknowns (score 1, 14y):
Any chunk of transparent code won't be the code for making an intelligent decision. And the decision algorithm as a whole won't be transparent to the same intelligence, but perhaps only to something still more intelligent.
thomblake (score 0, 14y):
Do you have a proof of this statement? If so, I will accept that it is not in principle possible for an AI to predict what its decision algorithm will return without executing it. Of course, logical proof isn't entirely necessary when you're dealing with Bayesians, so I'd also like to see any evidence that you have that favors this statement, even if it doesn't add up to a proof.
Unknowns (score 0, 14y):
It's not possible to prove the statement, because we have no mathematical definition of intelligence.

Eliezer claims that it is possible to create a superintelligent AI which is not conscious. I disagree with this because it is basically saying that zombies are possible. True, he would say that he only believes that human zombies are impossible, not that zombie intelligences in general are impossible. But in that case he has no idea whatsoever what consciousness corresponds to in the physical world, and in fact has no reason not to accept dualism. My position is more consistent: all zombies are impossible, and any intelligent being will be conscious.

So it will also have the subjective experience of making decisions. But it is essential to this experience that you don't know what you're going to do before you do it; when you experience knowing what you're going to do, you experience deciding to do it. Therefore any AI that runs code capable of predicting its decisions will, at that very time, subjectively experience making those decisions. And on the other hand, given that a block of code will not cause it to feel the sensation of deciding, that block of code must be incapable of predicting its decision algorithm.

You may still disagree, but please note that this is entirely consistent with everything you and wedrifid have argued, so his claim that I have been refuted is invalid.
Randaly (score 2, 14y):
As I recall, Eliezer's definition of consciousness is borrowed from GEB: it's when the mind examines itself, essentially. That has very real physical consequences, so the idea of a non-conscious AGI doesn't support the idea of zombies, which require consciousness to have no physical effects.
Unknowns (score 0, 14y):
Any AGI would be able to examine itself, so if that is the definition of consciousness, every intelligence would be conscious. But Eliezer denies the latter, so he also implicitly denies that definition of consciousness.
JoshuaZ (score 0, 14y):
I'm not sure I am parsing correctly what you've written. It may rest with your use of the word "intelligence": how are you defining that term?
Unknowns (score 0, 14y):
You could replace it with "AI." Any AI can examine itself, so any AI will be conscious, if consciousness is or results from examining itself. I agree with this, but Eliezer does not.
LucasSloan (score 0, 14y):
Yes we do: the ability to apply optimization pressure in a wide variety of environments, the platonic ideal of which is AIXI.
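For readers who want the formula behind that name: as best I can reconstruct it (treat this as a sketch from memory, not a citation), Hutter's AIXI agent at cycle k picks the action that maximizes expected total reward under a Solomonoff-style mixture over all computable environments:

    a_k = \arg\max_{a_k} \sum_{o_k r_k} \cdots \max_{a_m} \sum_{o_m r_m}
          \left[ r_k + \cdots + r_m \right]
          \sum_{q \,:\, U(q,\, a_1 \ldots a_m) \,=\, o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}

where U is a universal Turing machine, q ranges over environment programs, \ell(q) is the length of q, m is the horizon, and the o_i r_i are the observation-reward percepts. The inner sum is the prior weight on environments consistent with the interaction history; it is uncomputable, which is why AIXI is an ideal rather than an implementation.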
torekp (score 0, 14y):
Can you please provide a link?
Eliezer Yudkowsky (score 0, 14y):
http://lesswrong.com/lw/x5/nonsentient_optimizers/
torekp (score 0, 14y):
Thank you. I agree with Eliezer for reasons touched on in my comments to simplicio's Consciousness of simulations & uploads thread.
thomblake (score 0, 14y):
I don't have any problem granting that "any intelligent being will be conscious", nor that "it will also have the subjective experience of making decisions", though that might just be because I don't have a formal specification of either of those; we might still be talking past each other there.

"it is essential to this experience that you don't know what you're going to do before you do it"

I don't grant this. Can you elaborate?

"when you experience knowing what you're going to do, you experience deciding to do it"

I'm not sure that's true, or in what sense it's true. I know that if someone offered me a million dollars for my shoes, I would happily sell them my shoes. Coming to that realization didn't feel to me like the subjective feeling of deciding to sell something to someone at the time, as compared to my recollection of past transactions.

"any AI that runs code capable of predicting its decisions will, at that very time, subjectively experience making those decisions"

Okay, that follows from the previous claim. If I were moved to accept your previous claim, I would now be skeptical of the claim that "a block of code will not cause it to feel the sensation of deciding", especially since we've already shown that some blocks of code would be capable of predicting some decision algorithms.

"that block of code must be incapable of predicting its decision algorithm"

This follows, but I draw the inference in the opposite direction, as noted above.
Unknowns (score 0, 14y):
I would distinguish between "choosing" and "deciding". When we say "I have some decisions to make," we also mean that we don't know yet what we're going to do. On the other hand, it is sometimes possible for you to have several options open to you while you already know which one you will "choose". Your example of the shoes and the million dollars is one such case; you could choose not to take the million dollars, but you would not, and you know this in advance.

Given this distinction, if you have a decision to make, then as soon as you know what you will or would do, you will experience making a decision. For example, presumably there is some amount of money ($5? $20? $50? $100? $300?) that could be offered for your shoes such that you are unclear whether you should take the offer. As soon as you know what you would do, you will feel yourself "deciding" that "if I were offered this amount, I would take it." It isn't a decision to do something concrete, but it is still a decision.