[ Question ]

What is the subjective experience of free will for agents?

by G Gordon Worley III · 1 min read · 2nd Apr 2020 · 19 comments


Causality · Consciousness · Decision Theory · Free Will
Crossposted from the AI Alignment Forum. May contain more technical jargon than usual.

Jessica recently wrote about difficulties with physicalist accounts of the world and alternatives to logical counterfactuals. In my recent post about the deconfusing human values research agenda, Charlie left a comment highlighting that my current model depends on a notion of "could have done something else" to talk about decisions.

Additionally, I have a strong belief that the world is subjectively deterministic, i.e. that from my point of view the world couldn't have turned out any other way than the way it did because I only ever experience myself to be in a single causal history. Yet I also suspect this is not the whole story because it appears the world I find myself in is one of many possible causal histories, possibly realized in many causally isolated worlds after the point where they diverge (i.e. a non-collapse interpretation of quantum physics).

So this leaves me in a weird place. When thinking about values, it often makes sense to think about the downstream effects of values on decisions and actions, and in fact many people try to infer upstream values from observations of downstream behaviors. Yet the notion of "deciding" implies there was some choice to make, which I think maybe there wasn't. Thus I have theories that conflict with each other yet seek to explain the same phenomena, so I'm confused.

Seeking to see through this confusion, what are some ways of reconciling both the experience of determinism and the experience of freedom of choice or free will?

Since this has impacts on how to think about decision theory, my hope is that people might be able to share how they've thought about this question and tried to resolve it.


4 Answers

It's a great post, just doesn't quite go far enough...

2 · G Gordon Worley III · 1y
I agree. I think Jessica does a good job of incidentally capturing why it doesn't [https://www.lesswrong.com/posts/yBdDXXmLYejrcPPv2/two-alternatives-to-logical-counterfactuals], but to reiterate:
* Eliezer is only answering the question of what the algorithm is like from the inside;
* it doesn't offer a complete alternative model, only shows why a particular model doesn't make sense;
* and so we are left with the problem of how to understand what it is like to make a decision from an outside perspective, i.e. how do I talk about how someone makes a decision and what a decision is from outside the subjective uncertainty of being the agent in the time prior to when the decision is made.
Finally, I don't think it totally rules out the possibility of talking about possible alternatives, only talking about them via a particular method, and thus maybe there is some other way to have an outside view on degrees of freedom in decision making after a decision has already been made.
2 · shminux · 1y
I am confused... Are you asking how Omega would describe someone's decision-making process? That would be like watching an open-source program execute. For example, if you know that the optimization algorithm is steepest descent, and you know the landscape it is run on, you can see every step it makes, including picking one of several possible paths.
4 · G Gordon Worley III · 1y
Essentially yes, but with the caveat that I want to find a model in which to frame that description that doesn't require constant appeal to subjective experience to explain what a decision is, while also not knowing what the program will do until it's done it (no hypercomputation) and not depending on constantly modeling Omega's uncertainty. Maybe that's too much to ask, but it's annoying to constantly have to frame things in terms of what was known at a particular time for the notion of a decision or choice to make sense, so ideally we find a framework for talking about these things that remains sensible while abstracting that detail away.
2 · nshepperd · 1y
I must admit I can't make any sense of your objections. There aren't any deep philosophical issues with understanding decision algorithms from an outside perspective. That's the normal case! For instance, A* [https://en.wikipedia.org/wiki/A*_search_algorithm].
2 · [comment deleted] · 1y
1 · TAG · 1y
Do we have a good reason to think an algorithm would feel like anything from the inside? Which particular model? I can't see why you shouldn't be able to model subjective uncertainty objectively.
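shminux's steepest-descent analogy above can be made concrete with a short sketch (the function names and landscape here are illustrative, not from the thread): an outside observer who knows the algorithm and its inputs can watch every step it takes, and "rewinding time" just means replaying the run, which traces the identical path.

```python
# Watching a deterministic optimizer "decide" from the outside.
# Given the same landscape and starting point, steepest descent
# always traces the same path: every "choice" is fixed in advance
# for an observer with full knowledge of the program.

def steepest_descent(grad, x0, lr=0.1, steps=50):
    """Follow the negative gradient; fully deterministic."""
    path = [x0]
    x = x0
    for _ in range(steps):
        x = x - lr * grad(x)
        path.append(x)
    return path

# Landscape f(x) = (x - 3)^2, so grad(x) = 2 * (x - 3).
grad = lambda x: 2 * (x - 3)

run1 = steepest_descent(grad, x0=0.0)
run2 = steepest_descent(grad, x0=0.0)  # "rewind time" and run again

assert run1 == run2  # the same "decisions" every time
```

Nothing in this description appeals to what the algorithm feels like from the inside; the observer just reads off the trajectory.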

My answer is a rather standard compatibilist one: the algorithm in your brain produces the sensation of free will as an artifact of an optimization process.

There is nothing you can do about it (you are executing an algorithm, after all), but your subjective perception of free will may change as you interact with other algorithms, like me or Jessica or whoever. There aren't really any objective intentional "decisions", only our perception of them. Therefore decision theories are just byproducts of all these algorithms executing. It doesn't matter, though, because you have no choice but to feel that decision theories are important.

So, watch the world unfold before your eyes, and enjoy the illusion of making decisions.

I wrote about this over the last few years:

https://www.lesswrong.com/posts/NptifNqFw4wT4MuY8/agency-is-bugs-and-uncertainty

https://www.lesswrong.com/posts/TQvSZ4n4BuntC22Af/decisions-are-not-about-changing-the-world-they-are-about

https://www.lesswrong.com/posts/436REfuffDacQRbzq/logical-counterfactuals-are-low-res

Thanks, I'll revisit these. They seem like they might be pointing towards a useful resolution I can use to better model values.

2 · shminux · 1y
Feel free to let me know either way, even if you find that the posts seem totally wrong or missing the point.
2 · G Gordon Worley III · 1y
Okay, so now that I've had more time to think about it, I do really like the idea of thinking of "decisions" as the subjective expression of what it feels like to learn what universe you are in, and this holds true for the third-person perspective of considering the "decisions" of others: they still go through the whole process that feels from the inside like choosing or deciding, but from the outside there is no need to appeal to this to talk about "decisions". Instead, to outside observers, "decisions" are just resolutions of uncertainty about what will happen to a part of the universe modeled as another agent. This seems quite elegant for my purposes, as I don't run into the problems associated with formalizing UDT (at least, not yet), and it lets me modify my model for understanding human values to push "decisions" outside of it or into the after-the-fact part.
4 · shminux · 1y
Thank you for taking your time to think about this approach, and I am happy it makes sense. I like your summary. Feel free to message me if you want to discuss this some more.
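The "decisions as resolutions of uncertainty" framing discussed above can be sketched in a few lines of Python (all names here are hypothetical): the agent is an ordinary deterministic function, and a "decision", from the outside, is just the observer's distribution over outcomes collapsing to certainty once the output is observed.

```python
# An outside observer's view of a "decision": nothing in the agent
# requires a primitive notion of choice; the observer simply holds
# uncertainty about the agent's output and resolves it on observation.

def agent(x):
    """A deterministic agent modeled as a pure function."""
    return "left" if x % 2 == 0 else "right"

observer_belief = {"left": 0.5, "right": 0.5}  # uncertainty beforehand
action = agent(42)                             # the world unfolds
# Observing the action resolves the observer's uncertainty:
observer_belief = {a: float(a == action) for a in observer_belief}

assert observer_belief == {"left": 1.0, "right": 0.0}
```

The "decision" lives entirely in the observer's model of the agent, not in the agent's physics.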

To me it seems that the world couldn't have turned out any other way, but it's useful to think about it as if it could in the moment. The decision you're ultimately making after careful consideration is the one you'd make, no matter how many times you'd rewind time. Having the ability to make a choice doesn't oppose determinism and its ramifications. With the right input you'll produce the right output.
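As a toy illustration of this answer (hypothetical code, not from the post): if deliberation is a deterministic function of its inputs, then "rewinding time" means calling the function again on identical input, which necessarily yields the same decision.

```python
# Deliberation modeled as a pure function: with the right input,
# you produce the right output, every time you replay it.

def agent_decide(observations):
    """'Careful consideration' as a deterministic function of input."""
    score = sum(observations)
    return "act" if score > 0 else "wait"

history = [1, -2, 4]
first_run = agent_decide(history)
rewound_run = agent_decide(history)  # rewind time and replay

assert first_run == rewound_run  # same input, same output
```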

Additionally, I have a strong belief that the world is subjectively deterministic, i.e. that from my point of view the world couldn’t have turned out any other way than the way it did because I only ever experience myself to be in a single causal history.

That seems to be a non-sequitur. The fact that things did happen in one particular way does not imply that they could only have happened that way.

Just noticed that the same error is in Possibility and Couldness:

The coin itself is either heads or tails.

That doesn’t mean it must have been whatever it was,

That seems to be a non-sequitur. The fact that things did happen in one particular way does not imply that they could only have happened that way.

This would imply multiple causal histories for exactly the same world state. This can happen in sufficiently "small" universes, like Conway's Game of Life, but it does not, as far as I know, appear to happen in ours, or if it does it happens over such large time scales that we can act as if it doesn't since we'll never encounter it. (Although I guess we could always end up having been wro... (read more)

1 · TAG · 1y
So you are assuming that the world state at time T happened inevitably, and you are objecting to the idea that there is more than one possible history leading up to that state. But indeterminism doesn't state that the present moment happened inevitably, so what you are saying is not a genuine objection to indeterminism. And merely observing that something happened is not evidence that it happened inevitably, because inevitability is not a sense-datum.
2 · G Gordon Worley III · 1y
I feel like we have a lot of evidence, from the success of our deterministic models of physics, that lets us infer with high likelihood that the universe is deterministic. What is the specific alternative you are trying to offer (as in, what exactly does "indeterminism" mean), and what are your reasons for thinking it worth consideration?
1 · TAG · 1y
I don't know whether that is supposed to mean that physics is all deterministic or mostly deterministic. But, however you feel, there are a lot of open questions, and not only about well-known problem areas like quantum mechanics. Saying that you personally have not supplied a good reason to believe in determinism is not equivalent to saying that determinism is false. Saying that indeterminism might hold is not saying I personally believe in it. Determinism is the theory that events occur with an objective probability p=1, and indeterminism is the theory that they occur with p<1, with corollaries such as the existence of real counterfactuals.