https://www.lesswrong.com/posts/8dbimB7EJXuYxmteW/fixdt
it doesn't seem particularly "post-"rational to think about the potentially self-referential structure of action-dependent beliefs
it doesn’t seem particularly “post-”rational to think about the potentially self-referential structure of action-dependent beliefs
That’s as may be. I’m responding to a class of arguments that I’ve encountered exclusively in that sort of context.
Anyhow, thanks for the link; I’ll take a look at it and think about whether it’s relevant to what I’m saying here.
Ok, I’ve now read the linked post.
As far as I can tell, the account of decision-dependent beliefs described in that post is entirely compatible with what I say here.
(The account of “belief-dependent beliefs”, if you will, is a different matter; but I make no claims about that, in this post. Also, I think that the notion of “world reacts to agent’s beliefs”, as described there and elsewhere, is confused in an important way, but that’s a discussion for another time.)
On the whole, I must admit that I’m slightly confused about what you were getting at, with that link.
hmm, I did read you to be making claims about that, which is why I linked it. in particular:
As I’ve shown, there is little material difference between a belief that’s “about the future” and one that’s “about a part of the present concerning which we have insufficient information”
There seems to me to be a legitimate difference in processes which implement decision theories that choose actions by choosing belief-dependent beliefs, in that which belief is true is settled computationally only after the decision of which belief to hold. In some logics this is equivalent, but some of your emphasized statements in this post don't seem as obviously justified to me as you assert them to be. But if that doesn't justify the connection to you, then I don't think I'm interpreting my intuitions into precise claims correctly, and so we won't be able to do further evaluation of correctness of the intuitions I get from a difference I see between your post and that one, at least in this thread or until I can be more specific.
Meta note: it's possible I was irrationally reacting to your use of emphasis and asserting-before-justifying, in, e.g., the sentence
Of course any philosopher worth his salt will find much to quarrel with, in that highly questionable account of decision-making
"of course" and "highly questionable" don't hold in their syntax a statement of first person perspective, which my brain automatically translates as a peer-pressure-backed request to update before processing the rest of the claims. I generally prefer to avoid such claims and try to parameterize first-person-ness whereever I can. This is not material to your point, but seems to have affected some features in how I responded in ways that were avoidable by either of us, and is an example of the sort of thing I think is an unnecessary overhead in iterative-disagreement-based communication.
But if that doesn’t justify the connection to you, then I don’t think I’m interpreting my intuitions into precise claims correctly, and so we won’t be able to do further evaluation of correctness of the intuitions I get from a difference I see between your post and that one, at least in this thread or until I can be more specific.
Alright. If you have further thoughts on the matter in the future, I’ll certainly be interested in reading them.
“of course” and “highly questionable” don’t hold in their syntax a statement of first person perspective
Correct, it is not a first person perspective, but rather a reference to an ongoing debate in philosophy (of course, you can guess which side of it I come down on…). This post originally having been a comment in a thread that mostly wasn’t about this specific point, I didn’t add any links to point a reader to references for this; I agree that this is a lacuna, and I’ll see about adding a clarifying link or two.
However, please note that the rest of the post does not depend on this point! Take a closer look at the paragraph beginning with “Let us consider again the belief”; as you see, the logic that I outline in the last part of the post is agnostic about whether beliefs are prior to decisions, or vice versa.
(You’re at least the second reader to find this unclear, which suggests that it could use a bit of an edit…)
(EDIT: Edited.)
a … request to update before processing the rest of the claims
asserting-before-justifying
Please see this old comment for my stance on this.
Embedded agency and beliefs also lead to some kind of embedded truth values, with all the strangeness of depending on possible values of deterministic things. Consider some complicated computation F that would compute a particular value F() such as 2 or 5, but we don't yet know which one. And consider a different computation G=F+10. Does the value G() of G depend on F()? Well, F() is some particular number; what does it mean to depend on it? And also, we don't know the number. But there is still some sense in which G() depends on F(), even though G() is also just some particular number. It doesn't make much sense to claim that 15 depends on 5, outside the context of the computations we are talking about.
Now, we can also consider the truth value of the claim that G()>14. Does the truth value of that claim depend on F()? It seems that it does in some sense, in the same way that G() depends on F(): the claim G()>14 is true iff the claim F()>4 is true, and the truth value of F()>4 depends on the value of F(). This is so even though F() evaluates to some particular number such as 5, which brings out a different sense in which G() (which is just 15) or G()>14 (which is just true) doesn't depend on F().
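A minimal code sketch of this point, in Python (the particular computation chosen for F is a hypothetical stand-in, not anything from the comment above):

```python
# Purely illustrative sketch: F is a "complicated computation" whose value we
# don't know until we actually run it; G is defined in terms of F.

def F():
    # Stand-in for some complicated computation; here it counts the primes
    # below 12, which evaluates to 5.
    return len([n for n in range(2, 12) if all(n % d for d in range(2, n))])

def G():
    return F() + 10

# G() is just some particular number (here 15), and "G() > 14" is just some
# particular truth value (here True). Yet, relative to these definitions, it
# still makes sense to say that both depend on F(): G() > 14 holds iff F() > 4.
assert G() == F() + 10
assert (G() > 14) == (F() > 4)
```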
Uh… this is interesting, but I’m not sure what it has to do with the post? Could you please spell out the connection a bit? What am I missing?
The complicated computation F could be some person making a decision F(), and the complicated computation G could define an outcome of that decision that we are considering, so that G()>14 is a claim about that outcome with some truth value. If everything is deterministic, it might still make sense to say that G() depends on F(), and even that the truth of G()>14 depends on F(). And also that it's F that determines F(), and therefore that it's F that determines the truth value of G()>14.
(I think there is some equivocation about beliefs vs. decisions in the post, but it doesn't seem essential to the core puzzle it's bringing up. A decision is distinct from a belief about that decision, and if you are making decisions because of beliefs about those decisions, you run into Löbian traps, so it's not a good way of thinking about the role of beliefs about decisions.)
Hmm… well, I certainly agree that a decision is distinct from a belief about that decision (indeed, I explicitly argue against the opposing view), and that making decisions because of beliefs about those decisions is nonsensical.
I’m not sure about the rest of it. If you are making the assumption that everything is deterministic, it seems like that just gets you to @Shmi-style “decisions are not about changing the world, they are about learning what world you live in”, and (as I say in the post) that fully dissolves the “puzzle”.
For example, “beliefs are prior to decisions” is necessary in order for there to be any circularity, and yet it is, at best, a supremely dubious axiom.
This doesn’t seem that strange an assumption to hold? Pretty much all decisions I make happen because I believe something about the world. It’s hard to imagine what I would do if I believed nothing about the world, which seems the only way to make my decisions independent of my beliefs. I expect it would look like a seizure or coma.
It’s hard to imagine what I would do if I believed nothing about the world, which seems the only way to make my decisions independent of my beliefs. I expect it would look like a seizure or coma.
No, that’s not the only alternative, and certainly not anything like what I have in mind here. You seem to have understood that part of the post as saying something very odd—like perhaps that I was objecting to the notion that one must have any kind of beliefs at all, about anything, before one can make any kind of decisions at all? But of course nothing remotely like that is what I meant.
(As an aside before I continue: please note that the thesis of the post does not rest on this point. As I say in the post, we can grant that beliefs are prior to decisions, and yet the logic I describe still goes through.)
No, what I object to is “beliefs are prior to decisions” as an axiom, i.e. “beliefs are always prior to decisions” (in other words, the idea that beliefs cannot be a consequence of decisions). Put that way, the problem should be obvious: surely my belief that I will go to the beach this evening is an effect of my decision to go to the beach this evening, and not a cause of it! The reason why I believe that I will go to the beach this evening is that I’ve decided to do so.
This is nothing more than the perfectly ordinary, intuitive account of decision-making. I am not claiming anything weird here. On the contrary, I am saying that the converse—the idea that one first comes to believe that one will do something, and only then (and as a consequence) decides to do that thing (or perhaps that coming to believe that one will take some future action, and deciding to take that action, are actually one and the same)—is what’s unintuitive and unnatural. But that strange view is precisely the view that one must hold, to think that there’s some sort of circularity involved in the view that the truth of a belief like “I will go to the beach this evening” can be evaluated via the Tarski criterion. On the normal and intuitive view, there is no circularity at all. That’s all that I’m saying in that part.
What is the difference between this, and the quote above? Is it merely the fact that “I will go to the beach this evening” is about the future,
No, it's also the fact that the future is taken to be unfixed. Future facts are knowable for a Laplace's Demon in a determined universe.
(Note that events that are undetermined because they depend on an agentive decision that hasn't been made yet are only a subset of undetermined events. You would also have difficulty fixing a belief about a random nuclear decay in the future.)
And even free-will concerns evaporate if we adopt the perspective that decisions are not about changing the world, they are about learning what world you live in.
Which is to say free will concerns "evaporate" if you assume determinism. This is not rationality. Rationality means believing the world is deterministic iff the world is deterministic, believing it on the basis of the evidence... not assuming it to make problems go away.
Rationalists want as much as possible to be knowable, but believing things to be true because you want them to be is the essence of irrationality. Rationalists have to recognise that the nature of the world can constrain what is knowable.
(Indeterminism isn't the only problem area. There is also the indirectness of perception, which allows for simulation and other sceptical hypotheses -- you can't tell what is at the far end of a perceptual chain from the near end. And there is also the problem that a haphazardly evolved brain doesn't have any a priori guarantee of being able to understand anything. And, quite possibly, an is-ought gap that prevents values being fixed by facts.)
And even free-will concerns evaporate if we adopt the perspective that decisions are not about changing the world, they are about learning what world you live in.
Which is to say free will concerns “evaporate” if you assume determinism. This is not rationality. Rationality means believing the world is deterministic iff the world is deterministic, believing it on the basis of the evidence... not assuming it to make problems go away.
The point of the quoted line is not that we assume determinism in order to make free-will concerns go away—as you say, that would be obviously irrational.
Rather, the point is that there are other reasons to assume determinism. (Those reasons are described in the linked post, which is why I linked it.) If, for those other reasons, we do indeed adopt such a deterministic perspective—which, again, there are (or so the linked post claims!) good reasons to do—then we will find that free-will concerns have also evaporated.
(If those other reasons do not convince you, and you think that we shouldn’t adopt determinism, that is fine. You will note that this is not in any way a load-bearing assumption of the post’s argument.)
The linked post gives one reason, which is debatable, since it's been debated. You don't seem to have complete confidence in it yourself.
(This is a comment that has been turned into a post.)
The standard rationalist view is that beliefs ought properly to be determined by the facts, i.e. the belief “snow is white” is true iff snow is white.
Contrariwise, it is sometimes claimed (in the context of discussions about “postrationalism”) that:
even if you do have truth as the criterion for your beliefs, then this still leaves the truth value of a wide range of beliefs underdetermined
This is a broad claim, but here I will focus on one way in which such a thing allegedly happens:
… there are a wide variety of beliefs which are underdetermined by external reality. It’s not that you intentionally have fake beliefs which are out of alignment with the world, it’s that some beliefs are to some extent self-fulfilling, and their truth value *just is* whatever you decide to believe in. If your deep-level alief is that “I am confident”, then you *will* be confident; if your deep-level alief is that “I am unconfident”, then you will be that.
Another way of putting it: what is the truth value of the belief “I will go to the beach this evening”? Well, if I go to the beach this evening, then it is true; if I don’t go to the beach this evening, it’s false. Its truth is determined by the actions of the agent, rather than the environment.
The question of whether this view is correct can be summarized as this post’s title puts it: are agent-action-dependent beliefs (i.e., an agent’s beliefs about what actions the agent will take in the future) underdetermined by physical reality (and therefore not amenable to evaluation by Tarski’s criterion)?
Scenarios like “I will go to the beach this evening” are quite commonplace, so we certainly have to grapple with them. At first blush, such a scenario seems like a challenge to the “truth as a basis for beliefs” view. Will I go to the beach this evening? Well, indeed—if I believe that I will, then I will, and if I don’t, then I won’t… how can I form an accurate belief, if its truth value is determined by whether I hold it?!
… is what someone might think, on a casual reading of the above quote. But that’s not quite what it says, is it? Here’s the relevant bit:
Another way of putting it: what is the truth value of the belief “I will go to the beach this evening”? Well, if I go to the beach this evening, then it is true; if I don’t go to the beach this evening, it’s false. Its truth is determined by *the actions of the agent*, rather than the environment.
[emphasis mine]
This seems significant, and yet:
“What is the truth value of the belief ‘snow is white’? Well, if snow is white, then it is true; if snow is not white, it’s false.”
What is the difference between this, and the quote above? Is it merely the fact that “I will go to the beach this evening” is about the future, whereas “snow is white” is about the present? Are we saying that the problem is simply that the truth value of “I will go to the beach this evening” is as yet undetermined? Well, perhaps true enough, but then consider this:
“What is the truth value of the belief ‘it will rain this evening’? Well, if it rains this evening, then it is true; if it doesn’t rain this evening, it’s false.”
So this is about the future, and—like the belief about going to the beach—is, in some sense, “underdetermined by external reality” (at least, to the extent that the universe is subjectively non-deterministic). Of course, whether it rains this evening isn’t determined by the agent’s actions, but what difference does that make? Is the problem one of underdetermination, or agent-dependency? These are not the same problem!
Let’s return to my first example—“snow is white”—for a moment. Suppose that I hail from a tropical country, and have never seen snow (and have had no access to television, the internet, etc.). Is snow white? I have no idea. Now imagine that I am on a plane, which is taking me from my tropical homeland to, say, Murmansk, Russia. Once again, suppose I say:
“What is the truth value of the belief ‘snow is white’? Well, if snow is white, then it is true; if snow is not white, it’s false.”
For me (in this hypothetical scenario), there is no difference between this statement, and the one about it raining this evening. In both cases, there is some claim about reality. In both cases, I lack sufficient information to either accept the claim as true or reject it as false. In both cases, I expect that in just a few hours, I will acquire the relevant information (in the former case, my plane will touch down, and I will see snow for the first time, and observe it to be white, or not white; in the latter case, evening will come, and I will observe it raining, or not raining). And—in both cases—the truth of each respective belief will then come to be determined by external reality.
So the mere fact of some beliefs being “about the future” hardly justifies abandoning truth as a singular criterion for belief. As I’ve shown, there is little material difference between a belief that’s “about the future” and one that’s “about a part of the present concerning which we have insufficient information”. (And, by the way, we have perfectly familiar conceptual tools for dealing with such cases: subjective probability. What is the truth value of the belief “it will rain this evening”? But why have such beliefs? On Less Wrong, of all places, surely we know that it’s more proper to have beliefs that are more like “P(it will rain) = 0.25, P(it won’t rain) = 0.75”?)
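To make the parallel concrete, here is a minimal sketch in Python, under purely hypothetical assumptions (the claim, the prior probability, and the observation are all illustrative): a belief about this evening is held as a subjective probability now, and is evaluated by the ordinary Tarski criterion once evening comes.

```python
# Purely illustrative sketch. A belief about the future is held as a
# subjective probability now, and is evaluated by the ordinary Tarski
# criterion once the relevant part of reality is observed.

belief = {"claim": "it will rain this evening", "p_true": 0.25}  # assumed prior

# ... evening arrives, and we observe what actually happens ...
it_rained = False  # whatever we in fact observe

# Tarski-style evaluation: "it will rain this evening" is true iff it rains
# this evening. Nothing in this step cares whether the claim was "about the
# future" or about an unobserved part of the present.
belief_turned_out_true = it_rained
print(belief["claim"], "->", belief_turned_out_true)
```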
So let’s set the underdetermination point aside. Might the question of agent-dependency trouble us more, and give us reason to question the solidity of truth as a basis for belief? Is there something significant to the fact that the truth value of the belief “I will go to the beach this evening” depends on my actions?
There is at least one (perhaps trivial) sense in which the answer is a firm “no”. So what if my actions determine whether this particular belief is true? My actions are part of reality, just like snow, just like rain. What makes them special?
Well—the one might say—what makes my actions special is that they depend on my decisions, which depend (somehow) on my beliefs. If I come to believe that I will go to the beach, then this either is identical to, or unavoidably causes, my deciding to go to the beach; and deciding to go to the beach causes me to take the action of going to the beach. Thus my belief determines its own truth! Obviously it can’t be determined by its truth, in that case—that would be hopelessly circular!
Of course any philosopher worth his salt will find much to quarrel with, in that highly questionable account of decision-making. For example, “beliefs are prior to decisions” is necessary in order for there to be any circularity, and yet it is, at best, a supremely dubious axiom. Note that reversing that priority makes the circularity go away, leaving us with a naturalistic account of agent-dependent beliefs; free-will concerns remain, but those are not epistemological in nature.
And even free-will concerns evaporate if we adopt the perspective that decisions are not about changing the world, they are about learning what world you live in. If we take this view, then we are simply done: we have brought “I will go to the beach this evening” in line with “it will rain this evening”, which we have already seen to be no different from “snow is white”. All are simply beliefs about reality. As the agent gains more information about reality, each of these beliefs might be revealed to be true, or not true.
Very well, but suppose an account (like shminux’s, described in the above link) that leaves no room at all for decision-making is too radical for us to stomach. Suppose we reject it. And let us further suppose that we remain agnostic about whether beliefs are prior to decisions, or vice-versa (i.e., we bracket the “circularity” objection). Is there, then, something special about agent-dependent beliefs?
Let us consider again the belief that “I will go to the beach this evening”. Suppose I come to hold this belief (which, depending on which parts of the above logic we find convincing, either brings about, or is the result of, my decision to go to the beach this evening). But suppose that this afternoon, a tsunami washes away all the sand, and the beach is closed. Now my earlier belief has turned out to be false—through no actions or decisions on my part!
“Nitpicking!”, the one says. Of course unforeseen situations might change my plans. Anyway, what we really meant was something like “I will attempt to go to the beach this evening”. Surely, an agent’s attempt to take some action can fail; there is nothing significant about that!
But suppose that this afternoon, I come down with a cold. I no longer have any interest in beachgoing. Once again, my earlier belief has turned out to be false.
More nitpicking! What we really meant was “I will intend to go to the beach this evening, unless, of course, something happens that causes me to alter my plans.”
But suppose that evening comes, and I find that I just don’t feel like going to the beach, and I don’t. Nothing has happened to cause me to alter my plans, I just… don’t feel like it.
Bah! What we really meant was “I intend to go to the beach, and I will still intend it this evening, unless of course I don’t, for some reason, because surely I’m allowed to change my mind?”
But suppose that evening comes, and I find that not only do I not feel like going to the beach, I never really wanted to go to the beach in the first place. I thought I did, but now I realize I didn’t.
In summary:
There is nothing special about agent-action-dependent beliefs. They can turn out to be true. They can turn out to be false. That is all.