But clearly you still made your final decision between 10 and 40 miles only when you were at Alderford, not hours before that. Our past selves can't simply force us to do things: the memory of a past "commitment" is only one factor that may influence our present decision making, it doesn't replace a decision. Otherwise, whenever we "decide" to definitely do an unpleasant task tomorrow rather than today ("I'll do the dishes tomorrow, I swear!"), we would in fact always follow through with it the next day, which isn't at all the case. (The Kavka/Newcomb cases are even worse than this, because there it isn't just irrational akrasia that prevents us from executing past "commitments", but instrumental rationality itself, at least if we believe that CDT captures instrumental rationality.)
A more general remark, somewhat related to reflexivity (reflectivity?): In the Where Luce and Krantz paper, Spohn also criticizes Jeffrey for allowing the assignment of probabilities to acts, because for Jeffrey everything (acts, outcomes, states) is a proposition, any Boolean combination of propositions is a proposition, and in his framework any proposition can be assigned a probability and a utility. But I'm pretty sure Jeffrey's theory doesn't strictly require that act probabilities are defined. Moreover, even if they are defined, it doesn't require them for decisions. That is, for outcomes O and an action A, to calculate the utility of A he only requires probabilities of the form P(O|A), which we can treat as basic probabilities instead of, frequentist style, as mere abbreviations for the ratio formula P(O ∧ A)/P(A). So P(A) and P(O ∧ A) can be undefined. In his theory, V(A) = Σ_O P(O|A) V(O ∧ A) is a theorem. I'm planning a post explaining Jeffrey's theory because I think it is way underappreciated: it's a general theory of utility, rather than just a decision theory restricted to "acts" and "outcomes". To be fair, I don't know whether that would really help much with elucidating reflectivity. The lesson would probably be something like "according to Jeffrey's theory you can have prior probabilities for present acts, but you should ignore them when making decisions". The interesting part is that his theory can't simply be dismissed, because the alternatives aren't as general and are thus not a full replacement.
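For concreteness, here is how I'd reconstruct that claim (my notation; a sketch of the standard presentation of Jeffrey's desirability axiom, not a quote from him):

```latex
% Desirability axiom, for incompatible propositions X, Y with P(X \lor Y) > 0:
V(X \lor Y) \;=\; \frac{P(X)\,V(X) + P(Y)\,V(Y)}{P(X) + P(Y)}
% With a partition \{O_i\} we have A = \bigvee_i (A \land O_i), hence
V(A) \;=\; \sum_i \frac{P(A \land O_i)}{P(A)}\,V(A \land O_i)
      \;=\; \sum_i P(O_i \mid A)\,V(A \land O_i)
```

The final expression mentions nothing but the conditionals P(O_i | A) and the values of the conjunctions, so once those conditionals are taken as basic, the theorem can be stated, and used for choice, even where P(A) is left undefined.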
A precisely formulated limitation is needed that will rule out the intention-detecting machine while allowing the sorts of self-knowledge that people observably use.
Maybe the first question, then, is what form of "self-knowledge" people do, in fact, observably use. I think we treat memories of past "commitments"/intentions more like non-binding recommendations from a close friend (our "past self"), which we may very well just ignore. Maybe there is an ideal rule of rationality saying we should always adhere to our past commitments, at least if we learn no new information. But I'd say "should" implies "can", so by contraposition, "not can" implies "not should". Which would mean that if precommitment is not possible for an agent, it is not required by rationality.
Spohn shows that you can draw causal graphs such that CDT can get the rewards in both cases (Newcomb's problem and the toxin puzzle), though only under the assumption that true precommitment is possible. But Spohn doesn't give arguments for the possibility of precommitment, as far as I can tell.
His solution seems to rely on the ability to precommit to a future action, such that the future action can be treated like an ordinary outcome:
It is obvious that in the situation thus presented one-boxing is rational. If my decision determines or strongly influences the prediction, then I rationally decide to one-box, and when standing before the boxes I just do this. (p. 101f)
If people can just "make decisions early", then one-boxing is, of course, the rational thing to do from the standpoint of CDT. It effectively means you are no longer deciding anything when you are standing in front of the two boxes; you are just slavishly one-boxing, as if under hypnotic suggestion, or as if somehow forced to one-box by your earlier self. The "decision" or "act" here can then be assigned a probability, because it is assumed there is nothing left to decide: it is effectively just a consequence of the real decision that was made much earlier, consistent with the view that an action in a decision situation may not be assigned a probability.
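To see why this makes one-boxing come out as the CDT-rational choice, here is a toy calculation (my own payoff numbers and predictor model; Spohn argues with causal graphs, not with this code), where the precommitment is the causal decision point and the later act merely executes it:

```python
# Toy Newcomb's problem with precommitment as the decision node.
# Causal structure: precommitment -> prediction -> box contents,
#                   precommitment -> later act (a mere consequence).

PAYOFFS = {
    # (predictor expects one-boxing, actual act): reward
    (True,  "one-box"): 1_000_000,
    (True,  "two-box"): 1_001_000,
    (False, "one-box"): 0,
    (False, "two-box"): 1_000,
}

def value_of_precommitment(precommit: str, accuracy: float = 0.99) -> float:
    """CDT expected value when the prediction causally depends on the precommitment."""
    p_predicted_one_box = accuracy if precommit == "one-box" else 1 - accuracy
    act = precommit  # true precommitment: the later act just executes it
    return (p_predicted_one_box * PAYOFFS[(True, act)]
            + (1 - p_predicted_one_box) * PAYOFFS[(False, act)])

print(value_of_precommitment("one-box"))  # 990000.0
print(value_of_precommitment("two-box"))  # 11000.0 -> CDT precommits to one-boxing
```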
The real problem with the precommitment route is that it assumes the possibility of "precommitment" in the first place. In reality, if you "commit" early to some action and are later faced with the situation in which the action has to be executed, you are still left with the question of whether or not to "follow through" with your commitment, which just means your precommitment wasn't real. You can't make decisions in advance; you can't simply force your later self to do things. The actual decision always has to be made in the present, and the supposed "precommitment" of your past self is nothing more than a suggestion.
(The impossibility of precommitment was illustrated in Kavka's toxin puzzle.)
It's worth repeating Spohn's arguments from Where Luce and Krantz Do Really Generalize Savage's Decision Model:
Now, probably anyone will find it absurd to assume that someone has subjective probabilities for things which are under his control and which he can actualize as he pleases. I think this feeling of absurdity can be converted into more serious arguments for our principle:
First, probabilities for acts play no role in decision making. For, what only matters in a decision situation is how much the decision maker likes the various acts available to him, and relevant to this, in turn, is what he believes to result from the various acts and how much he likes these results. At no place does there enter any subjective probability for an act. The decision maker chooses the act he likes most - be its probability as it may. But if this is so, there is no sense in imputing probabilities for acts to the decision maker. For one could tell neither from his actual choices nor from his preferences what they are. Now, decision models are designed to capture just the decision maker's cognitive and motivational dispositions expressed by subjective probabilities and utilities which manifest themselves in and can be guessed from his choices and preferences. Probabilities for acts, if they exist at all, are not of this sort, as just seen, and should therefore not be contained in decision models.
The strangeness of probabilities for acts can also be brought out by a more concrete argument: It is generally acknowledged that subjective probabilities manifest themselves in the readiness to accept bets with appropriate betting odds and small stakes. Hence, a probability for an act should manifest itself in the readiness to accept a bet on that act, if the betting odds are high enough. Of course, this is not the case. The agent's readiness to accept a bet on an act does not depend on the betting odds, but only on his gain. If the gain is high enough to put this act on the top of his preference order of acts, he will accept it, and if not, not. The stake of the agent is of no relevance whatsoever.
One might object that we often do speak of probabilities for acts. For instance, I might say: "It's very unlikely that I shall wear my shorts outdoors next winter." But I do not think that such an utterance expresses a genuine probability for an act; rather I would construe this utterance as expressing that I find it very unlikely to get into a decision situation next winter in which it would be best to wear my shorts outdoors, i.e. that I find it very unlikely that it will be warmer than 20°C next winter, that someone will offer me DM 1000.- for wearing shorts outdoors, or that fashion suddenly will prescribe wearing shorts, etc. Besides, it is characteristic of such utterances that they refer only to acts which one has not yet to decide upon. As soon as I have to make up my mind whether to wear my shorts outdoors or not, my utterance is out of place.
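A minimal sketch of that betting argument (my own toy construction, not from the paper): because the agent controls the act, whether she accepts a bet on her own act depends only on the gain attached to the act, never on the stake, so no pattern of accepted odds can reveal a "probability for the act":

```python
# Betting on one's own act: acceptance depends only on the gain, not the stake.

def accepts_bet_on_own_act(act_values: dict[str, float],
                           bet_act: str, gain: float, stake: float) -> bool:
    """Accept a bet paying `gain` if you perform bet_act, costing `stake` otherwise?

    Accepting and then not performing bet_act would just forfeit the stake,
    so a rational agent accepts exactly when bet_act plus the gain beats her
    best alternative. The stake (and hence the betting odds) drops out.
    """
    best_alternative = max(v for a, v in act_values.items() if a != bet_act)
    return act_values[bet_act] + gain > best_alternative

values = {"wear shorts": 0.0, "wear trousers": 5.0}
print(accepts_bet_on_own_act(values, "wear shorts", gain=10.0, stake=0.01))     # True
print(accepts_bet_on_own_act(values, "wear shorts", gain=10.0, stake=9_999.0))  # True
print(accepts_bet_on_own_act(values, "wear shorts", gain=1.0,  stake=0.01))     # False
```

For events outside the agent's control, by contrast, the acceptable odds do track the agent's probability, which is exactly the asymmetry Spohn is pointing at.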
I'm not the one addressed here, but the term "rational" can be replaced with "useful" in this context. The argument is that there is a difference between the questions "what is the most useful thing to do in this given situation?" and "what is the most generally useful decision algorithm, and what would it recommend in this situation?". Usefulness is a causal concept in both cases, but it is applied to different things (actions versus decision algorithms that cause actions). CDT answers the first question; MIRI-style decision theories answer something similar to the second.
What people like me claim is that the answers to these two questions can come apart. In Newcomb's problem, for example, having a decision algorithm that picks non-useful actions in certain situations can itself be useful: say, when an entity can read your decision algorithm, predict which action you would pick, and change its rewards based on that prediction.
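As a toy illustration of the divergence (my own code; the payoff matrix is the standard Newcomb one):

```python
# Question 1 vs question 2 in Newcomb's problem (standard payoff matrix).

PAYOFFS = {(True,  "one-box"): 1_000_000, (True,  "two-box"): 1_001_000,
           (False, "one-box"): 0,         (False, "two-box"): 1_000}

# Q1: most useful act, holding the already-made prediction fixed.
for predicted_one_box in (True, False):
    best_act = max(["one-box", "two-box"],
                   key=lambda act: PAYOFFS[(predicted_one_box, act)])
    print(predicted_one_box, best_act)   # two-boxing dominates either way

# Q2: most useful algorithm, when the predictor reads the algorithm
# before filling the boxes.
for algorithm in ("one-box", "two-box"):
    prediction = (algorithm == "one-box")
    print(algorithm, PAYOFFS[(prediction, algorithm)])
# one-box -> 1000000, two-box -> 1000: the two answers diverge.
```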
In fact, that the answers to the above questions can diverge was already discussed by Thomas Schelling and Derek Parfit. See Parfit in Reasons and Persons:
Consider Schelling's Answer to Armed Robbery. A man breaks into my house. He hears me calling the police. But, since the nearest town is far away, the police cannot arrive in less than fifteen minutes. The man orders me to open the safe in which I hoard my gold. He threatens that, unless he gets the gold in the next five minutes, he will start shooting my children, one by one.
What is it rational for me to do? I need the answer fast. I realize that it would not be rational to give this man the gold. The man knows that, if he simply takes the gold, either I or my children could tell the police the make and number of the car in which he drives away. So there is a great risk that, if he gets the gold, he will kill me and my children before he drives away.
Since it would be irrational to give this man the gold, should I ignore his threat? This would also be irrational. There is a great risk that he will kill one of my children, to make me believe his threat that, unless he gets the gold, he will kill my other children.
What should I do? It is very likely that, whether or not I give this man the gold, he will kill us all. I am in a desperate position. Fortunately, I remember reading Schelling's The Strategy of Conflict.
I also have a special drug, conveniently at hand. This drug causes one to be, for a brief period, very irrational. I reach for the bottle and drink a mouthful before the man can stop me. Within a few seconds, it becomes apparent that I am crazy. Reeling about the room, I say to the man: 'Go ahead. I love my children. So please kill them.' The man tries to get the gold by torturing me. I cry out: 'This is agony. So please go on.'
Given the state that I am in, the man is now powerless. He can do nothing that will induce me to open the safe. Threats and torture cannot force concessions from someone who is so irrational. The man can only flee, hoping to escape the police. And, since I am in this state, the man is less likely to believe that I would record the number on his car. He therefore has less reason to kill me.
While I am in this state, I shall act in ways that are very irrational. There is a risk that, before the police arrive, I may harm myself or my children. But, since I have no gun, this risk is small. And making myself irrational is the best way to reduce the great risk that this man will kill us all.
On any plausible theory about rationality, it would be rational for me, in this case, to cause myself to become for a period very irrational.
That seems to suggest we should play it safe and avoid eugenics. But doing nothing rather than something may well be much worse than what the blind idiot God does.
Movies in general require very little attention compared to just listening or even reading. Since they are audiovisual, they leave very little to the imagination. So I expect other influences have a larger effect.
Yes, but I was talking about humans. An AI might have a precommitment ability.