Cross-posted from my blog. Related to several of my previous posts.

(Epistemic status: I have no idea why such an obvious observation is never even mentioned by decision theorists. Or maybe it is and I have not seen it.)

A logical counterfactual, as described by Nate Soares:

In a setting with deterministic algorithmic agents, how (in general) should we evaluate the expected value of the hypothetical possible worlds in which the agent’s decision algorithm takes on different outputs, given that all but one of those worlds is logically impossible?

So we know that something happened in the past, but want to consider something else having happened instead under the same circumstances. Take the ubiquitous lament over a decision made, familiar to everyone: “I knew better than to do X and should have done Y instead”. It feels like there was a choice, and one could have made a different choice even while being the same person under the same circumstances.

Of course, a “deterministic algorithmic agent” would make the same decisions in the same circumstances, so what one really asks is: “what kind of [small] difference in the agent’s algorithm, and/or what kind of [minimally] different inputs into the algorithm’s decision-making process, would result in a different output?” When phrased this way, we are describing different “hypothetical possible worlds”. Some of these worlds correspond to different algorithms, and some to different inputs to the same algorithm, and in this way they are just as “logically possible” as the world in which the agent we observed took the action we observed.
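
To make this reframing concrete, here is a minimal sketch (the agent function, its threshold and the specific inputs are all made up for illustration): a deterministic agent is just a function from inputs to an output, and the “hypothetical possible worlds” differ either in the inputs or in the algorithm itself, never in the output given both.

```python
# A deterministic algorithmic agent is just a function from inputs to an output.
def agent(observation, threshold=5):
    return "Y" if observation > threshold else "X"

# The actual world: this algorithm, this input, hence this output.
actual = agent(4)                    # -> "X"

# Hypothetical possible worlds: the same algorithm with a different input...
world_a = agent(7)                   # -> "Y"
# ...or a slightly different algorithm (different threshold) with the same input.
world_b = agent(4, threshold=3)      # -> "Y"

# Each of these worlds is internally consistent; none of them requires
# the impossible combination agent(4, threshold=5) == "Y".
print(actual, world_a, world_b)      # X Y Y
```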

Why does it feel like “the agent’s decision algorithm takes on different outputs” is both logically possible and impossible? Because the world we see is in low resolution! Like when you see a road from high up, it looks like a single lane.

But when you zoom in, you see a maze of lanes, ramps and overpasses.

So how does one answer Nate’s question? (Which was not really a question, but maybe should have been.) Zoom In! See what the agent’s algorithm is like, what hardware it runs on, and what unseen-from-high-up inputs can cause it to switch lanes and take a different turn. Then the worlds which look counterfactual from high up (e.g. a car could have turned left, even though it has turned right) become physically unlikely when zoomed in (e.g. the car was in the right lane with no other way to go but right, short of jumping a divider). Treating the apparent logical counterfactuals like any other physical uncertainty seems like a more promising approach than inventing some special treatment for what appears to be an impossible possible world.
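
One way to cash out “treat it like any other physical uncertainty”: the zoomed-out view is a probability distribution over zoomed-in states, and the apparent counterfactual is just another branch of that distribution, not a logically impossible world. A toy sketch of the car example (the lanes and the probabilities are, of course, invented):

```python
# Zoomed-in view: the turn is fully determined by the (hidden) lane the car is in.
def turn(lane):
    return "right" if lane == "right lane" else "left"

# Zoomed-out view: the observer cannot see the lane, so their uncertainty lives in
# a prior over hidden states, not in a world where the same lane yields a different turn.
lane_prior = {"right lane": 0.7, "left lane": 0.3}

p_right = sum(p for lane, p in lane_prior.items() if turn(lane) == "right")
print(p_right)   # 0.7: "the car could have turned left" is ordinary uncertainty
```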

14 comments

I think this is somewhat similar to the argument I made in this post that logical counterfactuals are more about your state of knowledge than about the state of the universe. So if you had a complete model of the universe including yourself, you wouldn't need logical counterfactuals to know what to do, as you'd know that there is only one thing you could do (zoomed in view). However, if you have less knowledge, say you just know that the other agent is a clone of you and that you're playing a prisoner's dilemma game, you won't know what they will do (zoomed out view). Well, at least until we add in your decision theory.

TAG

Purely logical counterfactuals are based on incomplete knowledge, and since incomplete knowledge is nearly ubiquitous, there is no doubt about the existence of logical counterfactuals. The contentious issue is the existence of real counterfactuals.

There are three main sources of incomplete information:

  1. Inability to obtain complete objective information about a microstate. This can be caused by some aspect of physics, such as the Heisenberg uncertainty principle (not the same thing as indeterminism).

  2. Inability to obtain objective information about a state. This can be caused by the observer being causally involved in the system in question, so that it has to include its own actions in its predictions. The ultimate case is an agent trying to predict itself, and hitting Loebian obstacles.

  3. Inability to predict the future evolution of a system even given complete objective information about the starting states (i.e. problems 1 and 2 don't occur). This can be caused by causal indeterminism (not the same thing as the uncertainty mentioned in 1).

Logical counterfactuals are useful even if reality is knowable and deterministic, so that there are no real counterfactuals. Real counterfactuals can exist because of inconvenient features of the way physics works, and can't be refuted by an armchair argument. In particular, pointing out that some counterfactuals are based on limited information does not prove that they all are.

It's a good summary. Yes, the impossibility of observing a QM state without "collapsing" it (whatever the source of the apparent or actual collapse, be it MWI or gravity or whatever) limits the deterministic predictions, though not the probabilistic ones. Hence I followed Nate's qualification of a "deterministic algorithmic agent".

Yes, accurate self-prediction is an issue, in part because it results in self-modification. No, there is no such thing as causal indeterminism (an effect without a cause) except as a mind projection of incomplete information on the world. Unless I misunderstand what you mean by this term. Physics does not support this notion. That is precisely what I am stating in my post.

The whole notion of a counterfactual is a misapplication of incomplete knowledge, treating a low-resolution map as the territory. There is no qualitative difference between the difficulty of computing how a thrown die lands and the difficulty of computing some distant digit of the decimal expansion of pi. The tools are different, though. In the former case one might need high-definition cameras with some high-performance physics modeling hardware and software, while in the latter... well, also some hardware and software.
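
For what it's worth, the pi half really does come down to "some hardware and software". Here is a sketch using the Bailey-Borwein-Plouffe formula, which lets one read off a distant hexadecimal digit of pi without computing all the earlier ones (plain floating point, so only reliable for a few digits at a time):

```python
def bbp_series(j, n):
    """Fractional part of the sum over k of 16^(n-k) / (8k + j), as used in the BBP formula."""
    # Terms with k <= n have integer powers of 16; keep only their fractional
    # contribution by reducing 16^(n-k) modulo (8k + j).
    s = 0.0
    for k in range(n + 1):
        s = (s + pow(16, n - k, 8 * k + j) / (8 * k + j)) % 1.0
    # The remaining tail shrinks by a factor of 16 per term, so a few terms suffice.
    k = n + 1
    while True:
        term = 16.0 ** (n - k) / (8 * k + j)
        if term < 1e-17:
            break
        s += term
        k += 1
    return s % 1.0

def pi_hex_digit(n):
    """Hexadecimal digit of pi at position n+1 after the point (n is 0-indexed)."""
    x = (4 * bbp_series(1, n) - 2 * bbp_series(4, n)
         - bbp_series(5, n) - bbp_series(6, n)) % 1.0
    return "%x" % int(x * 16)

# pi = 3.243f6a8885a308d3... in hexadecimal
print("".join(pi_hex_digit(n) for n in range(8)))   # 243f6a88
print(pi_hex_digit(10_000))                          # a rather distant digit
```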

Maybe the idea of a counterfactual is a useful abstraction in some cases, though I wonder if other, less misleading abstractions would do the job just as well.

TAG

No, there is no such thing as causal indeterminism (an effect without a cause) except as a mind projection of incomplete information on the world.

That is not a fact, and cannot be established as a fact by armchair arguments, including arguments about "mind projection".

That argument from mind projection only establishes that it is possible for counterfactuals to exist in the mind, not that they cannot exist in reality as well. Moreover, it is countered by another argument from mind projection: that physical laws have no real existence, and are just bookkeeping devices invented by humans. But how can you have real, out-there determinism without real out-there laws? If you want an asymmetric conclusion, that determinism is true and indeterminism false, you need an argument that applies to some things and not others, whereas mind-projection arguments are more of a universal solvent.

Physics does not support this notion.

Who told you that? When I studied physics, I was told that QM was based on real indeterminism. (Admittedly, it turns out that on further investigation things are somewhat more complex...)

To the best of my understanding, the only non-deterministic part of QM is collapse. Whether this is considered to be "real" indeterminism depends on the interpretation - Strong Copenhagen says yes, Bayesian might be agnostic, Many Worlds says it's only indexical...

TAG

So?

Yes, it is similar, and your state of knowledge is a (small) part of the state of the Universe, unless you are a Cartesian dualist. Indeed we don't have "a complete model of the universe including yourself", and learning the agent's decision theory, among other things, is a useful way toward understanding the agent's actions. There is nothing logically counterfactual about it.

If you know that the other (deterministic or probabilistic) agent is a clone of you with exactly the same set of inputs as you, you know that they will do exactly the same thing you do (potentially flipping a coin the same way you do, though possibly with a different outcome if they are probabilistic). There is no known magic in the universe that would allow for anything else. Unless they are an inexact clone, of course, because in the zoomed-out view you don't see the small differences that can lead to very different outcomes.
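
A small sketch of that point, with the agent's internal "coin" modelled as a seeded pseudorandom generator (an assumption made purely for illustration; a genuinely indeterministic coin is exactly the "possibly with a different outcome" caveat and is not captured here):

```python
import random

def agent(observation, coin_state):
    rng = random.Random(coin_state)      # the agent's internal "coin"
    noise = rng.random()
    return "cooperate" if observation + noise > 1.0 else "defect"

# An exact clone shares the algorithm, the inputs, and the coin's state,
# so it necessarily does the same thing you do.
you   = agent(observation=0.6, coin_state=42)
clone = agent(observation=0.6, coin_state=42)
assert you == clone

# An inexact clone (same code, a small unseen difference in hidden state)
# can act differently, and that difference is invisible in the zoomed-out view.
other = agent(observation=0.6, coin_state=43)
print(you, clone, other)
```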

TAG

If you know that the other (deterministic or probabilistic) agent is a clone of you with exactly the same set of inputs as you, you know that they will do exactly the same thing you do (potentially flipping a coin the same way you do, though possibly with a different outcome if they are probabilistic)

That isn't literally true of a probabilistic agent: you can't use a coin to predict another coin. It is sort-of true that you find a similar statistical pattern... but that is rather beside the point of counterfactuals: if anything is probabilistic (really, and not as a result of limited information), it is indeterministic, and if anything is indeterministic, then it might have happened in another way, and there is your (real, not logical) counterfactual.

The different input is just as impossible as the different output. Humans implement counterfactual reasoning without zooming in, so we should look for math that allows this.

Why would different inputs be impossible? They can certainly be invisible at low resolution, but that doesn't make them impossible.

I understand that quote by Nate Soares to mean that other outputs of algorithms are impossible because in a deterministic universe, everything can only possibly have happened the way it did.

I don't know how shminux interprets the words, and if the question was related to this, but there is an issue in your use of "impossible". Things that happen in this world are actual, and things that happen in alternative worlds are possible. (The set of alternative worlds needs to be defined for each question.) An impossible situation is one that can't occur, that doesn't happen in any of the alternative worlds.

Thus outputs of an algorithm different from the actual output are not actual, and furthermore not possible, as they don't occur in the alternative worlds with the same algorithm receiving the same inputs. But there are alternative worlds (whose set is different from that in the previous sentence, no longer constrained by the state of the input) with the same algorithm receiving different inputs, and so despite not being actual, the different inputs are still possible, in other words not impossible.
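
A tiny sketch of that bookkeeping (the algorithms and inputs are invented): enumerate the alternative worlds as (algorithm, input) pairs with the output following deterministically, and "impossible" just means the combination occurs in none of them.

```python
from itertools import product

# Alternative worlds as (algorithm, input) pairs; the output follows deterministically.
algorithms = {"A": lambda x: x % 2, "B": lambda x: (x + 1) % 2}
inputs = [0, 1, 2, 3]
worlds = [(name, x, f(x)) for (name, f), x in product(algorithms.items(), inputs)]

actual = ("A", 2, algorithms["A"](2))   # the world we observed; its output is 0

# Same algorithm, same input, different output: no such world exists, so it is impossible.
impossible = [w for w in worlds if w[0] == "A" and w[1] == 2 and w[2] != actual[2]]

# Same algorithm, different input, different output: such worlds exist,
# so it is possible, merely not actual.
possible = [w for w in worlds if w[0] == "A" and w[1] != 2 and w[2] != actual[2]]

print(impossible)   # []
print(possible)     # [('A', 1, 1), ('A', 3, 1)]
```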

I agree with everything you said; it seems like common sense. I'm going by the quote above, and by the similar sentiment in multiple MIRI papers, that there are "logically impossible worlds" described by counterfactuals like "same agent, same input, different outcome". I am lost as to why this terminology is useful at all. Again, odds are, I am missing something here.

An input to another algorithm may be our source code, with the other algorithm's output depending on what they can prove about our output. If we assume they reason consistently, and we want to prove something about their output, we might assume what they prove about us even when that later turns out to be impossible.