I think my core issue with the above is the nature of the specification of the problem of "replacing A in B". Allowing passing to an arbitrary equivalent program C before replacing exact instances of A seems overly permissive, and to allow in exactly the kind of principle-of-explosion issue that logical counterfactuals have. Suppose for instance that A() and B() both halt with a defined value, say A() = a and B() = b. As framed above, I can take C to be a program that computes [...] (for some [...]), where [...] is the result of substituting all exact instances of A in w


What counts as a "subprogram"? Let's say the world is a grid of Game of Life
cells evolving in discrete time. Let's further say that this world contains
embedded computers that run programs. Will your notion of "subprogram",
specified purely in terms of Game of Life cells, be strong enough to capture
subprograms of these programs?
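To make "a grid of Game of Life cells evolving in discrete time" concrete, here is a minimal step function (my illustration, not part of the discussion); the open question above is precisely what substructure of such a world would count as a "subprogram":

```python
from collections import Counter

def step(live):
    """Advance one Game of Life generation; `live` is a set of (x, y) live cells."""
    # Count, for every cell, how many live neighbours it has.
    counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # Birth on exactly 3 neighbours; survival on 2 or 3.
    return {c for c, n in counts.items() if n == 3 or (n == 2 and c in live)}

# A blinker oscillates with period 2:
blinker = {(0, 0), (1, 0), (2, 0)}
print(step(step(blinker)) == blinker)  # True
```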

I agree that that is a problem that both this approach to counterfactuals and the FDT logical-counterfactual approach share. The particular problem I was hoping this approach avoids was the one of assuming mutually-exclusive logical facts, such that all-but-one of them must necessarily be false, and the implications this has for the agent's consistency and reasoning about its actions. Are you saying that they are the same problem, that the second problem is comparatively not worth solving, or something else?


I'm saying they are the same problem. These sentences are logically equivalent:
"A() = 1 provably implies B() = 2"
"There's a program C provably equivalent to B such that, if we replace all exact
instances of A inside C with 1, the resulting program will return 2"
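A toy sketch of the second sentence's operation (my illustration; the textual substitution and the names A, B, C are the hypothetical programs from the quoted equivalence, not a formal treatment):

```python
def A():
    return 1

def B():
    return A() + 1  # B() = 2 whenever A() = 1

# Take C to be (the source of) a program equivalent to B, then replace every
# exact instance of a call to A with the constant 1.
C_source = "lambda: A() + 1"
C_substituted = C_source.replace("A()", "1")

result = eval(C_substituted)()
print(result)  # 2: the substituted program returns 2
```

The interesting (and contested) part is the freedom to pick any provably equivalent C before substituting, which is what the parent comment objects to.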

I have indeed read many of those posts already (though I appreciate some reference to them in the original post would have been sensible, I apologise). Chris_Leong's Deconfusing Logical Counterfactuals comes pretty close to this - the counterfactual model I'm interested in corresponds to their notion of "Raw Counterfactual", but AFAICT they're going in a somewhat different direction with the notion of "erasure" (I don't think it should be necessary to forget that you've seen a full box in the transparent variant of Newcomb's problem, if you explicitly cons


I suspect the real underlying issue is that of free will: all decision theories
assume we can make different decisions in the EXACT SAME circumstances, whereas
from what we understand about the physical world, there is no such thing, and
the only non-dualist proposal on the table is Scott Aaronson's freebits. I
wrote a related post last year. We certainly do have a very realistic illusion
of free will, to the degree where any argument to the contrary tends to be
rejected, ignored, strawmanned or misinterpreted. If you read through the
philosophical writings on compatibilism, people keep talking past each other
all the time, never getting to the crux of their disagreement. Not that it (or
anything else) matters in a universe where there is no freedom of choice,
anyway.



I'm having difficulty following the line of the proof beginning "so, either way, PA is inconsistent". We have □(A=cross⇒U=10) and □(A=cross⇒U=-10), which together imply that □(A≠cross), but I'm not immediately seeing how this leads to □⊥?


Ah, got there. From □(A≠cross), we get specifically □(A=cross⇒U=10) and thus
□□(A=cross⇒U=10). But we have □(A=cross⇒U=10)⇒A=cross directly as a theorem
(axiom?) about the behaviour of A, and we can lift this to
□□(A=cross⇒U=10)⇒□(A=cross), so □(A=cross) also and thus □⊥.


In fact, all you know is that your credence of event H is somewhere in the interval [0.4, 0.6]

This really isn't how I understand credences to work. Firstly, they don't take ranges, and secondly, they aren't dictated to me by the background information, they're calculated from it. This isn't immediately fatal, because you can say something like:

The coin was flipped one quintillion times, and the proportion of times it came up heads was A, where A lies in the range [0.4, 0.6]

This is something you could actually tell me, and would have the effect that ...
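For concreteness (my toy model, not something the quoted post specifies): if the background information pins down only that the long-run heads frequency A lies in [0.4, 0.6], a Bayesian with a uniform prior over that interval still computes a single credence for the next flip, namely the prior expectation of A:

```python
# Assumed model: heads frequency A is uniform on [lo, hi];
# credence in heads on the next flip = E[A] under that prior.
lo, hi = 0.4, 0.6
n = 100_000
# E[A] by midpoint integration over the interval
credence = sum(lo + (hi - lo) * (i + 0.5) / n for i in range(n)) / n
print(credence)  # 0.5 (up to floating-point error)
```

The point being that a range in the background information need not translate into a range-valued credence.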

Again, I agree that the problem of identifying what logical structures (wherever they occur) count as implementing a particular function is a deep and interesting one, and not one that I am claiming to have solved. But again, I do not agree that it is a problem I have introduced? An FDT agent correctly inferring the downstream causal results of setting FDT(P, G) = a would, in general, have to identify FDT(P, G) being computed inside a Game of Life simulation, if and where such a calculation occurred.

While I am indeed interested in exploring the answ
