Betting on Logic

Comment karma and authors (thread order): gjm (7) · Sylvester Kollin (4) · Zane (3) · gjm (2) · Zane (4) · Sylvester Kollin (1) · Charlie Steiner (3) · Sylvester Kollin (1) · Charlie Steiner (2) · Sylvester Kollin (1) · Charlie Steiner (2) · ProgramCrafter (1) · Robin Richtsfeld (-1)


13 comments

I am very much not an expert on this. But: I don't see why bet 1 "subjunctively dominates" bet 2.

Suppose I'm currently planning to take bet 2, and suppose PA is able to prove that. Then I am expecting to get +1 from the bet.

Now, suppose we consider switching the output of my algorithm from "bet 2" to "bet 1". Then, counterfactually, PA will no longer prove that I take bet 2, so I now expect to be taking bet 1 in the not-P case, for an outcome of -1.

This is not better than the +1 I am currently expecting to get by taking bet 2.

What am I missing? (My best guess is that you reckon the comparison I should be doing isn't -1 after switching versus +1 before switching, but rather -1 after switching versus -10 from still taking bet 2 after the output of my algorithm has changed; but if so, I don't understand why that would be the right comparison to make.)

To be sure, switching to Bet 1 is great *evidence* that P is true (that's the whole point), but that's not the sort of reasoning FDT recommends. Rather, the question is whether we take the Peano axioms to be downstream of the output of the algorithm in the relevant sense.

As the authors make clear, FDT is supposed to be "structurally similar" to CDT^{[1]}, and in the same way that CDT regards the history and the laws as being out of the agent's control in Ahmed's problems, FDT should arguably regard the Peano axioms as out of the agent's control (i.e., "upstream" of the algorithm). What could be more upstream?

^{^}Levinstein and Soares write (page 2): "FDT is structurally similar to CDT, but it rectifies this mistake by recognizing that logical dependencies are decision-relevant as well."

I think maybe we're running into the problem that FDT isn't (AIUI) really very precisely defined. But I think I agree with Zane's reply to your comment: two (apparently) possible worlds where my algorithm produces different decisions are also worlds where PA proves different things about its output (or at least they might be; PA can't prove everything that's true), because those are worlds where I'm running different algorithms. And unless I'm confused (which I very much might be), that's much of the *point* of FDT: we recognize different decisions as being consequences of running different algorithms.

I would think that FDT chooses Bet 2, unless I'm misunderstanding something about the role of Peano Arithmetic here. Taking Bet 2 results in P being true, and vice versa for Bet 1; therefore, the only options that are actually possible are the bottom left and the top right.

In fact, this seems like the exact sort of situation in which FDT can be easily shown to outperform CDT. CDT would reason along the lines of "Bet 1 is better if P is true, and better if P is false, and therefore better overall" without paying attention to the direct dependency between the output of your decision algorithm and the truth value of P.
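As a toy illustration of that contrast: the sketch below uses a payoff table I have reconstructed from figures quoted elsewhere in this thread (+1, -1, and -10 appear above; the +10 for Bet 1 when P holds is my assumption, chosen so that Bet 1 "dominates" cell by cell).

```python
# Hypothetical payoff table reconstructed from the thread; the +10 entry
# is an assumption.  P is the proposition that PA proves the agent takes
# Bet 2, so P holds exactly when Bet 2 is the algorithm's output.
PAYOFF = {
    ("bet1", True): 10,
    ("bet1", False): -1,
    ("bet2", True): 1,
    ("bet2", False): -10,
}

def consistent(action, p):
    """Only two of the four cells describe possible worlds."""
    return p == (action == "bet2")

# CDT-style dominance: hold P fixed and compare the bets cell by cell.
dominates = all(PAYOFF[("bet1", p)] > PAYOFF[("bet2", p)] for p in (True, False))

# FDT-style choice: evaluate each action only in the world consistent with it.
fdt_choice = max(
    ("bet1", "bet2"),
    key=lambda a: next(PAYOFF[(a, p)] for p in (True, False) if consistent(a, p)),
)

print(dominates)   # True: Bet 1 looks better with P held fixed
print(fdt_choice)  # bet2: +1 beats -1 once impossible cells are dropped
```

The point of the sketch is only that the two procedures disagree once the truth of P is tied to the agent's own output.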

I'm not quite sure what Yudkowsky and Soares meant by "dominance" there. I'd guess on priors that they meant FDT pays attention to those dependencies when deciding whether one strategy outperforms another... but yeah, they kind of worded it in a way that suggests the opposite interpretation.

I generally think of FDT as taking a causal model of the world and augmenting it with "logical nodes" (that have to be placed in a common-sense, non-systematic way, which is an issue with FDT). Whether or not some FDT agent regards "bet on 1 while PA proves I pick 2" as an option depends on how you've set up the logical nodes in your augmented model.

If the agent evaluates actions by pretending to control a logical node that's upstream of both its own action and PA proofs about its action (which is pretty reasonable), then "bet on 1 while PA proves I pick 2" is not a counterfactual it ever considers, and FDT picks 2.
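A minimal sketch of that picture (the names and structure here are my own, not the authors' formalism): one logical node holds the algorithm's output, and both the physical action and PA's theorem about the agent are computed downstream of it, so no intervention can separate them.

```python
# Hypothetical "augmented causal model": a single logical node L holds the
# algorithm's output, and both downstream nodes read from it.
def intervene(logical_output):
    """Counterfactual world obtained by setting the logical node L."""
    action = logical_output                      # action node copies L
    pa_proves_bet2 = (logical_output == "bet2")  # proof node tracks L too
    return (action, pa_proves_bet2)

# FDT only considers counterfactuals generated by intervening on L:
options = [intervene(o) for o in ("bet1", "bet2")]
print(options)  # [('bet1', False), ('bet2', True)]
# ('bet1', True) -- "bet on 1 while PA proves I pick 2" -- never shows up.
```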

Right, but it's fairly clear to me that this is *not* what the authors have in mind. For example, they cite Bjerring (2014), who proposes very specific and precise extensions of the Lewis-Stalnaker semantics.

It's fairly clear to me that the authors do not have any specific and precise method in mind, Bjerring or no Bjerring.

From the paper:

While we do not yet have a satisfying account of how to perform counterpossible reasoning in practice, the human brain shows that reasonable heuristics exist.

Unfortunately, it’s not clear how to define a true operator.

In fact, any agent-independent rule for construction of counterpossibles is doomed, because different questions can cause the same mathematical change to produce different imagined results. What mathematical propositions get chosen to be "upstream" or "downstream" has to depend on what you're thinking of as "doing the changing" or "doing the reacting" for the question at hand.

This is important both normatively (e.g. if you were somehow designing an AI that used FDT), and also to understand how humans reason about thought experiments - by constructing the counterfactuals *in response* to the proposed thought experiment.

It's fairly clear to me that the authors do not have any specific and precise method in mind, Bjerring or no Bjerring.

Of course they don't have a specific proposal in the paper. I'm just saying that it seems like they would want to be more precise, or that a full specification requires more work on counterpossibles (which you seem to be arguing against). From the abstract:

While not necessary for considering classic decision theory problems, we note that a full specification of FDT will require a non-trivial theory of logical counterfactuals and algorithmic similarity.

...

What mathematical propositions get chosen to be "upstream" or "downstream" has to depend on what you're thinking of as "doing the changing" or "doing the reacting" for the question at hand.

If this is in fact how we should think about FDT, the theory becomes very uninteresting since it seems like you can then just get whatever recommendations you want from it.

If this is in fact how we should think about FDT, the theory becomes very uninteresting since it seems like you can then just get whatever recommendations you want from it.

Well, just because something is vague and relies on common sense doesn't mean you can get whatever answer you want from it.

And there's still plenty of progress to be made in formalizing FDT - it's just that a formalization of an FDT agent isn't going to reference some agent-independent way of computing counterpossibles. Instead it's going to have to contain standards for how best to compute counterpossibles on the fly in response to the needs of the moment.

This is pretty equivalent to the original Newcomb problem (https://www.lesswrong.com/posts/6ddcsdA2c2XpNpE5x/newcomb-s-problem-and-regret-of-rationality).

FDT is not false; it's just not applicable here, since it has the precondition that the agent's reasoning process and decision do not influence the probability balance between P and not-P.

A material conditional P --> Q is true unless P is true and Q is false.

The proposition P --> Q can be true even if P is false (in that case it is true regardless of Q).

The proposition P --> Q can be false only if P is true (namely, when Q is false).

You may assume the Peano axioms to be true or to be false; there is no right or wrong.
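For reference, the material conditional's full truth table can be checked mechanically; a one-line definition makes the cases explicit:

```python
# Material conditional: P --> Q is false exactly when P is true and Q is false.
implies = lambda p, q: (not p) or q

for p in (True, False):
    for q in (True, False):
        print(f"P={p!s:5} Q={q!s:5} P-->Q={implies(p, q)}")
# The only False row is P=True, Q=False.
```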

Consider the following decision problem inspired by Ahmed (2013, 2014).

What does functional decision theory (Yudkowsky and Soares [2018], Levinstein and Soares [2020]), FDT, recommend? It seems like taking Bet 1 "subjunctively dominates"^{[1]} taking Bet 2, so FDT recommends taking Bet 1. But one should take Bet 2, so FDT is false.

^{^}Yudkowsky and Soares (2018) write: