Sylvester Kollin

Comments

Ah, okay, got it. Sorry about the confusion. That description seems right to me, fwiw.

Thanks for clarifying. I still don't think this is exactly what people usually mean by ECL, but perhaps it's not super important what words we use. (I think the issue is that your model of the acausal interaction—i.e. a PD with survival on the line—is different to the toy model of ECL I have in my head where cooperation consists in benefitting the values of the other player [without regard for their life per se]. As I understand it, this is essentially the principal model used in the original ECL paper as well.)

The effective correlation is likely to be (much) larger for someone using UDT.

Could you say more about why you think this? (Or, have you written about this somewhere else?) I think I agree if by "UDT" you mean something like "EDT + updatelessness"[1]; but if you are essentially equating UDT with FDT, I would expect the "correlation"/"logi-causal effect on your opponent" to be pretty minor in practice due to the apparent brittleness of "logical causation".

Correlation and kindness also have an important nonlinear interaction, which is often discussed under the heading of “evidential cooperation in large worlds” or ECL.

This is not how I would characterize ECL. Rather, ECL is about correlation + caring about what happens in your opponent's universe, i.e. not specifically about the welfare/life of your opponent.
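To make the toy model I have in mind concrete (a minimal sketch with illustrative symbols, not notation from the ECL paper): let $c$ be the cost to you, in your own values, of promoting the other agent's values in your universe; let $g$ be the gain to your values if they do likewise in theirs; and let $p$ be the evidential boost your cooperation gives to theirs, i.e. $P(\text{they cooperate} \mid \text{you cooperate}) - P(\text{they cooperate} \mid \text{you defect})$. On an EDT-style calculation, cooperating beats defecting roughly when $p \cdot g > c$. Note that this can hold even if neither agent assigns any value to the other's life or welfare per se; all that matters is what gets instantiated in the other's universe.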

  1. ^

    Because updatelessness can arguably increase the game-theoretic symmetry of many kinds of interactions, which is exactly what is needed to get EDT to cooperate.

Related: A bargaining-theoretic approach to moral uncertainty by Greaves and Cotton-Barratt. Section 6 is especially interesting, where they highlight a problem with the Nash approach; namely, that the NBS is sensitive to whether (sub-)agents bargain over all decision problems (those they currently face and those they expect to face with nonzero probability) simultaneously, or whether each bargaining problem is treated separately and solved one at a time.

In the 'grand-world' model, (sub-)agents can bargain across situations with differing stakes and prima facie reach mutually beneficial compromises, but it is not very practical (as the authors note) and would perhaps depend too much on the priors in question (just as with updatelessness). In the 'small-world' model, on the other hand, you avoid the impracticality, but you miss out on a lot of compromises.
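To make the contrast concrete (a rough sketch in standard NBS notation, not the authors' own formalism): for a single problem, the NBS picks the feasible outcome $o$ maximizing the Nash product $\prod_i (u_i(o) - d_i)$, where $d_i$ is sub-agent $i$'s disagreement payoff (itself a modelling choice). In the small-world model you solve $\max_{o_k} \prod_i (u_i(o_k) - d_{i,k})$ separately for each problem $k$; in the grand-world model you solve a single problem over joint policies, $\max_{(o_1, \ldots, o_K)} \prod_i \big(\sum_k p_k\, u_i(o_k) - d_i\big)$, with $p_k$ the probability of facing problem $k$. Since the Nash product is not separable across problems, these generally pick out different outcomes: the grand-world solution can trade a loss in one problem for a larger gain in another, whereas the small-world solution cannot.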
 

Now, let's pretend you are an egalitarian. You still want to satisfy everyone's goals, and so you go behind the veil of ignorance, and forget who you are. The difference is that now you are not trying to maximize expected expected utility, and instead are trying to maximize worst-case expected utility.

Nitpick: I think this is a somewhat controversial and nonstandard definition of egalitarianism. Rather, this is the decision theory underlying Rawls' 'justice as fairness'; and, yes, Rawls claimed that his theory was egalitarian (if I remember correctly), but this has come under much scrutiny. See Egalitarianism against the Veil of Ignorance by Roemer, for example.
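For concreteness, the contrast I have in mind (a rough sketch, not the post's own notation): behind the veil, the Harsanyi-style rule picks the action $a$ maximizing the identity-weighted expectation $\frac{1}{n} \sum_i \mathbb{E}[u_i \mid a]$, whereas the Rawlsian maximin rule picks the $a$ maximizing $\min_i \mathbb{E}[u_i \mid a]$. Egalitarianism, at least on readings like Roemer's, is instead concerned with the distribution of the $\mathbb{E}[u_i \mid a]$ themselves, e.g. penalizing inequality between them, and neither rule is committed to that.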

I agree that the latter two examples have Moorean vibes, but I don't think they can, strictly speaking, be classified as such (especially the last one). (Perhaps you are not saying this?) They could just be understood as instances of modus tollens, where the irrationality is not that they recognize that their belief has a non-epistemic generator, but rather that they have an absurdly high credence in the relevant premise, i.e. "my parents wouldn't be wrong" and "philosophers could/should not be out of jobs".

The same holds if Alice is confident in Bob's relevant conditional behavior for some other reason, but can't literally view Bob's source code. Alice evaluates counterfactuals based on "how would Bob behave if I do X? what about if I do Y?", since those are the differences that can affect utility; knowing the details of Bob's algorithm doesn't matter if those details are screened off by Bob's functional behavior.

Hm. What kind of dependence is involved here? Doesn't seem like a case of subjunctive dependence as defined in the FDT papers; the two algorithms are not related in any way beyond that they happen to be correlated.

Alice evaluates counterfactuals based on "how would Bob behave if I do X? what about if I do Y?", since those are the differences that can affect utility...

Sure, but so do all agents that subscribe to suppositional decision theories. The whole DT debate is about what that means.

I'm not claiming this (again, it's about relative not absolute likelihood).

I'm confused. I was comparing the likelihood of (3) to the likelihood of (1) and (2); i.e. saying something about relative likelihood, no?

I'm not saying this is likely, just that this is the most plausible path I see by which UDT leads to nice things for us.

I meant for my main argument to be directed at the claim of relative likelihood; sorry if that was not clear. So I guess my question is: do you think the updatelessness-based trade you described is the most plausible type of acausal trade out of the three that I listed? As I said, ECL and simulation-based trade arguably require far fewer assumptions about decision theory. To get ECL off the ground, for example, you arguably just need your decision theory to cooperate in the Twin PD, and many theories satisfy this criterion.

(And the topic of this post is how decision theory leads us to have nice things, not UDT specifically. Or at least I think it should be; I don't think one ought to be so confident that UDT/FDT is clearly the "correct" theory [not saying this is what you believe], especially given how underdeveloped it is compared to the alternatives.)

I had something like the following in mind: you are playing the PD against someone implementing "AlienDT", which you know nothing about except that (i) it is a completely different algorithm from the one you are implementing, and (ii) it nonetheless outputs the same action/policy as your algorithm with some high probability (say 0.9) in any given decision problem.

It seems to me that you should definitely cooperate in this case, but I have no idea how logi-causalist decision theories are supposed to arrive at that conclusion (if at all).
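To spell out why cooperation seems clearly right to me here (a back-of-the-envelope EDT-style calculation with made-up payoffs): take a standard PD where mutual cooperation pays 3, mutual defection 1, the temptation payoff is 5 and the sucker's payoff 0, and suppose, per the setup, that AlienDT matches your action with probability 0.9 whichever action you take. Then $\mathbb{E}[u \mid C] = 0.9 \cdot 3 + 0.1 \cdot 0 = 2.7$ and $\mathbb{E}[u \mid D] = 0.9 \cdot 1 + 0.1 \cdot 5 = 1.4$, so cooperating wins comfortably on the evidential calculation, even though, by stipulation, there is no algorithmic relationship between the two programs for a story about logical causation to latch onto.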

What's your take on playing a PD against someone who is implementing a different decision algorithm from the one you are implementing, albeit one that is strongly (logically) correlated with yours in terms of outputs?
