Sylvester Kollin

Comments

Regarding Sleeping Counterfact: there seem to be two updates you could make, and thus there should be conceptual space for two interesting ways of being updateless in this problem; you could be 'anthropically updateless', i.e., not update on your existence in the standard Thirder way, and you could also be updateless with respect to the researchers asking for money (just as in counterfactual mugging). And it seems like these two variants will make different recommendations.

Suppose you make the first update, but not the second. Then the evidentialist value of paying up would plausibly be .

Suppose, on the other hand, that you are updateless with respect to both facts. Then the evidentialist value of paying up would be .

Interesting! Did thinking about those variants make you update your credences in SIA/SSA (or something else)?

No, not really! This was mostly just for fun.

My follow-up question for almost all of them, though, is based on the use of the word "should" in the question. Since it presumably is not any moral version of "should", it's presumably a meaning in the direction of "best achieves a desired outcome".

The 'should' only designates what you think epistemic rationality requires of you in the situation. That might be something consequentialist (which is what I think you mean by "best achieves a desired outcome"), like maximizing accuracy[1], but it need not be; you could think there are other norms.[2]

To see why epistemic consequentialism might not be the whole story, consider the following case from Greaves (2013) where the agent seemingly maximises accuracy by ignoring evidence and believing an obviously false thing.

Imps. Emily is taking a walk through the Garden of Epistemic Imps. A child plays on the grass in front of her. In a nearby summerhouse are n further children, each of whom may or may not come out to play in a minute. They are able to read Emily's mind, and their algorithm for deciding whether to play outdoors is as follows. If she forms degree of belief 0 that there is now a child before her, they will come out to play. If she forms degree of belief 1 that there is a child before her, they will roll a fair die, and come out to play iff the outcome is an even number. More generally, the summerhouse children will play with chance (1 − q/2), where q is the degree of belief Emily adopts in the proposition E0 that there is now a child before her. Emily's epistemic decision is the choice of credences in the proposition E0 that there is now a child before her, and, for each j = 1, …, n, the proposition Ej that the jth summerhouse child will be outdoors in a few minutes' time.
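The tension in the case can be checked numerically. The sketch below (my own illustration, assuming a Brier-style measure of inaccuracy and taking it as given that the child in front of Emily is real) computes expected total inaccuracy: credence q in "there is a child before me" costs (1 − q)², each summerhouse child plays with chance c = 1 − q/2, and the accuracy-optimal credence in each "child j plays" proposition is c itself, which carries expected Brier penalty c(1 − c).

```python
# Expected total Brier inaccuracy in the "Imps" case (a sketch, not
# Greaves' own calculation). E0 ("a child is before me") is true.

def expected_inaccuracy(q, n):
    """q: credence in E0; n: number of summerhouse children."""
    c = 1 - q / 2                      # chance each summerhouse child plays
    return (1 - q) ** 2 + n * c * (1 - c)

n = 10
believe_your_eyes = expected_inaccuracy(1.0, n)   # q = 1: respects evidence
deny_the_child = expected_inaccuracy(0.0, n)      # q = 0: ignores evidence
print(believe_your_eyes, deny_the_child)          # 2.5 vs 1.0
```

For n = 10, fully disbelieving the obvious truth yields lower expected inaccuracy (1.0 vs 2.5), which is exactly the sense in which accuracy maximization seems to license ignoring evidence here.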

See Konek and Levinstein (2019) for a good discussion, though.

If I give the same answer twice based on the same information, is that scored differently from giving that answer once?

Once again, this depends on your preferred view of epistemic rationality, and specifically how you want to formulate the accuracy-first perspective. Whether you want to maximize individual, average or total accuracy is up to you! The problems formulated here are supposed to be agnostic with regard to such things; indeed, these are the types of discussions one wants to motivate by formulating philosophical dilemmas.

  1. ^

    This is plausibly cashed out by tying your epistemic utility function to a proper scoring rule, e.g. the Brier score.

  2. ^

    See e.g. Sylvan (2020) for a discussion of what non-consequentialism might look like in the general, non-anthropic, case.

Ah, okay, got it. Sorry about the confusion. That description seems right to me, fwiw.

Thanks for clarifying. I still don't think this is exactly what people usually mean by ECL, but perhaps it's not super important what words we use. (I think the issue is that your model of the acausal interaction—i.e. a PD with survival on the line—is different to the toy model of ECL I have in my head where cooperation consists in benefitting the values of the other player [without regard for their life per se]. As I understand it, this is essentially the principal model used in the original ECL paper as well.)

The effective correlation is likely to be (much) larger for someone using UDT.

Could you say more about why you think this? (Or, have you written about this somewhere else?) I think I agree if by "UDT" you mean something like "EDT + updatelessness"[1]; but if you are essentially equating UDT with FDT, I would expect the "correlation"/"logi-causal effect on your opponent" to be pretty minor in practice due to the apparent brittleness of "logical causation".

Correlation and kindness also have an important nonlinear interaction, which is often discussed under the heading of “evidential cooperation in large worlds” or ECL.

This is not how I would characterize ECL. Rather, ECL is about correlation + caring about what happens in your opponent's universe, i.e. not specifically about the welfare/life of your opponent.

  1. ^

    Because updatelessness can arguably increase the game-theoretic symmetry of many kinds of interactions, which is exactly what is needed to get EDT to cooperate.
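The footnote's point can be made concrete with a toy calculation (my own hypothetical payoffs, standard PD ordering T > R > P > S). If r is the agent's credence that the opponent's choice matches her own, EDT cooperates exactly when the evidential expected value of cooperating exceeds that of defecting, which happens once r is high enough:

```python
# EDT in a one-shot Prisoner's Dilemma with correlation r = P(opponent's
# action matches mine). Illustrative payoffs; not anyone's canonical model.

R, S, T, P = 3, 0, 5, 1   # reward, sucker, temptation, punishment

def edt_values(r):
    ev_cooperate = r * R + (1 - r) * S   # P(opponent C | I cooperate) = r
    ev_defect = r * P + (1 - r) * T      # P(opponent D | I defect) = r
    return ev_cooperate, ev_defect

for r in (0.5, 0.9):
    c, d = edt_values(r)
    print(r, "cooperate" if c > d else "defect")
# low correlation -> defect; high correlation -> cooperate
```

On these payoffs the switch happens at r = (T − S) / (T − S + R − P) ≈ 0.71, so anything (such as updatelessness) that raises the effective symmetry of the interaction can tip EDT into cooperation.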

Related: A bargaining-theoretic approach to moral uncertainty by Greaves and Cotton-Barratt. Section 6 is especially interesting, where they highlight a problem with the Nash approach; namely, that the NBS is sensitive to whether (sub-)agents bargain over all decision problems (which they are currently facing and think they will face with nonzero probability) simultaneously, or whether each bargaining problem is treated separately and solved on its own—one at a time.

In the 'grand-world' model, (sub-)agents can bargain across situations with differing stakes and prima facie reach mutually beneficial compromises, but it's not very practical (as the authors note) and would perhaps depend too much on the priors in question (just as with updatelessness). In the 'small-world' model, on the other hand, you don't have problems of impracticality and so on, but you will miss out on a lot of compromises. 
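A toy example (my own hypothetical payoffs, not the paper's) of the cross-problem compromises at stake: two problems with mirrored stakes, disagreement point (0, 0), and the NBS taken as maximizing the product of utility gains. Solved separately, each problem gets split down the middle; solved jointly, the agents can trade problems outright and both do better:

```python
# Grand-world vs small-world Nash bargaining over two decision problems.
# Problem 1: prob p on option a -> utilities (p, 0.4 * (1 - p)).
# Problem 2: prob q on option c -> utilities (0.4 * q, 1 - q).
# Coarse grid search for the Nash product (disagreement point (0, 0)).

STEPS = 100

def argmax_nash(points):
    return max(points, key=lambda u: u[0] * u[1])

# Small-world: solve each problem on its own.
sol1 = argmax_nash([(p / STEPS, 0.4 * (1 - p / STEPS))
                    for p in range(STEPS + 1)])
sol2 = argmax_nash([(0.4 * (q / STEPS), 1 - q / STEPS)
                    for q in range(STEPS + 1)])
separate = (sol1[0] + sol2[0], sol1[1] + sol2[1])

# Grand-world: bargain over both problems at once.
joint = argmax_nash([(p / STEPS + 0.4 * (q / STEPS),
                      0.4 * (1 - p / STEPS) + (1 - q / STEPS))
                     for p in range(STEPS + 1) for q in range(STEPS + 1)])

print(separate)  # roughly (0.7, 0.7): each problem split 50/50
print(joint)     # roughly (1.0, 1.0): agent 1 takes problem 1, agent 2 problem 2
```

The joint solution Pareto-dominates the separate one, illustrating why the grand-world model can reach compromises the small-world model misses—at the cost of the practicality and prior-sensitivity worries noted above.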
 

Now, let's pretend you are an egalitarian. You still want to satisfy everyone's goals, and so you go behind the veil of ignorance, and forget who you are. The difference is that now you are not trying to maximize expected expected utility, and instead are trying to maximize worst-case expected utility.

Nitpick: I think this is a somewhat controversial and nonstandard definition of egalitarianism. Rather, this is the decision theory underlying Rawls' 'justice as fairness'; and, yes, Rawls claimed that his theory was egalitarian (if I remember correctly), but this has come under much scrutiny. See Egalitarianism against the Veil of Ignorance by Roemer, for example.
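For concreteness, here is a minimal sketch (utilities invented for illustration) of how the two veil-of-ignorance rules in question come apart: maximizing average expected utility vs the Rawlsian maximin rule of maximizing worst-case expected utility.

```python
# Two candidate policies, utilities for three positions behind the veil.
policies = {
    "equal": (5, 5, 5),      # modest amount for everyone
    "unequal": (1, 9, 9),    # higher average, but the worst-off does badly
}

best_average = max(policies, key=lambda p: sum(policies[p]) / len(policies[p]))
best_maximin = max(policies, key=lambda p: min(policies[p]))

print(best_average)  # 'unequal': average 19/3 beats 5
print(best_maximin)  # 'equal': worst-off gets 5 rather than 1
```

The rules recommend different policies, which is precisely why which rule deserves the label "egalitarian" is contested.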

I agree that the latter two examples have Moorean vibes, but I don't think they strictly speaking can be classified as such (especially the last one). (Perhaps you are not saying this?) They could just be understood as instances of modus tollens, where the irrationality is not that they recognize that their belief has a non-epistemic generator, but rather that they have an absurdly high credence in the relevant premise, i.e. "my parents wouldn't be wrong" and "philosophers could/should not be out of jobs".

The same holds if Alice is confident in Bob's relevant conditional behavior for some other reason, but can't literally view Bob's source code. Alice evaluates counterfactuals based on "how would Bob behave if I do X? what about if I do Y?", since those are the differences that can affect utility; knowing the details of Bob's algorithm doesn't matter if those details are screened off by Bob's functional behavior.

Hm. What kind of dependence is involved here? Doesn't seem like a case of subjunctive dependence as defined in the FDT papers; the two algorithms are not related in any way beyond that they happen to be correlated.

Alice evaluates counterfactuals based on "how would Bob behave if I do X? what about if I do Y?", since those are the differences that can affect utility...

Sure, but so do all agents that subscribe to standard decision theories. The whole DT debate is about what that means.
