lackofcheese
lackofcheese has not written any posts yet.

I don't think that's entirely correct; SSA, for example, is a halfer position and it does exclude worlds where you don't exist, as do many other anthropic approaches.
Personally I'm generally skeptical of averaging over agents in any utility function.
You definitely don't have a 50% chance of dying in the sense of "experiencing dying". In the sense of "ceasing to exist" I guess you could argue for it, but I think that it's much more reasonable to say that both past selves continue to exist as a single future self.
Regardless, this stuff may be confusing, but it's entirely conceivable that with the correct theory of personal identity we would have a single correct answer to each of these questions.
OK, the "you cause 1/10 of the policy to happen" argument is intuitively reasonable, but under that kind of argument divided responsibility has nothing to do with how many agents are subjectively indistinguishable and instead has to do with the agents who actually participate in the linked decision.
On those grounds, "divided responsibility" would give the right answer in Psy-Kosh's non-anthropic problem. However, this also means your argument that SIA+divided = SSA+total clearly fails, because of the example I just gave before, and because SSA+total gives the wrong answer in Psy-Kosh's non-anthropic problem but SIA+divided does not.
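For reference, here is how the disputed SIA+divided = SSA+total equivalence plays out in the standard incubator Sleeping Beauty case, under assumptions added here purely for concreteness: each created copy is offered a ticket costing c that pays 1 iff the coin landed tails (two copies created), and the copies' decisions are linked.
\[
\text{SIA + divided:}\quad \tfrac{2}{3}\cdot\tfrac{1}{2}\cdot 2(1-c) \;+\; \tfrac{1}{3}\cdot(-c) \;=\; \tfrac{2}{3}-c \;\ge\; 0 \;\iff\; c \le \tfrac{2}{3}
\]
\[
\text{SSA + total:}\quad \tfrac{1}{2}\cdot 2(1-c) \;+\; \tfrac{1}{2}\cdot(-c) \;=\; 1-\tfrac{3}{2}c \;\ge\; 0 \;\iff\; c \le \tfrac{2}{3}
\]
The two rules agree on the buying cutoff in that case; the point here is that the agreement breaks down once "divided responsibility" tracks the participants in a linked decision rather than the subjectively indistinguishable agents.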
Ah, subjective anticipation... That's an interesting question. I often wonder whether it's meaningful.
As do I. But, as Manfred has said, I don't think that being confused about it is sufficient reason to believe it's meaningless.
As I mentioned earlier, it's not an argument against halfers in general; it's against halfers with a specific kind of utility function, which sounds like this: "In any possible world I value only my own current and future subjective happiness, averaged over all of the subjectively indistinguishable people who could equally be 'me' right now."
In the above scenario, there is a 1/2 chance that both Jack and Roger will be created, a 1/4 chance of only Jack, and a 1/4 chance of only Roger.
Before finding out who you are, averaging would lead to a 1:1 odds ratio, and so (as you've agreed) to a cutoff of 1/2.
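To make that pre-identification step concrete (the payout structure isn't spelled out in the excerpt, so assume each person is offered a ticket costing c that pays 1 iff both Jack and Roger were created, with both copies following the same policy): the 1:1 odds ratio is P(both) : P(only one) = 1/2 : (1/4 + 1/4), and averaging over the two indistinguishable copies in the "both" world gives
\[
\tfrac{1}{2}(1-c) \;+\; \tfrac{1}{4}(-c) \;+\; \tfrac{1}{4}(-c) \;=\; \tfrac{1}{2}-c \;\ge\; 0 \;\iff\; c \le \tfrac{1}{2},
\]
i.e. the stated cutoff of 1/2.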
After finding out whether... (read more)
Linked decisions are also what make the halfer paradox go away.
I don't think linked decisions make the halfer paradox I brought up go away. Any counterintuitive decisions you make under UDT are simply ones where you make a gain in counterfactual possible worlds at the cost of a loss in actual possible worlds. However, in the instance above you're losing both in the real scenario in which you're Jack, and in the counterfactual one in which you turned out to be Roger.
Granted, the "halfer" paradox I raised is an argument against having a specific kind of indexical utility function (selfish utility w/ averaging over subjectively indistinguishable agents) rather than... (read more)
But SIA also has some issues with order of information, though it's connected with decisions.
Can you illustrate how the order of information matters there? As far as I can tell it doesn't, and hence it's just an issue with failing to consider counterfactual utility, which SIA ignores by default. It's definitely a relevant criticism of using anthropic probabilities in your decisions, because failing to consider counterfactual utility results in dynamic inconsistency, but I don't think it's as strong as the associated criticism of SSA. ... (read more)
Anyway, if your reference class consists of people who have seen "this is not room X", then "divided responsibility" is no longer 1/3, and you probably have to go
That's not true. The SSA agents are only told about the conditions of the experiment after they're created and have already opened their eyes.
Consequently, isn't it equally valid for me to begin the SSA probability calculation with those two agents already excluded from my reference class?
Doesn't this mean that SSA probabilities are not uniquely defined given the same information, because they depend upon the order in which that information is incorporated?
I think that argument is highly suspect, primarily because I see no reason why a notion of "responsibility" should have any bearing on your decision theory. Decision theory is about achieving your goals, not about avoiding blame for failing to achieve them.
However, even if we assume that we do include some notion of responsibility, I think that your argument is still incorrect. Consider this version of the incubator Sleeping Beauty problem, where two coins are flipped.
HH => Sleeping Beauties created in Rooms 1, 2, and 3
HT => Sleeping Beauty created in Room 1
TH => Sleeping Beauty created in Room 2
TT => Sleeping Beauty created in Room 3
Moreover, in each room there is a sign. In Room... (read more)
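Since the rest of the comment is cut off, here is a minimal sketch (in Python, with names chosen here purely for illustration) that enumerates only the visible part of the setup: the four equally likely flip patterns, the observer count and 1/n responsibility share in each, and the resulting SSA vs SIA probability of HH for a Beauty who hasn't yet looked at her sign.

    from fractions import Fraction

    # Enumerate the two-coin incubator variant described above; each flip
    # pattern is equally likely, and the value is the set of rooms in which
    # a Sleeping Beauty gets created. (The sign details are omitted, since
    # that part of the comment is cut off.)
    outcomes = {
        "HH": {1, 2, 3},
        "HT": {1},
        "TH": {2},
        "TT": {3},
    }
    prior = Fraction(1, 4)

    # Observer count and the 1/n "divided responsibility" share per outcome.
    for name, rooms in outcomes.items():
        print(name, len(rooms), Fraction(1, len(rooms)))

    # From the viewpoint of a created Beauty who hasn't looked at her sign:
    # SSA: some Beauty exists in every outcome, so there is no anthropic
    # update and P(HH) keeps its prior value of 1/4.
    p_hh_ssa = prior
    # SIA: weight each outcome by its observer count, then renormalise,
    # which gives P(HH) = (3/4) / (3/2) = 1/2.
    weights = {name: prior * len(rooms) for name, rooms in outcomes.items()}
    p_hh_sia = weights["HH"] / sum(weights.values())
    print(p_hh_ssa, p_hh_sia)

Under the visible setup, SSA leaves P(HH) at its prior of 1/4 while SIA raises it to 1/2, and the responsibility share is 1/3 only in the HH outcome.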
There's no "should" - this is a value set.
The "should" comes in giving an argument for why a human rather than just a hypothetically constructed agent might actually reason in that way. The "closest continuer" approach makes at least some intuitive sense, though, so I guess that's a fair justification.
The halfer is only being strange because they seem to be using naive CDT. You could construct a similar paradox for a thirder if you assume the ticket pays out only for the other copy, not themselves.
I think there's more to it than that. Yes, UDT-like reasoning gives a general answer, but under UDT the halfer is still definitely acting strange in a... (read more)
I think there are some rather significant assumptions underlying the idea that they are "non-relevant". At the very least, if the agents were distinguishable, I think you should indeed be willing to pay to make n higher. On the other hand, if they're indistinguishable then it's a more difficult question, but the anthropic averaging I suggested in my previous comments leads to absurd results.
What's your proposal here?