I think an important piece that's missing here is that LW simply assumes that certain answers to important questions are correct. It's not just that there are social norms that say it's OK to dismiss ideas as stupid if you think they're stupid, it's that there's a rough consensus on which ideas are stupid.
LW has a widespread consensus on Bayesian epistemology, physicalist metaphysics, and consequentialist ethics (not an exhaustive list). And it has good reasons for favoring these positions, but I don't think LW has great responses to all the arguments against these positions. Neither do the alternative positions have great responses to counterarguments from the LW-favored positions.
Analytic philosophy in the academy is stuck with a mess of incompatible views, and philosophers only occasionally succeed in organizing themselves into clusters that share answers to a wide range of fundamental questions.
And they have another problem stemming from the incentives in publishing. Since academic philosophers want citations, there's an advantage to making arguments that don't rely on particular answers to questions where there isn't widespread agreement. Philosophers of science will often avoid invoking causation, for instance, since not everyone believes in it. It takes more work to argue in that fashion, and it constrains what sorts of conclusions you can arrive at.
The obvious pitfalls of organizing around a consensus on the answers to unsolved problems are obvious.
I also had trouble with the notation. Here's how I've come to understand it:
Suppose I want to know whether the first person to drive a car was wearing shoes, just socks, or no footwear at all when they did so. I don't know what the truth is, so I represent it with a random variable H, which could be any of "the driver wore shoes," "the driver wore socks," or "the driver was barefoot."
This means that P(H) is a random variable equal to the probability I assign to the true hypothesis (it's random because I don't know which hypothesis is true). It's distinct from P(H = "the driver wore shoes") and P(the driver wore shoes), which are both the same constant, non-random value, namely the credence I have in the specific hypothesis (i.e. "the driver wore shoes").
(P(H = "the driver wore shoes") is roughly "the credence I have that 'the driver wore shoes' is true," while P(the driver wore shoes) is "the credence I have that the driver wore shoes," so they're equal, and semantically equivalent if you're a deflationist about truth.)
Now suppose I find the driver's great-great-granddaughter on Discord, and I ask her what she thinks her great-great-grandfather wore on his feet when he drove the car for the first time. I don't know what her response will be, so I denote it with the random variable D. Then P(H | D) is the credence I assign to the correct hypothesis after I hear whatever she has to say.
So E[P(the driver wore shoes | D)] = P(the driver wore shoes), which is equivalent to E[P(H = "the driver wore shoes" | D)] = P(H = "the driver wore shoes"), means "I shouldn't expect my credence in 'the driver wore shoes' to change after I hear the great-great-granddaughter's response," while E[P(H | D)] ≥ E[P(H)] means "I should expect my credence in whatever is the correct hypothesis about the driver's footwear to increase when I get the great-great-granddaughter's response."
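To make the distinction concrete, here's a minimal sketch in Python. The priors and the granddaughter's reliability are numbers I made up for illustration; they're not from the article.

```python
# Toy model of the footwear example (all numbers made up).
prior = {"shoes": 0.5, "socks": 0.3, "barefoot": 0.2}  # P(H = h)

# Hypothetical likelihoods P(D = d | H = h): the great-great-granddaughter
# is assumed to be informative but not perfectly reliable.
likelihood = {
    "shoes":    {"says shoes": 0.7, "says socks": 0.2, "says barefoot": 0.1},
    "socks":    {"says shoes": 0.2, "says socks": 0.6, "says barefoot": 0.2},
    "barefoot": {"says shoes": 0.1, "says socks": 0.2, "says barefoot": 0.7},
}
responses = ["says shoes", "says socks", "says barefoot"]

# Marginal P(D = d)
p_d = {d: sum(prior[h] * likelihood[h][d] for h in prior) for d in responses}

def posterior(h, d):
    """P(H = h | D = d) by Bayes' rule."""
    return prior[h] * likelihood[h][d] / p_d[d]

# E[P(H)]: averaging over which hypothesis is actually true,
# the prior credence I assign to the true hypothesis.
expected_prior_on_truth = sum(prior[h] * prior[h] for h in prior)

# E[P(H | D)]: averaging over the true hypothesis and her response,
# the posterior credence I assign to the true hypothesis afterward.
expected_posterior_on_truth = sum(
    prior[h] * likelihood[h][d] * posterior(h, d)
    for h in prior
    for d in responses
)

# Conservation of expected evidence for one fixed hypothesis:
# E[P(H = "shoes" | D)] should equal the prior P(H = "shoes") = 0.5.
expected_posterior_shoes = sum(p_d[d] * posterior("shoes", d) for d in responses)

print(expected_posterior_shoes)     # ≈ 0.50, unchanged from the prior
print(expected_prior_on_truth)      # ≈ 0.38
print(expected_posterior_on_truth)  # ≈ 0.53, higher, as the theorem says
```

With these made-up numbers, E[P(the driver wore shoes | D)] stays at 0.5 (conservation of expected evidence), while the expected credence in whichever hypothesis is actually true rises from about 0.38 to about 0.53.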
I think there are two sources of confusion here. First, H was not explicitly defined as "the true hypothesis" in the article. I had to infer that from the English translation of the inequality,
In English the theorem says that the probability we should expect to assign to the true value of H after observing the true value of D is greater than or equal to the expected probability we assign to the true value of H before observing the value of D,
and confirm with the author in private. Second, I remember seeing my probability theory professor use sloppy shorthand, and I initially interpreted P(H) as a sloppy shorthand for P(H = h), with H standing for an arbitrary fixed hypothesis rather than the true one. Neither of these would have been a problem if I were more familiar with this area of study, but many people are less familiar than I am.
I think there's some ambiguity in your phrasing and that might explain gjm's disagreement:
You seem to value the (psychological factor of having debt) at zero.
Or
You seem to value the psychological factor of (having debt at zero).
These two ways of parsing it have opposite meanings. I think you mean the former but I initially read it as the latter, and reading gjm's initial comment, I think they also read it as the latter.
I'm attracted to viewing these moral intuitions as stemming from intuitions about property because the psychological notion of property biologically predates the notion of morality. Territorial behaviors are found in all kinds of different mammals, and prima facie the notion of property seems to be derived from such behaviors. The claim, then, is that during human evolution, moral psychology developed in part by coopting the psychology of territory.
I'm skeptical that anything normative follows from this though.
Are there plans to support email notifications? Having to poll the notification tray to check for replies to posts and comments is not ideal.
What happens the next time the same thing happens? Am I, Bob, supposed to just “accept reality” no matter how many times Alice messes up and does a thing that harms or inconveniences me, and does Alice owe me absolutely nothing for her mistakes?
If Alice has, to use the phrase I used originally, "acquired a universal sense of duty," then the hope is that the same thing is less likely to happen again. Alice doesn't need to feel guilty or at fault for her actions; she just acknowledges that the outcome was undesirable, and that she should try to adjust her future behavior in such a way as to make similar situations less likely to arise. Bob, similarly, tries to adjust his future behavior to make similar situations less likely to arise (for example, by giving Alice a written reminder of what she was supposed to get at the store).
The notion of "fault" is an oversimplification. Both Alice's and Bob's behavior contributed to the undesirable outcome, it's just that Alice's behavior (misremembering what she was supposed to buy) is socially-agreed to be blameworthy and Bob's behavior (not giving Alice a written reminder) is socially-agreed to be perfectly OK. We could have different norms, and then the blame might fall on Bob for expecting Alice to remember something without writing it down for her. I think that would be a worse norm, but that's not important; the norm that we have isn't optimal because it blinds Bob to the fact that he also has the power to reduce the chance of the bad outcome repeating itself.
HWA addresses this, but not without introducing other flaws. Our norms of guilt and blame are better at compelling people to change their behavior. HWA relies on people caring about and having the motivation to prevent repeat bad outcomes purely for the sake of preventing them; guilt and blame give people an external interest and motivation to do so.
I think philh is using it in the first way you described, just while honoring the fact that potential future deals factor into how desirable a deal is for each party. We do this implicitly all the time when money is involved: coming away from a deal with more money is desirable only because that money makes the expected outcomes of future deals more desirable. That's intuitive because it's baked into the concept of money, but the same consideration can apply in different ways.
Acknowledging this, we have to consider the strategic advantages that each party has as assets at play in the deal. These are usually left implicit and not obvious. So in the case of re-opening Platform 3, the party in favor of making the platform accessible has a strategic advantage if no deal is made, but loses that advantage if the proposed deal is made. The proposed deal, therefore, is not a Pareto improvement compared to not making a deal.
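As a toy illustration (numbers entirely made up): count each side's leverage as one of the assets at stake, and suppose the accessibility side values no deal at 2 (it keeps its leverage for a better future deal) and the proposed deal at 1, while the other party values no deal at 0 and the proposed deal at 3. The deal raises the total from 2 to 4, but it leaves the accessibility side strictly worse off than no deal, so once the foregone leverage is priced in it isn't a Pareto improvement, even though it would look like one if that leverage were ignored.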
I think I more or less try to live my life along the lines of HWA, and it seems to go well for me, but I wonder if that says more about the people I choose to associate with than about the inherent goodness of the attitude. HWA works when people are committed to making things go better in the future regardless of whose fault it is. But not everyone thinks that way all the time. Some people haven't acquired a universal sense of duty; they only feel duty when they attribute blame to themselves, and they feel a grudging sense of unfairness if asked to care about fixing something that isn't their fault. HWA would not work for them, unless they always understood it to mean "the other person's responsibility" and became moral freeloaders.
Even among those better suited to HWA, I think it's still less than ideal, because it suppresses consensus-building. People will inevitably still think about whose fault something is, but once someone utters "HWA," they won't share their assessments. When people truly honor the spirit of HWA this won't matter, because they won't ascribe much significance to guilt and innocence. But the stories we tell about our lives are structured around the institutions of guilt and innocence, and by imposing a barrier to sharing our stories with one another, we each come to live our own story, which I fear is what tears communities and societies apart. HWA may be good for friendships, but I'm not sure it's good at larger scales of human interaction.
I'm not sure at this point what my goal was with this post; it would be too easy to fall into motivated reasoning after this back-and-forth. So I agree with you that my post fails to give evidence for "consciousness can be based on person-slices"; I just don't know if I ever intended to argue for that positive conclusion.
I do think that person-slices are entirely plausible, and a very useful analytical tool, as Parfit found. I have other thoughts on consciousness which assume person-slices are a coherent concept. If this post is enough to keep the burden of proof for the existence of person-slices from falling clearly on me, then it has served a useful purpose.
***
By the way, I did give a positive account of the existence of person-slices, comparing the notion of a person-slice to something that we more readily accept exists:
What would it be like to be a person-slice? This seems to me to be analogous to asking “how can we observe a snapshot of an electron in time?” We can’t! Observation can only be done over an interval of time, but just because we can’t observe electron-slices doesn’t mean that we shouldn’t expect to be able to observe electrons over time, nor does the fact that we can observe electrons over time suggest that electron-slices are a nonsensical concept. Likewise, if there’s nothing it’s like to be a person-slice, that doesn’t mean that person-slices are nonsense.
I worry that this doesn't really end up explaining much. We think that our answers to philosophical questions are better than what analytic philosophers have come up with. Why? Because they seem intuitively to be better answers. What explanation do we posit for why our answers are better? Because we start out with better intuitions.
Of course our intuitions might in fact be better, as I (intuitively) think they are. But that explanation is profoundly underwhelming.
I'm not sure what you mean here, but maybe we're getting at the same thing. Having some explanation for why we might expect our intuitions to be better would make this argument more substantive. I'm sure that anyone can give explanations for why their intuitions are more likely to be right, but it's at least more constraining. Some possibilities:
I'm not confident that any of these are good explanations, but they illustrate the sort of shape of explanation that I think would be needed to give a useful answer to the question posed in the article.