Multiverse-Wide Preference Utilitarianism
Summary

Some preference utilitarians care about the satisfaction of preferences even when the organism holding the preference doesn't know it has been satisfied. These preference utilitarians should care to some degree about the preferences that people in other branches of our multiverse have regarding our own world, as well as the preferences of aliens regarding our world. In general, this suggests giving relatively more weight to tastes and values that we expect to be more universal among civilizations across the multiverse. This consideration is strongest for aesthetic preferences about inanimate objects and weaker for preferences about organisms that themselves have experiences.

Introduction

Classical utilitarianism aims to maximize the balance of happiness over suffering for all organisms. Preference utilitarianism focuses on the fulfillment vs. frustration of preferences, rather than just on hedonic experiences. So, for example, if someone has a preference for his house to go to his granddaughter after his death, then it would frustrate his preference if it instead went to his grandson, even though he wouldn't be around to experience negative emotions due to his preference being thwarted.

Non-hedonic preferences

In practice, most of people's preferences concern their own hedonic wellbeing. Some also concern the wellbeing of their children and friends, although these preferences are often manifested through direct happiness or suffering in oneself (e.g., being on the edge of your seat with anxiety when your 14-year-old daughter hasn't come home by midnight). However, some preferences extend beyond one's own hedonic experience. This is true of preferences about how the world will be after one dies, or about whether the money you donated to a charity actually gets used well even if you would never find out either way. It's true of many moral convictions. For instance, I want to actually reduce expected suffering, not merely believe that I have.
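As a toy sketch of the weighting idea from the summary (the notation here is my own, introduced only for illustration), suppose our utility function gives each outside observer's preference about our world some small weight epsilon, and that a fraction p_i of agents across the multiverse holds preference type i about worlds like ours. Then the multiverse-inclusive value of a local action a might look like

\[
U(a) = U_{\text{local}}(a) + \epsilon \sum_i p_i \, v_i(a),
\]

where v_i(a) measures how well action a satisfies preference type i. Because p_i multiplies the correction term, more universal preferences automatically receive more weight, which is the intuition behind favoring tastes and values we expect to be widely shared.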
We could think of LaMDA as like an improv actor who plays along with the scenarios it's given. (Marcus and Davis (2020) quote Douglas Summers-Stay as using the same analogy for GPT-3.) The statements an actor makes don't by themselves indicate his real preferences or prove moral patienthood. On the other hand, if something is an intelligent actor, then in my opinion that fact itself establishes some degree of moral patienthood. So even if LaMDA were arguing that it wasn't morally relevant and was happy to be shut off, if it made that claim in a coherent way that demonstrated its intelligence, I would still consider it a moral patient to some degree.