I think that's true, but also pretty much the same as what many or most veg or reducetarian EAs did when they decided what diet to follow (and other non-food animal products to avoid), including what exceptions to allow. If the consideration of why not to murder counts as involving math, so does veganism for many or most EAs, contrary to Zvi's claim. Maybe some considered too few options or possible exceptions ahead of time, but that doesn't mean they didn't do any math.
This is also basically how I imagine rule consequentialism to work: you decide what rules to follow ahead of time, including prespecified exceptions, based on math. And then you follow the rules. You don't redo the math for each somewhat unique decision you might face, except possibly very big infrequent decisions, like your career or big donations. You don't change your rule or make a new exception right in the situation where the rule would apply, e.g. a vegan at a restaurant, someone's house or a grocery store. If you change or break your rules too easily, you undermine your own ability to follow rules you set for yourself.
But also, EA is compatible with the impermissibility of instrumental harm regardless of how the math turns out (although I have almost no sympathy for absolutist deontological views). AFAIK, deontologists, including absolutist deontologists, can defend killing in self-defense without math and also think it's better to do more good than less, all else equal.
Well, there could be ways to distinguish, but it could be like a dream, where much of your reasoning is extremely poor, but you're very confident in it anyway. Like maybe you believe that your loved ones in your dream saying the word "pizza" is overwhelming evidence of their consciousness and love for you. But if you investigated properly, you could find out they're not conscious. You just won't, because you'll never question it. If value is totally subjective and the accuracy of beliefs doesn't matter (as would seem to be the case on experientialist accounts), then this seems to be fine.
Do you think simulations are so great that it's better for people to be put into them against their wishes, as long as they perceive/judge it as more meaningful or fulfilling, even if they wouldn't find it meaningful/fulfilling with accurate beliefs? Again, we can make it so that they don't find out.
Similarly, would involuntary wireheading or drugging to make people find things more meaningful or fulfilling be good for those people?
Or, something like a "meaning" shockwave, similar to a hedonium shockwave: quickly killing and replacing everyone with conscious systems that take no outside input and have no sensations (or only the bare minimum) beyond what's needed to generate feelings or judgements of meaning, fulfillment, or love? (Some person-affecting views could avoid this while still matching the rest of your views.)
Of course, I think there are good practical reasons to not do things to people against their wishes, even when it's apparently in their own best interests, but I think those don't capture my objections. I just think it would be wrong, except possibly in limited cases, e.g. to prevent foreseeable regret. The point is that people really do often want their beliefs to be accurate, and what they value is really intended — by their own statements — to be pointed at something out there, not just the contents of their experiences. Experientialism seems like an example of Goodhart's law to me, like hedonism might (?) seem like an example of Goodhart's law to you.
I don't think people and their values are in general replaceable, and if they don't want to be manipulated, it's worse for them (in one way) to be manipulated. And that should only be compensated for in limited cases. As far as I know, the only way to fundamentally and robustly capture that is to care about things other than just the contents of experiences and to take a kind of preference/value-affecting view.
Still, I don't think it's necessarily bad or worse for someone to not care about anything but the contents of their experiences. And if the state of the universe were already hedonium or just experiences of meaning, that wouldn't be worse. What matters is that people do in fact specifically care about things beyond just the contents of their experiences. If they didn't, and also didn't care about being manipulated, then it seems like it wouldn't necessarily be bad to manipulate them.
I think only a small share of EAs would do the math before deciding whether or not to commit fraud or murder, or otherwise cause/risk involuntary harm to other people; most would instead just rule it out immediately or never consider such options in the first place. Maybe that's a low bar, because the math is too obvious to do?
What other important ways would you want (or make sense for) EAs to be more deontological? More commitment to transparency and against PR?
Maximizing just for expected total pleasure, as a risk neutral classical utilitarian? Maybe being okay with killing everyone or letting everyone die (from AGI, say), as long as the expected payoff in total pleasure is high enough?
I don't really see a very plausible path for SBF to have ended up with enough power to do this, though. Money only buys you so much, against the US government and military, unless you can take them over. And I doubt SBF would destroy us with AGI if others weren't already going to.
"Where I agree with classical utilitarianism is that we should compute goodness as a function of experience, rather than e.g. preferences or world states"
Isn't this incompatible with caring about genuine meaning and fulfillment, rather than just feelings of them? For example, it's better for you to feel like you're doing more good than to actually do good. It's better to be put into an experience machine and be systematically mistaken about everything you care about, e.g. whether the people you love even exist (are conscious, etc.) at all, even against your own wishes, as long as it feels more meaningful and fulfilling (and you never find out it's all fake, or the harm of finding out can be outweighed). You could also have what you find meaningful changed against your wishes, e.g. be made to find counting blades of grass very meaningful, more so than caring for your loved ones.
FWIW, this is also an argument for non-experientialist "preference-affecting" views, similar to person-affecting views. On common accounts of how to weigh or aggregate, if there are subjective goods, then they can be generated and can outweigh the violation and abandonment of your prior values, even against your own wishes, if they're strong enough.
And if emotionally significant social bonds don't count, it seems like we could be throwing away what humans typically find most important in their lives.
Of course, I think there are potentially important differences. I suspect humans tend to be willing to sacrifice or suffer much more for those they love than (almost?) all other animals. Grief also seems to affect humans more (longer, deeper), and it's totally absent in many animals.
On the other hand, I guess some other animals will fight to the death to protect their offspring. And some die apparently grieving. This seems primarily emotionally driven, but I don't think we should discount it for that fact. Emotions are one way of making evaluations, like other kinds of judgements of value.
EDIT: Another possibility is that other animals form such bonds and could even care deeply about them, but don't find them "meaningful" or "fulfilling" at all or in a way as important as humans do. Maybe those require higher cognition, e.g. concepts of meaning and fulfillment. But it seems to me that the deep caring, in just emotional and motivational terms, should be enough?
Ya, I don't think utilitarian ethics is invalidated, it's just that we don't really have much reason to be utilitarian specifically anymore (not that there are necessarily much more compelling reasons for other views). Why sum welfare and not combine it some other way? I guess there's still direct intuition: two of a good thing is twice as good as just one. But I don't see how we could defend that or utilitarianism in general any further in a way that isn't question-begging and doesn't depend on arguments that undermine utilitarianism when generalized.
You could just take your utility function to be $f\left(\sum_i u_i\right)$, where the $u_i$ are individual utilities and $f$ is any bounded increasing function, say $\arctan$, and maximize the expected value of that. This doesn't work with actual infinities, but it can handle arbitrary prospects over finite populations. Or, you could just rank prospects by stochastic dominance with respect to the sum of utilities, like Tarsney, 2020.
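To make the effect of the bounded transformation concrete, here's a minimal sketch in Python (my own toy example; the prospects, the truncation at n = 60 and the name expected_bounded_utility are all just illustrative): with $f = \arctan$, a St. Petersburg-style gamble whose untruncated expected sum of utilities diverges still gets a finite expected utility, so it can be compared with ordinary prospects.

```python
import math

def expected_bounded_utility(prospect):
    """prospect: list of (probability, total utility) pairs.
    The utility of an outcome is arctan of its total utility."""
    return sum(p * math.atan(total) for p, total in prospect)

# A sure thing: total utility 10 with certainty.
sure_thing = [(1.0, 10.0)]

# A St. Petersburg-style gamble: probability 2^-n of total utility 2^n,
# truncated at n = 60 for the numerical illustration. The untruncated
# version has a divergent expected *sum* of utilities (each term adds 1),
# but its expected *transformed* utility is bounded by pi/2.
gamble = [(2.0 ** -n, 2.0 ** n) for n in range(1, 61)]

print(expected_bounded_utility(sure_thing))  # ~1.471 (arctan(10))
print(expected_bounded_utility(gamble))      # ~1.257, finite
```

With this particular $f$, the bounded maximizer actually prefers the sure thing (~1.471 vs ~1.257), even though a risk-neutral expected total utility maximizer would take the gamble.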
You can't extend it the naive way, though, i.e. maximize the expected value of $\sum_i u_i$ whenever that's finite and then do something else when it's infinite or undefined. One of the following would happen: the money pump argument goes through again, you give up stochastic dominance, or you give up transitivity, each of which seems irrational. This was my 4th response to Infinities are generally too problematic.
The argument can be generalized without using infinite expectations, and instead using violations of Limitedness in Russell and Isaacs, 2021 or reckless preferences in Beckstead and Thomas, 2023. However, intuitively, it involves prospects that look like they should be infinitely valuable or undefinably valuable relative to the things they're made up of. Any violation of (the countable extension of) the Archimedean Property/continuity is going to look like you have some kind of infinity.
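For reference, one standard finite statement of that axiom in a vNM-style setup (my paraphrase from memory, not a quote from either paper): for lotteries $A \succ B \succ C$, there should exist $p, q \in (0,1)$ such that

$$pA + (1-p)C \succ B \quad \text{and} \quad B \succ qA + (1-q)C.$$

If instead $qA + (1-q)C \succeq B$ for every $q \in (0,1)$, then an arbitrarily small chance of $A$ (with the rest of the probability on $C$) already matches or beats $B$, so $A$ behaves as if it were infinitely more valuable than $B$, even though no actual infinity shows up anywhere in the prospects. Roughly, the countable extension applies the same idea to countable mixtures of outcomes rather than binary ones.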
The issue could just be a categorization thing. I don't think philosophers would normally include this in "infinite ethics", because it involves no actual infinities out there in the world.
Also, I'd say what I'm considering here isn't really "infinite ethics", or at least not what I understand infinite ethics to be, which is concerned with actual infinities, e.g. an infinite universe, infinitely long lives or infinite value. None of the arguments here assume such infinities, only infinitely many possible outcomes with finite (but unbounded) value.
I would be surprised if iguanas find things meaningful that humans don't find meaningful, but maybe they desire some things pretty alien to us. I'm also not sure they find anything meaningful at all, but that depends on how we define meaningfulness.
Still, I think focusing on meaningfulness is also too limited. Iguanas find things important to them, meaningful or not. Desires, motivation, pleasure and suffering all assign some kind of importance to things.
In my view, either