My formulation can handle lexicality according to which any amount of A (or anything greater than a certain increment in A) outweighs any countable amount of B, not just finite amounts up to some bound. The approach you take is more specific to empirical facts about the universe: if you want it to give a bounded utility function, you need a different utility function for each possible universe. If you learn that your bounds were too low (e.g. that you can in fact affect much more than you previously thought), then in order to preserve lexicality, you'd need to change your utility function, which is something we'd normally not want to do.
Of course, my approach doesn't solve infinite ethics in general; if you're adding goods and bads that are commensurable, you can still get divergent series, and so on. And, as I mentioned, you sacrifice additivity, which is a big loss.
On your lexicographic utility function: I think it's pretty ad hoc that it depends on explicit upper bounds on the quantities, since those bounds will depend on the specifics of our universe. You can manage without them and allow unbounded quantities (and even countably infinite ones, though I would be careful going further), unfortunately at the cost of additivity. I wrote about this here.
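A lexicality-preserving comparison without explicit bounds can be sketched as follows. This is a minimal illustration, not the formulation from either comment: it assumes each outcome is summarized by a hypothetical pair `(a, b)` of quantities of A and B, and it sidesteps bounded real-valued utility functions entirely by comparing outcomes directly.

```python
def lex_better(x, y):
    """Return True if outcome x is strictly better than outcome y,
    where outcomes are (amount of A, amount of B) pairs and any
    difference in A outweighs any difference in B, however large."""
    a_x, b_x = x
    a_y, b_y = y
    if a_x != a_y:
        # A dominates: B is never consulted when A differs.
        return a_x > a_y
    # Only when A is tied does B break the tie.
    return b_x > b_y

# A tiny edge in A beats an arbitrarily large edge in B:
lex_better((1, 0), (0, 10**100))  # True
```

This is an ordering, not a utility function, which is exactly the trade-off mentioned above: you preserve lexicality across unbounded quantities, but you give up a single additive numerical representation.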
This is what rule and virtue (and global) consequentialism are for. You don't need to be calculating all the time, and as you point out, that might be counterproductive. But every now and then, you should (re)evaluate what rules to follow and what kind of character you want to cultivate.
And I don't mean this as saying rule or virtue consequentialism is the correct moral theory; I just mean that you should use rules and virtues, as a practical matter, since it leads to better consequences.
Sometimes you will want to break a rule. This can be okay, but should not be taken lightly, and it would be better if your rule included its exceptions. A rule can be something like a very strong prior towards/against certain kinds of acts.
From the blog of Andrew Gelman, one of the authors of the Economist's model:
As noted above, I think that with wider central intervals and wider tails we could lower that Biden win probability from 96% to 90% or maybe 80%. But, given what the polls say now, to get it much lower than that you’d need a directional shift, something asymmetrical, whether it comes from the possibility of vote suppression, or turnout, or problems with survey responses, or differential nonresponse not captured by partisanship adjustment, or something else I’m forgetting right now. But I don’t think it would be enough just to say that anything can happen. “Anything can happen” starting with Biden at 54% will lead to a high Biden win probability no matter how you slice it. For example, suppose you start with a Biden forecast at 54% and give a standard error of 3 percentage points, which has gotta be too much—it yields a 95% interval of [0.48, 0.60] for his two-party vote share, and nobody thinks he’s gonna get 48% or 60%. Anyway, start with that and Biden still has a 78% chance of winning (or 75% using the t_3 distribution). To get that probability down below 80%, you’re gonna need to shift the mean estimate, which implies some directional information.
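Gelman's point can be checked numerically with a simple normal model for the two-party vote share. This is a sketch of the qualitative argument, not his actual model: the threshold and standard deviations below are illustrative, and his quoted 78% is lower than a raw popular-vote calculation presumably because winning also depends on the Electoral College.

```python
from math import erf, sqrt

def win_prob(mean, sd, threshold=0.5):
    """P(vote share > threshold), modeling the two-party share as
    Normal(mean, sd). threshold=0.5 treats the race as decided by
    the popular vote alone (a simplifying assumption)."""
    z = (mean - threshold) / sd
    # Standard normal CDF via the error function.
    return 0.5 * (1 + erf(z / sqrt(2)))

# Widening the uncertainty around a 54% mean barely dents the win
# probability, but shifting the mean toward 52% lowers it sharply:
print(win_prob(0.54, 0.03))  # ~0.91
print(win_prob(0.54, 0.06))  # ~0.75 — double the sd, still high
print(win_prob(0.52, 0.03))  # ~0.75 — same drop from a mean shift
```

The comparison illustrates the quote's core claim: "anything can happen" (a wider sd) is symmetric and leaves the probability high, whereas getting it well below 80% requires directional information that moves the mean.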
Older discussion of the Economist's model and betting markets by Gelman:
Do you think FiveThirtyEight and the Economist haven't appropriately accounted for these considerations in their models? I don't think the discrepancy with the markets is so large. Where did ~65% come from?
Are there any important considerations in the opposite direction?
Andrew Gelman, super legit (imo) statistician who built The Economist's model, criticizes 538's model for getting correlations wrong:
The Economist's predictions are even more favourable to Biden, though:
Nonhuman animals and children have limited agency and irrational, poorly informed preferences. We should use behaviour as an indication of preferences, but not only behaviour, and especially not only behaviour when faced with the given situation (since other behaviour is also relevant). We should try to put ourselves in their shoes and reason about what they would want were they more rational and better informed. The more informed and rational they are, the more we can just defer to their choices.
If I give that same "agentic being" treatment to animals, then the suicide argument kind of holds. If I don't give that same "agentic being" treatment to animals, then what is to say suffering as a concept even applies to them? After all, a mycelium or an ecosystem is also a very complex "reasoning" machine, but I don't feel any moral guilt when plucking a leaf or a mushroom.
I think this is a good discussion of evidence for the capacity to suffer in several large taxa of animals.
I also think that not having agency is not a defeater for suffering. You can imagine that in some of our worst moments of suffering we lose agency (e.g. in a state of panic), or that we could artificially disrupt someone's agency (e.g. through transcranial magnetic stimulation, drugs or brain damage) without taking the unpleasantness of the experience away. Just conceptually, agency isn't required for hedonistic experience.
Supplements are part of a person's diet. Vegans who don't take B12 are being stupid.