conchis

Additional/complementary argument in favour (and against the “any difference you make is marginal” argument): one’s personal example of viable veganism increases the chances of others becoming vegan (or partially so, which is still a benefit). Under plausible assumptions this effect could be (potentially much) larger than the direct effect of personal consumption decisions.
I have to say that the claimed reductios here strike me as under-argued, particularly when there are literally decades of arguments articulating and defending various versions of moral anti-realism, which set out a range of ways in which the implications, though decidedly troubling, need not be absurd.
His 2018 lectures are also available on youtube and seem pretty good so far if anyone wants a complement to the book. The course website also has lecture notes and exercises.
To me, at least, it seems clear that you should not take the opportunities to reduce your torture sentence. After all, if you repeatedly decide to take them, you will end up with a 0.5 chance of being highly uncomfortable and a 0.5 chance of being tortured for 3^^^^3 years. This seems like a really bad lottery, and worse than the one that lets me have a 0.5 chance of having an okay life.
FWIW, this conclusion is not clear to me. To return to one of my original points: I don't think you can dodge this objection by arguing from potentially idiosyncratic preferences, even perfectly reasonable ones; rather, you need it to... (read more)
So, I don't think your concern about keeping utility functions bounded is unwarranted; I'm just noting that it's part of a broader issue with aggregate consequentialism, not just with my ethical system.
Agreed!
you just need to make it so the supremum of their values is 1 and the infimum is 0.
Fair. Intuitively though, this feels more like a rescaling of an underlying satisfaction measure than a plausible definition of satisfaction to me. That said, if you're a preferentist, I accept this is internally consistent, and likely an improvement on alternative versions of preferentism.
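To spell out the rescaling I have in mind (my notation, not anything from your proposal): given some underlying satisfaction measure $u$ that is bounded above and below, the construction amounts to

$$\hat{u}(x) \;=\; \frac{u(x) - \inf u}{\sup u - \inf u},$$

which indeed has supremum 1 and infimum 0, but where the substantive work is being done by the underlying $u$ rather than by the normalisation itself.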
One issue with only having boundedness above is that the expected life satisfaction of an arbitrary agent would probably often be undefined or −∞.
Yes, and I am obviously not proposing a solution to this problem! More just suggesting that, if there are infinities in the problem... (read more)
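For what it's worth, here's a toy illustration of how the −∞ case in the quoted worry can arise (the numbers are mine, purely for illustration): if satisfaction is bounded above but not below, an agent could face outcomes $u_n = -2^n$, each with probability $2^{-n}$, giving

$$\mathbb{E}[u] \;=\; \sum_{n=1}^{\infty} 2^{-n}\cdot\left(-2^{n}\right) \;=\; \sum_{n=1}^{\infty} (-1) \;=\; -\infty.$$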
In an infinite universe, there's already infinitely-many people, so I don't think this applies to my infinite ethical system.
YMMV, but FWIW allowing a system of infinite ethics to get finite questions (which should just be a special case) wrong seems a very non-ideal property to me, and suggests something has gone wrong somewhere. Is it really never possible to reach a state where all remaining choices have only finite implications?
I'll clarify the measure of life satisfaction I had in mind. Imagine if you showed an agent finitely-many descriptions of situations they could end up being in, and asked the agent to pick out the worst and the best of all of them. Assign the worst scenario satisfaction 0 and the best scenario satisfaction 1.
Thanks. I've toyed with similar ideas previously myself. The advantage, if this sort of thing works, is that it conveniently avoids a major issue with preference-based measures: that they're not unique and therefore incomparable across individuals. However, this method seems fragile in relying on a finite number of scenarios: doesn't it break if it's possible to imagine something... (read 425 more words →)
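As a rough sketch of the fragility worry (the numeric "valuations" below are hypothetical, just to make the rescaling visible; they assume the scenarios are already comparable on some underlying scale):

```python
def scenario_satisfaction(valuations):
    """Rescale finitely many scenario valuations so the worst maps to 0
    and the best maps to 1 (a toy version of the proposed measure)."""
    worst, best = min(valuations), max(valuations)
    return [(v - worst) / (best - worst) for v in valuations]

print(scenario_satisfaction([0, 10, 20]))          # [0.0, 0.5, 1.0]
# Include one far more extreme scenario and everything else gets squashed:
print(scenario_satisfaction([0, 10, 20, 10_000]))  # [0.0, 0.001, 0.002, 1.0]
```

The scores assigned to the original scenarios depend entirely on which extremes happen to be included in the finite set, which is the sense in which the construction seems fragile to me.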
Re boundedness:
It's important to note that the sufficiently terrible lives need to be really, really, really bad already. So much so that being horribly tortured for fifty years does almost exactly nothing to affect their overall satisfaction. For example, maybe they're already being tortured for more than 3^^^^3 years, so adding fifty more years does almost exactly nothing to their life satisfaction.
I realise now that I may have moved through a critical step of the argument quite quickly above, which may be why this quote doesn't seem to capture the core of the objection I was trying to describe. Let me take another shot.
I am very much not suggesting that 50 years... (read 582 more words →)
I can see the appeal, but I worry that a metaphor where a single person is given a single piece of software and has the option to rewrite it for their own and/or others’ purposes, without grappling with myriad upstream and downstream dependencies, vested interests, and so forth, is probably missing an important part of the dynamics of real-world systems?
(This doesn’t really speak to moral obligations to systems, as much as practical challenges doing anything about them, but my experience is that the latter is a much more binding constraint.)