LESSWRONG

alenoach's Shortform

by alenoach
25th Jul 2025
1 min read

Act utilitarians choose actions estimated to increase total happiness. Rule utilitarians follow rules estimated to increase total happiness (e.g. not lying). But you can have the best of both: act utilitarianism in which rules are treated as moral priors. For example, you can hold a strong prior that killing someone is bad, one that can nonetheless be overridden in extreme circumstances (e.g. if killing the person ends WWII).

These priors make act utilitarianism better safeguarded against bad assessments. They are grounded in Bayesianism: moral priors are updated the same way as non-moral priors. They also reduce cognitive effort: most of the time, you just follow your priors, unless the stakes and uncertainty warrant more detailed consequence estimates. You can hold a small prior toward inaction, so that not every random action is worth considering. You can also blend in some virtue ethics, with a prior that virtuous acts tend to lead to greater total happiness in the long run.
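As a rough numerical sketch of the "updated like any other prior" point (the numbers here are made up for illustration and are not part of the original argument), a strong moral prior can be updated with evidence via Bayes' rule in odds form:

```python
def bayes_update(prior: float, likelihood_ratio: float) -> float:
    """Apply a likelihood ratio to a prior probability (Bayes' rule in odds form)."""
    prior_odds = prior / (1.0 - prior)
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1.0 + posterior_odds)

# Strong prior that a given killing is net-negative for total happiness.
prior_bad = 0.999

# Extreme circumstances (e.g. strong evidence the act would end WWII):
# suppose the observed evidence is 10,000x likelier if the act is actually
# net-positive, so the likelihood ratio for "net-negative" is 1/10,000.
posterior_bad = bayes_update(prior_bad, 1 / 10_000)

print(f"{posterior_bad:.3f}")  # the strong prior is mostly, but not fully, overridden
```

Most of the time no such computation is needed, which is the cognitive-effort point above: absent unusual evidence, the prior simply stands.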

What I described is a more Bayesian version of R. M. Hare's "Two-level utilitarianism", which involves an "intuitive" and a "critical" level of moral thinking. (quick take cross-posted from the EA Forum)
