Homunculi are real. Consider a lucid dream. When lucid, you can know that your body-image is entirely internal to your sleeping brain. You can know that the virtual head you can feel with your virtual hands is entirely internal to your sleeping brain too. Sure, the reality of this homunculus doesn’t explain how the experience is possible. Yet such an absence of explanatory power doesn’t mean that we should disavow talk of homunculi.
Waking consciousness is more controversial. But (I’d argue) you still experience only a homunculus - though now it’s a homunculus that (normally) causally co-varies with the behaviour of an extra-cranial body.
It's good to know we agree on genetically phasing out the biology of suffering! Now for your thought-experiments.
Quantitatively: given a choice between a tiny amount of suffering X plus everyone and everything else being great, or everyone dying, would NUs choose omnicide no matter how small X is?
To avoid status quo bias, imagine you are offered the chance to create a type-identical duplicate, New Omelas - again, a blissful city of vast delights dependent on the torment of a single child. Would you accept or decline? As an NU, I'd say "no" - even though the child’s suffering is "trivial" compared to the immensity of pleasure to be gained. Likewise, I’d painlessly retire the original Omelas too. Needless to say, our existing world is a long way from Omelas. Indeed, if we include nonhuman animals, then our world may contain more suffering than happiness. Most nonhuman animals in Nature starve to death at an early age; and factory-farmed nonhumans suffer chronic distress. Maybe the CU should press a notional OFF button and retire life too.
A separate but related question: what if X doesn't happen for sure, but rather happens with some probability? How low does that probability have to be before NUs would take the risk instead of choosing omnicide? Is any probability too low?
You pose an interesting hypothetical that I’d never previously considered. If I could be 100% certain that NU is ethically correct, then the slightest risk of even trivial amounts of suffering is too high. However, prudence dictates epistemic humility. So I’d need to think some more before answering.
Back in the real world, I believe (on consequentialist NU grounds) that it's best to enshrine in law the sanctity of human and nonhuman animal life. And (like you) I look forward to the day when we can get rid of suffering - and maybe forget NU ever existed.
It wasn't a rhetorical question; I really wanted (and still want) to know your answer.
Thanks for clarifying. NU certainly sounds a rather bleak ethic. But NUs want us all to have fabulously rich, wonderful, joyful lives - just not at the price of anyone else's suffering. NUs would "walk away from Omelas". Reading JDP's post, one might be forgiven for thinking that the biggest x-risk comes from NUs. However, later this century and beyond, if (1) “omnicide” is technically feasible, and if (2) suffering persists, then there will be intelligent agents who would bring the world to an end to get rid of it. You too would end the world rather than undergo some kinds of suffering. By contrast, genetically engineering a world without suffering, populated instead by fanatical life-lovers, would be safer for the future of sentience - even if you think the biggest threat to humanity comes from rogue AGI/paperclip-maximizers.
Do they also seek to create and sustain a diverse variety of experiences above hedonic zero?
Would the prospect of being unable to enjoy a rich diversity of joyful experiences sadden you? If so, then (other things being equal) any policy to promote monotonous pleasure is anti-NU.
Secular Buddhists, like NUs, seek to minimise and ideally get rid of all experience below hedonic zero. So does any policy option cause you even the faintest hint of disappointment? Well, other things being equal, that policy option isn't NU. May all your dreams come true! Anyhow, I hadn't intended here to mount a defence of NU ethics - just to counter the poster JDP's implication that NU is necessarily more of an x-risk than CU.
Many thanks for an excellent overview. But here's a question. Does an ethic of negative utilitarianism or classical utilitarianism pose a bigger long-term risk to civilisation?
Naively, the answer is obvious. If granted the opportunity, NUs would e.g. initiate a vacuum phase transition, program seed AI with a NU utility function, and do anything humanly possible to bring life and suffering to an end. By contrast, classical utilitarians worry about x-risk and advocate Longtermism (cf. https://www.hedweb.com/quora/2015.html#longtermism).
However, I think the answer is more complicated. Negative utilitarians (like me) advocate creating a world based entirely on gradients of genetically programmed well-being. In my view, phasing out the biology of mental and physical pain in favour of a new motivational architecture is the most realistic way to prevent suffering in our forward light-cone. By contrast, classical utilitarians are committed, ultimately, to some kind of apocalyptic "utilitronium shockwave” - an all-consuming cosmic orgasm. Classical utilitarianism says we must maximize the cosmic abundance of pure bliss. Negative utilitarians can uphold complex life and civilisation.
Can preference utilitarians, classical utilitarians and negative utilitarians hammer out some kind of cosmological policy consensus? Not ideal by anyone's lights, but good enough? So long as we don't create more experience below "hedonic zero" in our forward light-cone, NUs are untroubled by wildly differing outcomes. There is clearly a tension between preference utilitarianism and classical utilitarianism; but most(?) preference utilitarians are relaxed about having hedonic ranges shifted upwards - perhaps even radically upwards - if recalibration is done safely, intelligently and conservatively - a big "if", for sure. Surrounding the sphere of sentient agents in our Local Supercluster(?) with a sea of hedonium propagated by von Neumann probes or whatever is a matter of indifference to most preference utilitarians and NUs but mandated(?) by CU.
Is this too rosy a scenario?
Eli, sorry, could you elaborate? Thanks!
Eli, fair point.
Eli, it's too quick to dismiss placing moral value on all conscious creatures as "very warm-and-fuzzy". If we're psychologising, then we might equally say that working towards the well-being of all sentience reflects the cognitive style of a rule-bound hyper-systematiser. No, chickens aren't going to win any Fields Medals - though chickens can recognise logical relationships and perform transitive inferences (cf. the "pecking order"). But nonhuman animals can still experience states of extreme distress. Uncontrolled panic, for example, feels awful regardless of one's species-identity. Such panic involves a complete absence or breakdown of reflective self-awareness - illustrating how the most intense forms of consciousness needn't involve sophisticated meta-cognition.
Either way, if we can ethically justify spending, say, $100,000 salvaging a 23-week-old human micro-preemie, then impartial benevolence dictates caring for beings of greater sentience and sapience as well - or at the very least, not actively harming them.