This is inspired by a long and passionate post from Bernd Clemens Huber. It was off-topic in its original context, so I will respond to it here.

Briefly, Bernd makes a convincing case that most animals living in the wild spend most of their time experiencing fear, hunger, and suffering. He then argues that this imposes an ethical obligation on humans to do everything possible to prevent the seeding of life on other planets, so as not to spread more suffering. Bernd, please respond and correct me if I'm not accurately summarizing your position.

I see two implicit axioms here that I would like to question explicitly.

  1. What if I don't buy the axiom that suffering is worse than non-existence? It would take a lot of fear and pain for me to consider my life not worth living; I would live as a brain in a jar rather than die. Most people probably place a lower value than I do on continuing to experience life no matter what, but that only implies that the value of existence is subjective: you cannot choose for other individuals whether or not their lives are worth living, let alone for entire species.

    Perhaps the hope that one's descendants will someday escape scarcity and predation, as humans have, makes one's current suffering "worth it".
     
  2. What if I don't buy the axiom that it's my ethical duty to prevent the suffering of all other beings? What if I'm comfortable with the idea that the people in my limited monkey-sphere are the ones I'm truly concerned about, with that concern radiating out some distance to strangers because what happens to them could come back to haunt me and those close to me? The more removed someone is from me, the fewer resources I should expend per unit of their suffering.

    Some of LessWrong's famous dilemmas can be seen instead as a reductio ad absurdum argument for my distance-penalized concern model:

    1. They eat their young?!!! Let aliens be alien, as long as it's not our young.
    2. Someone is trying to get me to do something by helping or harming a large number of "copies" of me in some simulation? I'll let them enjoy their creepy video game, and this instance of me will continue living my life exactly as I have been in this instance of reality.
    3. I have to give up on the dream of my descendants exploring the galaxy, and possibly also be on the hook to solve the very difficult problem of providing a happy life for the billions of lifeforms already inhabiting this planet? No, I do not. There are probably non-human organisms I'll need to protect for the sake of people I care about. But if there is no impact on people I care about, then the only reason to care about non-human suffering is this axiom, which I reject because it does not contribute to my sense of purpose and well-being while conflicting with axioms that do.

Note: If someone has already made these points, I'd be grateful for a link. Thanks.


Comments

> The more removed someone is from me, the fewer resources I should expend per unit of their suffering.

We could make this ethical theory quantifiable by using some constant (a coefficient in the exponent of the distance-care function) such that E = 1 means you care about everyone's suffering equally and E = 0 means you do not care about anyone else's suffering at all. Then we could argue that the optimal value is, say, E = 0.7635554, and perhaps conclude that people with E > 0.7635554 are just virtue signalling, while people with E < 0.7635554 are selfish assholes.
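
For concreteness, here is a minimal sketch of one way such a distance-care weighting could look. The comment above only pins down the endpoints (E = 1 weights everyone equally, E = 0 weights only yourself), so the function name `care_weight`, the choice of `E ** distance` as the decay, and the example distances below are my own assumptions for illustration, not anything specified in the comment.

```python
# A minimal sketch of the distance-penalized concern model described above.
# Assumption: "distance" is some measure of social/moral remove (0 = yourself),
# and the weight given to a unit of suffering decays as E raised to that distance.

def care_weight(distance: float, E: float) -> float:
    """Weight given to one unit of someone's suffering at a given social distance.

    E = 1.0 -> everyone's suffering counts equally (weight 1 at any distance).
    E = 0.0 -> only your own suffering counts (weight 0 at any distance > 0).
    """
    if not 0.0 <= E <= 1.0:
        raise ValueError("E must be between 0 and 1")
    return E ** distance if distance > 0 else 1.0


if __name__ == "__main__":
    for distance in [0, 1, 3, 10]:        # e.g. self, family, strangers, distant beings
        for E in [1.0, 0.7635554, 0.0]:   # the comment's example values
            print(f"distance={distance:>2}, E={E}: weight={care_weight(distance, E):.4f}")
```

The exponential form is just one convenient choice that matches the endpoints described; any monotonically decreasing function of distance would express the same radiating-concern idea.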