I'm not sure if it's possible to separate "felt intensity" from "intensity of desire". (I don't know what pain/suffering without a desire that it not exist would be.) But however that may be, your point doesn't seem to settle the population-ethical issue: If we look at hedonic desires (weighted by i…
Sorry for the delay. - I should have been more precise. I'll provide more precision by commenting on the cases you mention:
- The injured pet case probably involves three complications: (1) people's belief that it's an "egoistic" case *for the pet* (instead of it being an "altruistic" trade-off cas…
Not so sure. Dave believes that pains have an "ought-not-to-be-in-the-world-ness" property that pleasures lack. And in the discussions I have seen, he indeed was not prepared to accept that small pains can be outweighed by huge quantities of pleasure.
Brian was oscillating between NLU and NU. He re…
Regarding "people's ordinary exchange rates", I suspect that *in cases people clearly recognize as altruistic*, the rates are closer to Brian's than to yours. In cases they (IMO confusedly) think of as "egoistic", the rates may be closer to yours. - This provides an argument that people should end u…
Also, still others (such as David Pearce) would argue that there are reasons to favor Brian's exchange rate. :)
One important reason they like to discuss them is that many people just assume, without adequate consideration and argument, that the future will be hugely net positive. Which comes as no surprise, given the existence of relevant biases.
Whether negative utilitarians believe that "there is…
Yes, it can. But a Singleton is not guaranteed; and conditional on the future existence of a Singleton, friendliness is not guaranteed. What I meant was that astronomical population expansion clearly produces an astronomical number of most miserable, tortured lives *in expectation*.
Lots of dystopi…
Sorry for the delay!
I forgot to clarify the rough argument for why (1) "value future people equally" is much less important or crucial than (2) "fill the universe with people" here.
If you accept (2), you're almost guaranteed to be on board with where Bostrom and Beckstead are roughly going (ev…
Hi Nick, thanks! I do indeed fully agree with your general conclusion that what matters most is making our long-term development go as well as possible. (I had something more specific in mind when speaking of "Bostrom's and Beckstead's conclusions" here, sorry about the confusion.) In fact, I consid…
What about a random human instead of your grandmother? What if the human's/your grandmother's cognitive capacities were lower than the dog's or the chimp's? – What would a good altruist do?
How do you block the "chain of comparables"?