There is a non-negligible probability that insects suffer. Many insects have nociception. Furthermore, the argument-by-analogy approach to determining whether an animal suffers seems to apply: an insect’s response to stimuli that would cause pain in other animals is similar. It is estimated that there are 10^19 insects alive at any point in time. If one believes that dust specks in the eyes of a sufficiently large number of people (it doesn’t have to be 3^^^3, after all) are worse than the torture of one person, does that mean insect suffering should swamp many other considerations when prioritizing social problems to address? (This is not meant to be a “proves-too-much” argument against siding with dust specks; it’s a genuine question.)

3 Answers

Dagon

70

I have a theory that variety in mind-state-space is more important than quantity of near-duplicates, but the proof is too large to fit in this margin.

If you use a logarithmic aggregation of value, rather than a linear one, and/or if you discount for complexity of mind (maybe intensity of experience proportional to the square of the number of neurons involved), it's going to take a LOT more than 10^19 insects to compare to 10^9 humans.
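
For concreteness, here is a rough back-of-the-envelope sketch of that weighting. The neuron counts (about 10^5 for a typical insect and about 8.6 × 10^10 for a human) are my own assumed figures, and the squared exponent is just the "maybe" above, so treat the output as an illustration of how the discount behaves rather than as an estimate.

```python
# Back-of-the-envelope sketch of the "discount by complexity of mind" idea.
# Neuron counts are assumed round figures, not claims from the answer above:
# ~1e5 neurons for a typical insect, ~8.6e10 for a human.

INSECT_NEURONS = 1e5
HUMAN_NEURONS = 8.6e10

N_INSECTS = 1e19
N_HUMANS = 1e9

def moral_weight(neurons, exponent=2):
    """Toy weighting: intensity of experience ~ neurons ** exponent."""
    return neurons ** exponent

insect_total = N_INSECTS * moral_weight(INSECT_NEURONS)
human_total = N_HUMANS * moral_weight(HUMAN_NEURONS)

print(f"insect total / human total = {insect_total / human_total:.3g}")
# With exponent=2 this ratio is ~0.014, so 1e19 insects count for roughly 1%
# of 1e9 humans; with exponent=1 (linear in neurons) it is ~1.2e4 instead.
```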

This is a good answer.

Yitz

10

I don’t wish to directly argue the question at the moment, but let’s say insect suffering is in fact the highest-priority issue we should consider. If so, I’m fairly sure that practically, little would be changed as a result. X-risk reduction is just as important for insects as it is for us, so that should still be given high priority. The largest effect we currently have on insect suffering—and in fact an X-risk factor in itself for insects—is through our collective environmental pollution, so stopping human pollution and global warming as much as possible will be paramount after high-likelihood X-risk issues. In order to effectively treat issues of global human pollution of the environment, some form of global political agreement must be reached about it, which can be best achieved by [Insert your pet political theory here]. In other words, whatever you believe will be best for humans long-term will probably also be best for insects long-term.

I think this doesn’t work if you think insects lead overwhelmingly net-negative lives right now, and that the world would be a better place if fewer insects were around to reproduce and create huge numbers of awful lives. But I might be missing something.

Tejas Subramaniam
At the same time, it’s more than plausible that the extinction of humans would be very bad from the standpoint of insect suffering, because insect habitats (and hence insect populations) would grow significantly without humans. But anyway, I agree that even if insect suffering is really massive, it doesn’t swamp x-risk considerations. (Personally, I don’t think insect suffering matters much at all, though that’s really more of an instinct on “torture vs. dust specks” in general, and it does confuse me as an issue.) I’m just wondering how important it is in the scale of things. Thanks for the response though! I appreciate it.

Donald Hobson

00

Firstly, if you are prepared to entertain utilitarian-style “look how big this number is” arguments, then X-risk reduction comes out on top.

The field this question points to is how to handle utility uncertainty. Suppose you have several utility functions, and you don't yet know which one you want to maximise, but you might find relevant information in the future. You can act to maximise expected utility. The problem is that if there are many utility functions, some of them might control your behaviour despite having tiny probability, by outputting absurdly huge numbers. This is Pascal's mugging, and various ideas have been proposed to avoid it, including rescaling the utility functions in various ways, or acting according to a weighted vote of the utility functions.
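
As a toy illustration of that last point (my own construction, with made-up numbers): plain expected utility lets a utility function with tiny credence dominate by outputting a huge number, whereas rescaling each function to a common range before taking a credence-weighted vote does not.

```python
# Toy illustration, not a standard algorithm: two candidate utility functions
# over two actions, with a credence on each. The low-credence function assigns
# an astronomically large utility, so plain expected utility is dominated by
# it; normalising each function to [0, 1] before taking the credence-weighted
# vote removes that dominance.

actions = ["ignore_mugger", "pay_mugger"]

# utilities[theory][action]
utilities = {
    "common_sense": {"ignore_mugger": 1.0, "pay_mugger": 0.0},
    "mugger_is_right": {"ignore_mugger": 0.0, "pay_mugger": 1e30},
}
credence = {"common_sense": 0.999999, "mugger_is_right": 1e-6}

def expected_utility(action):
    return sum(credence[t] * utilities[t][action] for t in utilities)

def normalised_vote(action):
    score = 0.0
    for t, u in utilities.items():
        lo, hi = min(u.values()), max(u.values())
        score += credence[t] * (u[action] - lo) / (hi - lo)
    return score

print(max(actions, key=expected_utility))  # pay_mugger: the 1e30 dominates
print(max(actions, key=normalised_vote))   # ignore_mugger: the vote tracks credence
```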

There is also a question of how much moral uncertainty to regard ourselves as having. Our definitions of what we do and don't care about exist in our minds. It is a consistent position to decide that you definitely don't care about insects, and that any event which makes future-you care about insects is unwanted brain damage. Moral theories like utilitarianism are at least partly predictive theories. If you came up with a simple (low Kolmogorov complexity) moral theory that reliably predicted human moral judgements, that would be a good theory. However, we also have logical uncertainty, and suspect that our utility function is of low "human-perceived complexity". So given a moral theory of low "human-perceived complexity" which agrees with our intuitions in 99% of cases, we may change our judgement on the remaining 1%. (Perform a Bayesian update under utility uncertainty with the belief that our intuitions are usually, but not always, correct.)

So we can argue that utilitarianism usually matches our intuitions, so it is probably the correct moral theory, so we should trust it even in the case of insects where it disagrees. However, you have to draw the line between care and don't-care somewhere, and the version of utilitarianism that draws the line around mammals or humans doesn't seem any more complicated. And it matches our intuitions better. So it's probably correct.

If you don't penalise unlikely possible utilities for producing huge utilities, you get utility functions on which you care about all quantum wave states dominating your actions. (Sure, you assign a tiny probability to them, but there are a lot of quantum wave states.) If you penalise strongly, or use voting utilities or bounded utilities, then you get behaviour that doesn't care about insects. If you go up a meta level and say you don't know how much to penalise, the standard treatment of uncertainty gets you back to the no-penalty case, with actions dominated by quantum wave states.
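
A minimal sketch of that meta-level step, again with arbitrary made-up numbers: mix a bounded aggregator with an unbounded one, and any positive weight on the unbounded one hands it control whenever its numbers are large enough.

```python
# Sketch of the meta-level point: if you mix a bounded ("penalised") aggregator
# with an unbounded one and give the unbounded one ANY positive weight, the
# mixture inherits the unbounded one's ranking whenever its outputs are huge.
# The scores here are arbitrary placeholders, just to make the arithmetic visible.

bounded_score = {"ordinary_action": 0.9, "quantum_wave_state_action": 0.1}
unbounded_score = {"ordinary_action": 1.0, "quantum_wave_state_action": 1e40}

def mixed_score(action, weight_on_unbounded):
    return ((1 - weight_on_unbounded) * bounded_score[action]
            + weight_on_unbounded * unbounded_score[action])

for w in (1e-3, 1e-9, 1e-15):
    best = max(bounded_score, key=lambda a: mixed_score(a, w))
    print(w, best)  # quantum_wave_state_action wins at every weight shown
```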

I have a sufficiently large amount of uncertainty to say "In practice, it usually all adds up to normality. Don't go acting on weird conclusions you don't understand that are probably erroneous."

2 comments

Maybe think of the insects and other organisms, whose aggregate suffering would greatly overwhelm human suffering, as one big utility monster.

TAG

Maybe don't. There is no metaphysical fact that takes the micro-sufferings of lots of agents and turns them into the mega-suffering of one agent. That kind of summation is in the map, not the territory. It's optional, not forced on you by reality.