[This is an entry for lsusr’s write-like-lsusr competition.]
I meditated on suffering and came up with several half-baked ideas. Their apparent contradictions with each other and with my previous writings are a feature, not a bug; if you meditate on them, you may be enlightened.
The Shrimp Meditation Project. Trillions of shrimp are possibly suffering horribly every year (we know this because they aren't cute, so it's an especially strong countersignal to care about them, plus Math.) If we can teach them how to meditate, we can alleviate their suffering. I propose a large-scale operation where we breed shrimp to be more capable of meditation, then sell the best bloodlines to the factory farms along with an AI program to automate training them in meditation.
What came first in moral primacy, the suffering or the preferences? This is a long-running dispute in ethics. Much of the data that people interpret as morally bad involves people suffering as well as having a preference not to suffer. This creates a classification issue, where it's unclear whether the moral badness comes from the suffering or from the violated preferences. One can ponder some thought experiments to try to distinguish the two, but those have their own circularity issues. Consider someone who feels the physical sensations of suffering, but lacks the preference not to suffer, or perhaps actively prefers to suffer. (As an example, take an enlightened being, such as myself, that has transcended desire.) It may seem intuitive that such a person's suffering is not morally bad. And yet, the same experiment can be run in reverse. Imagine people who have all the normal preferences - they like to eat ice cream and dislike getting cut off in traffic. But their physical sensations are inverted - every time a preference of theirs is met, they subjectively feel psychological pain, and they feel pleasure when their preferences are frustrated. Now, it might seem intuitive that it would be morally bad to cause them pain. But notice that this situation mirrors the first - in both, preferences don't match feelings. In the first, the person's preferences were modified and their feelings left unchanged; in the second, the feelings were changed and the preferences left unchanged. The two hypotheses are dual - one says that suffering is bad because people prefer not to suffer, while the other says that frustrating preferences is bad because it causes people subjective pain. Both seem correct, yet together they form a circular justification. It seems that in our thought experiments, the intuitive morality follows the side that was edited, perhaps because it is in our differences from other people that we conceptualize our identity.
On skepticism of self-reports. There are many reports of people experiencing pain. One poster on the EA Forum (Reminding myself just how awful pain can get (plus an experiment)) deliberately exposed themselves to extreme pain, and that caused them to update towards pain, particularly extreme pain, being an urgent and important cause area. This should make us suspicious. Suppose someone injects themselves with heroin, then reports that they had the best experience of their life, and starts devoting their life to acquiring more and more heroin (many such cases!) We don't conclude that the utility of heroin must be super high, and work to reorganize our society to support as many creatures on heroin as possible (presumably, rats.) Rather, we conclude that the heroin has hijacked their mind. But we treat self-reports of extreme suffering quite differently from self-reports of extreme pleasure. When those with cluster headaches report they are a 10/10 on a suffering scale, we dutifully do math on it. But if an addict reported they were a 10/10 on a pleasure scale, we would react with high skepticism. We should be equally skeptical of both ends of the scale. Extreme pain hijacks our minds into ascribing huge importance to alleviating it, in the same way that some forms of extreme pleasure create addicts. Someone under either extreme has degraded epistemic abilities, lowering the reliability of their self-reports and their later memories. (Isn't it strange how memories of a painful episode are disproportionately influenced by the ending, i.e. the peak–end rule?) Although to hedge our bets, we should work on teaching cluster headache sufferers how to meditate.