There are lots of conceivable ways one might do enormous evil. Thus, the odds that we’d manage to avoid doing evil in any of those ways are fairly low. If there were just one potential atrocity, it wouldn’t be too hard to avoid it. But when there are many different ones, even if there’s a 90% chance of avoiding any particular one, the odds we’d avoid them all are low.
That depends on the probability you assign to each one. This is just another argument that takes advantage of 1) the fact that people are bad at estimating small probabilities in the first place, and 2) scope insensitivity leading people not to assign smaller probabilities to things that are less likely and more specific. (And the number 90% is ridiculous anyway.)
If you name 12 possible sources of evil, the probabilities you assign to the individual cases should cohere with the total probability you'd assign to evil as a whole. It should not be possible to get a large total simply by naming 12 cases, eliciting a probability for each, and combining them. The fact that you can do this doesn't mean that the probability of evil is high; it means that people are bad with numbers.
The OP's author didn't claim that the list of twelve sources is EXHAUSTIVE. What the OP claims is that one needs to do a broader search of possible evils.[1] In addition, if we'd like a 50% probability that at least one of the 12 sources is an actual evil, and the sources were independent, then the average probability of avoiding each would need to be about 94-96%, not merely 90%.
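For concreteness, here is the calculation behind that figure (a sketch assuming 12 independent sources, each avoided with the same probability $q$):

$$1 - q^{12} = 0.5 \quad\Longrightarrow\quad q = 2^{-1/12} \approx 0.944.$$

If the avoidance probabilities differ, their arithmetic mean must be at least this geometric mean, so the average works out to roughly 94% or more.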
My take is that ideas like outsourcing most economically useful work to AIs are themselves evil, since they permit the disempowerment of humans or the degradation of their psyches.
What I said had nothing to do with the specific number 12, except that that's what the original post used. My point was that you can't make a list of a lot of evils (regardless of the exact number) and assign percentages and multiply them together. When asked to assign percentages for unlikely events, people assign them in a logically inconsistent way. The entire argument consists of abusing this inconsistency in order to make the chance of evil look high.
If nobody perceives this evil, then it does not exist. If anything, bringing attention to something which is happening, and then deeming it evil, would increase the total suffering of the world by making people suffer from something that they would otherwise have ignored.
Society needs every cog of the system; it needs everything from simple jobs to highly respected ones. Every part is important. Suffering starts when you start telling half the population "Your position is low, this makes you a loser, this makes you a failure, and it makes you worth less". Suffering is not a function of objective states but of their interpretation. Halving the net suffering of the world by changing our perceptions of it is trivial compared to making everyone's lives better in an objective sense.
There's also no absolute good or evil. Things which are "somewhat evil" might be protecting against bigger evils. Two examples: Parents force their children to do homework, and this is painful, but it's also for their own benefit and acts to prevent future evils. "Helicopter parenting" is virtue gone wrong, as one does harm in the attempt to minimize harm.
I believe that many ongoing "evils" are Chesterton's fences. Perhaps societies which did not engage in evils no longer exist because they destroyed themselves with their virtues. Everything good is costly, after all. This is not a problem to be solved; it's life. If one finds life to be a problem, then their philosophy is wrong: their map says that the territory needs to be different for life to be enjoyable, and this interpretation harms their enjoyment.
Lastly, our moral reflections are worth nothing. Plenty of large companies are making the world a worse place because it's profitable for them; we don't need any more reflection to conclude this. But the conclusion has no power, as the rich have succeeded in subverting the values of society towards consumerism and the like. And only the humans working in companies can be moral, but individuals only have freedom of choice in smaller companies. In larger companies, everyone is just following orders, or some law which forces them to maximize shareholder profit, which we know conflicts with moral principles (this is the problem with Moloch).
What personally bothers me is the lack of humanity in the modern world, not the evil. To be precise, the modern world is not that evil (malicious); it's completely indifferent. Slavery, genocides, and conquests would be an improvement (a step towards humanity). The suffering is not the problem: those who grow up in harsher times will build more solid mental defenses and more stoic interpretations of life, so the average level of suffering (or rather, net suffering) is unlikely to change very much. But like farm and zoo animals, we won't get to experience the authentic life that our forefathers were capable of experiencing.
Good and evil are naive concepts which break down once you start thinking about them and questioning them. Moral relativism is not one of many valid views; it's a logical conclusion.
The post criticizes how every age believes that it has figured out what's good, even though each age is clearly flawed from the perspective of other ages. But the same thing is true when moralizers decide that "X is obviously bad and we all agree" because X feels bad, despite a complete lack of effort to challenge this belief. Morality is like religion in that it inhibits thought, and I think they're both cultural immune systems against various hazards. We shouldn't do away with morality, but morality is too naive, and the road to hell is paved with good intentions.
Morality is mostly poor assumptions like "X is bad", and the amount of effort which goes into the evaluation usually amounts to "yep, X makes me feel bad, case closed". If discrimination is bad, we'll have to do away with exams and driver's licenses. I think we need to look at the second- or third-order effects of anything in order to even begin judging whether it's good or bad. You cannot simply stop at the first step and feel no responsibility just because your life choices only lead to death further down the chain of cause and effect (e.g. veganism also requires the death of animals, just less directly).
To be brief, there are no good or bad things that one ought to maximize or minimize; there are only trade-offs to make and balances to find. Nothing is purely good/virtuous or bad/evil; these terms cannot be decoupled from context.
But it's true that systems cannot properly evaluate themselves from the inside; it's only with an external reference point that you can do so. In 100 years, people will be able to look back at 2025, and they may deem our current society to have committed moral catastrophes. But there's no one true reference frame.
Crosspost of this blog post.
I mostly believe in the possibility of an ongoing moral catastrophe because I believe in the actuality of an ongoing moral catastrophe (e.g. I think the giant animal torture facilities that produce nearly all of our meat qualify). But Evan Williams has a great paper called The Possibility of an Ongoing Moral Catastrophe that argues that everyone should think it pretty likely that their society is engaged in horrendous evil. It’s easy to look back at those benighted fools who owned slaves. But probably we are doing something comparable.
Why think this?
Williams gives two main arguments for it: a disjunctive argument and an inductive argument. The inductive argument: almost every society in human history has engaged in uncontroversial acts of horrendous evil. Most owned slaves. Brutal conquest was routine. Women and children were repressed and subjugated. Even the U.S. just a few decades ago tolerated naked displays of racism, prohibiting black children from using the same drinking fountains as white children.
These people have mostly been unaware that what they were doing was wrong. Antebellum slave-owners and conquering Vikings saw no moral problem with their behavior. Are we uniquely civilized? As the Romans owned slaves and conquered foreign lands, they too thought they were uniquely civilized and moral. So did the ancient Greeks as they left infants to freeze to death and the British imperialists as they starved millions in India. Williams puts the core point very well:
I think it is probable that we have serious blind spots. After all, just about every other society in history has had them. Show me one society, other than our own, that did not engage in systematic and oppressive discrimination on the basis of race, gender, religion, parentage, or other irrelevancy, that did not launch unnecessary wars or generally treat foreigners as a resource to be mercilessly exploited, and that did not sanction the torturing of criminals, witnesses, and/or POWs as a matter of course. I doubt that there is even one; certainly there are not many.
Or, in one sentence: almost every society in history has done stuff that is uncontroversially evil—probably we are doing great evil too.
The mere fact that it doesn’t look to us like we’re doing great evil doesn’t tell us much. Societies rarely see the horrors they carry out, and eliminating them is typically outside the Overton window. In the first several thousand years of civilization, we don’t have a single record of anyone proposing the abolition of slavery. Moral catastrophes are like breath: you never smell your own.
The second argument is disjunctive. There are lots of conceivable ways one might do enormous evil. Thus, the odds that we’d manage to avoid doing evil in any of those ways are fairly low. If there were just one potential atrocity, it wouldn’t be too hard to avoid it. But when there are many different ones, even if there’s a 90% chance of avoiding any particular one, the odds we’d avoid them all are low.
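To make the arithmetic explicit (a worked example assuming a dozen independent candidate evils, each avoided with 90% probability):

$$P(\text{avoid all twelve}) = 0.9^{12} \approx 0.28, \qquad P(\text{at least one catastrophe}) = 1 - 0.9^{12} \approx 0.72.$$

Even fairly high per-risk confidence thus leaves a large aggregate chance of committing at least one atrocity.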
All sorts of errors could lead to a catastrophe. If we’re wrong about which beings matter, then probably we’ll neglect the interests of many morally important entities. If we’re wrong about which actions cause harm, this too could lead to catastrophe. Specific examples of possible moral catastrophes include:
Some of these I believe are genuine atrocities; others I don't. But given that there are so many—and this list is non-exhaustive—the odds that we'd get everything right, even though we generally don't do much moral reflection before taking hugely consequential actions, are low. The world did not reflect before building the first factory farm. If the world doesn't do much moral reflection before taking significant and irreversible actions, then what are the odds we'd be right about everything morally?
What are the takeaways from this?
A first one is that the world should be awake to this very serious moral possibility. If we might be, as a society, doing evil on the order of owning slaves, that merits extremely careful reflection. In a slave-owning society, ending slavery would be much more important than whichever random political issues dominate the newspapers. Ending our society’s atrocities is similarly a much bigger deal than the comparatively unimportant issues that we tend to discuss.
This is a reason for students to take philosophy classes in school. It’s a reason for society to broadly reflect on the atrocities we might be committing. If you can’t point to a specific atrocity that we’re committing—likely one outside the political Overton window—probably that isn’t because we’re not committing any. It’s instead because you can’t see the ones we are committing.
It’s also a reason to take seriously precautionary principle reasoning. My best guess is that abortion isn’t murder. But if there were some way to majorly reduce the number of abortions at low cost, I’d strongly support that. Even a 5% chance that society is butchering millions of babies is cause for alarm.
I wish meat-eaters would treat the issue of factory farming with the same kind of seriousness, rather than the all-too-common blasé attitude of mockery and scorn. If your response to someone suggesting that eating meat is morally horrendous is to say “haha, I’m going to eat twice as much meat to offset your impact,” then that is an extreme failure to take morality seriously. One who behaves in such a way has revealed themself not to be a serious person.
There is a common throughline in past atrocities. Most of them have come from a moral circle that was too limited. Slavery and conquest were tolerated only because the victims' interests were neglected. Reflection on our past errors should lead us to expand our moral circle to include the interests of every being who has interests. If other societies did evil because they discounted the interests of morally important sentient beings, we should suspect that we are probably doing so as well.
Lastly—and this one could turn out to be the most important one long-term—we should take the time to carefully reflect before taking hugely consequential actions. In a better world, there would have been careful reflection before building the first factory farm. We may be in the process of creating ungodly numbers of morally significant digital minds—with interests. We shouldn't do this blindly and let the fate of trillions be subject to the whims of profit-maximizing AI companies.
Similarly, before we allow a small number of powerful people to dictate how space resources are used, we ought to have a lengthy period of moral reflection. If we’re dictating the fate of the universe—making decisions that could reverberate for billions of years—we had better take the time to reflect carefully before proceeding. Otherwise, most value could be lost.
It’s easy to remark on the moral errors of past societies. But if we take seriously Williams’ arguments, then we must face up to the fact that we might be just as bad. Taking that seriously—taking seriously that we may be like slavers, genocidaires, and conquerors—ought to change how we see the world and make us less likely to carry out acts of unspeakable evil. The first step in dealing with a problem, after all, is recognizing its existence. In this case, the problem we face might eliminate most of what is good about the entire future, and it is time the world woke up.