Every now and then, I hang out with people who tell me they're doing Effective Altruism. I won't claim to have read even the standard introductory texts; the copy of Ord's The Precipice gifted to me still gathers dust on the shelf, while the ever-practical collection of Nietzsche's works still feels worth revisiting every now and then. That by itself should tell you enough about me as a person. Regardless, for a long time I thought I knew what it was about, namely Purchasing Fuzzies and Utilons Separately, along with some notion of utility meaning something more than just the fuzzies. I already fully subscribed to the idea of effectiveness in general. Altruism, on the other hand, seemed like something confused people did for the wrong reasons and others pretended to do for optics. A typical cynical take, you might say.
One of the most powerful moves in status games is denying that you're playing the game at all. This is how you notice people way above your level. Or way below, but missing that is rare. For a long time I thought EA was about this: publicly claiming that you're only optimizing for the most effective use of resources, to avoid falling into the trap of spending most of the money on visibility or awareness campaigns and such. Simply working highly-paid jobs and silently donating that money to the best charities one could find. This turns out not to be true; instead, optimizing the ratio between long-term and short-term benefits is one of the key concepts. This clearly is the effective way to do things, but I've got something against telling other people what's morally right. Then again, it's just my intuition and is based on nothing in particular. Just like every moral philosophy.
Passing the Ideological Turing Test is a good way to check that you understand the points of people with a different world view. After using the cynically-playful "effective anti-altruism" as one of my you-should-talk-to-me-about topics on the LWCW 2024 names & faces slides, some people (you know who you are) started occasionally referring to me as the "anti-EA guy". After such an endorsement, it would be prudent to verify I actually know what I'm talking about.
So for the next part I'm going to do a short-form self-Q&A-style steelmanning attempt of EA. My cynical side will be asking the questions, where the ones I found online won't suffice. Needless to say, I hope, is that I don't necessarily believe the answers; I'm just trying to pass the ITT. I've timed about an hour to write this, so if some argument doesn't get addressed, blaming the time pressure will be my way to go.
Suffering is bad. It's universally disliked. Even the knowledge that someone else suffers causes suffering. We should get rid of it.
Avoiding physical pain isn't everything. The scale certainly goes above zero, and we should climb up. Maslow's Hierarchy of Needs explains quite well what we should aim for. For instance, creativity and self-actualization are quite important.
I aim to be the kind of decision theoretic agent that would minimize their suffering given any set of starting circumstances. Rawls's veil of ignorance explains this quite well.
There's another side to this, too. My first introduction to the topic likely was this:
Disclaimer: It's your personal responsibility to rise above the ethical standards of society. Laws and rules provide only the minimal groundwork for culture. You enjoy the fruits of labor of countless generations of human progress before you, therefore it's only fair to contribute to the common good. Together we can make it happen.
I still find that quite persuasive.
Almost nobody sees themselves as evil. They're doing their best too, and sadly they're starting from a worse position, knowledge-wise. We still hold the same underlying values, or at least would if we were capable of perfect rationality. Sadly, nobody is, so the coordination issues look like value differences.
And that is indeed a tragedy. Welcome to Earth.
-Yudkowsky: Are Your Enemies Innately Evil?
"Destroy what you love, and I grant you victory!", bellows Moloch. How about no?
Sure, we should be robust against others using our generosity against us. It's allowed, and sometimes necessary, to be ruthless, cold and calculating, outcompeting others; if and only if the underlying values are retained. Still, even if your actions are indistinguishable from theirs, the overhead of maintaining your values is expensive. Which means that by default you'll lose, unless you can build a coalition against racing-to-the-bottom, and indeed, against nature itself.
Our top priority at the moment. If it were up to me, almost all other efforts would be redirected to work on this. But people already working in other areas have great momentum and are doing quite a bit of good. It doesn't make much sense to cease that work, as efficient reallocation of resources seems unlikely.
Not a good idea. It goes against my sense of aesthetics, but so do many other things that are actually worth it. It would be a huge mistake to do this before we have fully automated resource-efficient research and space exploration. But after that? Still a mistake.
Yudkowsky writes about this in High Challenge:
The reason I have a mind at all, is that natural selection built me to do things—to solve certain kinds of problems.
"Because it's human nature" is not an explicit justification for anything. There is human nature, which is what we are; and there is humane nature, which is what, being human, we wish we were.
But I don't want to change my nature toward a more passive object—which is a justification. A happy blob is not what, being human, I wish to become.
I fully agree.
That's a hard one. It does partially depend on the exact numerical facts of the situation, which Yudkowsky omits. If the prospects of humankind's long-term survival look decent, then the diversity of experience should probably be valued highly enough to refuse the deal. But if that occurred tomorrow, modified to match our current situation of course, I would definitely go with the cooperative option.
No. Even if the current world wasn't worth having, there's potential to create so much good in the future. And it's not like we could do that anyway anytime soon.
Obviously. But that might not be the best use of your resources right now. Enough advocacy is already done by others, focus on places where more impact is possible.
Yes. Not currently feasible, but it might be in the future.
It does, but with sufficient technology we can mostly avoid this. The predator will have the exact same experience, it's just that there won't be any real prey that suffers.
A good idea. Not even that expensive if done at scale. Currently not sensible to fund compared to the existential threat from AI, though.
Just writing this changed some of my views. Scary stuff.
I might update this in the future if I think I'm missing something important.