Thanks. Side note (I posted another comment about this just now, because it just clicked for me this morning): I think Ben Hoffman thinks (or thought, when he wrote his blog posts) that when you treat more malaria cases or do other philanthropy, marginal cost goes down. He says:
> If we assume that all of this is treatable at current cost per life saved numbers - the most generous possible assumption for the claim that there's a funding gap
When in fact it's the least generous, under the assumption that marginal cost goes up. If you think marginal costs will only go down from current levels as we scale, then it is indeed suspicious that nobody's decided to just dump all their money into scaling.
I think I've found a crux that makes sense of things today which didn't make sense to me yesterday, as I was reading the first linked blog post. When trying to think about the existence or nonexistence of a funding gap, Ben says:
> If we assume that all of this is treatable at current cost per life saved numbers - the most generous possible assumption for the claim that there's a funding gap
And my brain skipped a beat, and went "no, the opposite, that's the least generous possible assumption. As we treat more, the next treatment becomes more expensive, not less. Maybe that's what he meant to say?" And then the rest of the blog post just seemed sorta wrong. I was like "maybe he doesn't get that this is marginal cost?". But that's not it, I realized as I was lying in bed thinking about this.
If you've been steeped enough in standard economics, which talks about producing widgets, your mental association between "marginal cost" and "as you produce more, marginal cost goes down" is strong, because usually there are economies of scale which mean that's what happens. From memory, I recall reading that every time the number of solar panels produced doubles, the cost per panel goes down by 20%, for example.
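For concreteness, that learning-curve pattern looks something like this (a minimal sketch; the 20%-per-doubling figure is from my memory, so treat the numbers as purely illustrative):

```python
# Learning-curve sketch: unit cost falls by a fixed fraction each time
# cumulative production doubles. The 20% rate is the half-remembered figure above.
def cost_after_doublings(initial_cost: float, doublings: int, learning_rate: float = 0.20) -> float:
    """Unit cost after `doublings` doublings of cumulative production."""
    return initial_cost * (1 - learning_rate) ** doublings

# A $100 panel after 5 doublings of cumulative production:
print(cost_after_doublings(100, 5))  # ~32.8, i.e. roughly a third of the original cost
```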
If you're thinking "$5,000 is the current marginal cost, and therefore the most that saving a life with a malaria net will ever cost; the more life-saving we do, the less it will cost", then everything else he says follows. The right strategy for someone with a lot of money would be to scale this intervention to the max and drive costs down. And the fact that billionaire philanthropists aren't doing this is suspicious, and suggests that they don't really believe the conclusions Givewell has put out. EDIT TO ADD: And the fact that marginal costs have been going up over time as we do more philanthropy, rather than being taken as exactly what you would expect to happen, suggests that Givewell got its initial estimates wrong, and that what's happening is that as we learn more, we learn that everything is more expensive than we thought. There are not in fact any cheap drowning children to save, and never were.
If instead you think of the marginal cost as "the cost to pick the lowest-hanging fruit, which will go up once we pick that fruit", then you reach different conclusions.
An analogy:
We are in the "you can save a drowning child for an affordable price" world. In this world (or a hypothetical one for the purpose of this analogy), 1,000 infants are being dumped in a large lake per day. Some of them are right by the shore, easy to get to like the drowning child thought experiment postulates; some are out in deeper water. I'm a strong swimmer, and could save any of those infants, but I can't save all of them by myself, and if I try to save as many as I can today, I will exhaust and potentially injure myself, meaning I can save fewer tomorrow. I estimate I could save 100 today if I put forth all the effort I can, saving the ones that are easier to reach first. But there are some people who aren't as good at swimming as I am, so maybe I should swim further out, save 10, and hope that the 90 easier ones I could have saved but didn't get taken up by others. But in that case, I'm still exhausted, and can do less tomorrow. So maybe I save 5 every day, a mix of easy ones and harder ones, to demonstrate what's possible and encourage others to join in, and spend some of my energy and time trying to figure out how to solve the underlying problem here, rather than just pulling people out of the lake. And then someone says "looks like it's not true that it's easy to save a drowning child; if it were true this guy would have saved more lives". And what it looks like to "go hard" at this problem is not obvious, but "save a few, leave room for others, try to solve the underlying problem" is one plausible strategy. And because I can't save them all, "I need more people to help with this" is a thing I should say to anyone who will listen.
It seems like 3 things are simultaneously true:
1) It's not possible to eradicate malaria for $5,000/life saved (that's the marginal cost, approximately - a ballpark estimate with lots of wiggle room in the number). This generalizes to all other currently known interventions to make people's lives better at low cost - it's relatively cheap now, but one should expect that saving the last life that would otherwise have been lost to malaria, or helping the last person who can be helped with some other intervention that is currently near the best marginal cost, will cost a lot more than $5,000. I feel like Givewell et al. are clear about this, or at least this is the understanding I came away with when reading their stuff, rather than a surprise I discovered from other sources.
2) If we did assume that we could save all the savable lives for $5,000 apiece per year (again, this is false, but it's a way of ballparking how much money is definitely not enough to solve all problems), taking the number of 10 million deaths per year from communicable disease (referenced in Ben's blog post, and noting that "communicable disease" and "all problems" are not the same) at face value, we get $50 billion per year (the arithmetic is sketched just after this list). As I understand it, the amount of philanthropic wealth to eventually give away in places like the Gates Foundation and Good Ventures and affiliates is on the order of tens to hundreds of billions of dollars, not trillions. So spending $50 billion/year is not sustainable, and even at the marginal cost of $5,000 per life saved, there's still going to be a long term funding gap.
3) Having recently done a deep dive into Givewell's recommendations for a presentation to a local community group, I notice that their recommendations and methodologies are still developing. The basic ideas like "there are opportunities in poor countries that don't exist in rich countries" and "malaria is bad, and helping with it on the margin is cheap" have stayed the same, but they flag lots of places where their ways of analyzing things have changed over the past 5-10 years, and, as a general rule, for any page that hasn't been updated in a few years they put a disclaimer at the top that the analysis may not reflect the present situation or Givewell's best understanding. Also, new cause areas in EA come up every few years, and there are a lot of "more research needed" tags around most conclusions. My takeaway is that we know a lot more now about how to use money to do good than we did 10 years ago, and I expect that 10 years from now we'll have similarly improved our knowledge base, and may have uncovered better opportunities than are currently known. So it makes sense to hold some money in reserve pending that further research.
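To make the ballpark in 2) explicit (a minimal back-of-envelope sketch; the $100 billion endowment figure is just an illustrative stand-in for "tens to hundreds of billions"):

```python
# Back-of-envelope for point 2, using the knowingly-false assumption that every
# death is preventable at the current marginal cost per life saved.
deaths_per_year = 10_000_000      # communicable-disease deaths/year, from Ben's post
cost_per_life = 5_000             # ballpark current marginal cost per life saved

annual_cost = deaths_per_year * cost_per_life
print(f"${annual_cost / 1e9:.0f} billion per year")      # $50 billion per year

# Even a very large endowment burns down quickly at that rate.
endowment = 100e9                 # $100 billion, illustrative order of magnitude
print(f"{endowment / annual_cost:.0f} years of runway")   # 2 years
```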
With this as background, if I try to put myself in the shoes of someone who has $10 billion and wants to do maximally good things with it, it makes sense not to spend it all immediately on malaria, even though that would save a bunch of lives. And it makes sense not to fund the crap out of all the marginally most effective interventions until all my money is gone this year, driving the marginal cost of various outcomes up to some multiple of its current level, so that everyone else's contributions to this effort look correspondingly less useful than they did last year - and then my money runs out, all the orgs that got a bump in funding are scrambling for next year's funds, the rest of the philanthropic community feels less interested, and chaos ensues. Clearly a suboptimal approach.
Just out of curiosity, I googled how much has been spent on polio eradication (almost, but not quite, complete), and estimates are on the order of $20 billion over the past 37 years (not adjusted for inflation), with the 2022-2029 budget being $7 billion (so the last few cases are going to be super expensive). So I would be unsurprised if solving all communicable disease had a cost on the order of hundreds of billions to trillions of dollars, and couldn't be done tomorrow or this year even if I had $100 trillion to spend on it.
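The spend-rate arithmetic behind that parenthetical (same ballpark, non-inflation-adjusted figures as above):

```python
# Rough spend-rate comparison: polio spending per year is rising even as
# annual case counts approach zero, so cost per remaining case is climbing steeply.
historical_spend = 20e9   # ~$20 billion over ~37 years
historical_years = 37
endgame_budget = 7e9      # 2022-2029 budget
endgame_years = 8

print(f"historical: ~${historical_spend / historical_years / 1e9:.2f}B per year")  # ~$0.54B/year
print(f"endgame:    ~${endgame_budget / endgame_years / 1e9:.2f}B per year")        # ~$0.88B/year
```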
This seems to be a core of your argument:
> The big weird thing is that it seems like difficulties were found in the early picture of how much good was in fact being done thru these avenues, and this was quietly elided, and more research wasn't being done to get to the bottom of the question, and there's also various indicators that EA orgs themselves didn't really believe their numbers for how much good could be done.
> ...
> Givewell advising Open Phil to not fully fund things is the main "it seems like the parties upstream of the main message don't buy their main message enough to Go Hard at it".
Whereas to me: I don't see evidence that difficulties were found in the cost effectiveness estimates (it was never claimed that the marginal cost could just be multiplied by the number of cases to get a total cost); that advice from GiveWell makes sense, doesn't seem weird, and doesn't seem like evidence that the parties upstream of the main message don't buy it. "Going hard" with $10-100 billion, when the total cost to solve all extreme problems is going to be trillions of dollars and tens of years at least, and we're constantly learning as we go, looks (to me) like approximately what the $10-billion philanthropists are doing. At least, a better strategy, if Givewell's recommendations are basically sound, doesn't immediately occur to me.
> If most failures of rationality are adaptively self-serving motivated reasoning
I would say that most failures of rationality were adaptive in the ancestral environments, but I wouldn't say they all count as "motivated reasoning".
Simple example: thinking you see a snake in the grass, and responding as if there is a snake in the grass, in the presence of ambiguous stimuli that have only a 10% chance of being a snake, could well result in more surviving offspring than a more nuanced, likely slower, and closer-to-correct estimate of the probability that there is a snake. But this is not a result of motivated reasoning where someone is advocating for their interests; it's just a hack that our evolved brains have for keeping us alive using minimal energy and time for computation, because calories were scarce and snakes sometimes move quickly.
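A toy expected-cost version of that example (all numbers made up purely for illustration):

```python
# Toy comparison of two policies when the rustle in the grass is ambiguous.
p_snake = 0.10       # chance the ambiguous stimulus really is a snake
cost_flee = 1        # small cost of jumping away for nothing (wasted time/calories)
cost_bite = 1000     # large cost of getting bitten (standing in for injury or death)

always_flee = cost_flee                  # pay the small cost every time
stop_and_estimate = p_snake * cost_bite  # expected cost of waiting to be sure (ignoring thinking costs)

print(always_flee, stop_and_estimate)    # 1 vs 100: the fast, "wrong" heuristic wins by a wide margin
```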
My understanding is that many failures of rationality are adaptive in this way: they trade off getting the right answer against energy and time costs, where "right" means the answer that will lead to the most offspring in future generations, not necessarily the answer that would count as "winning" by the lights of the human involved - evolution doesn't care a whit whether I feel like I've won or lost, or advanced what I see as my interests, as a result of something my brain is biased towards or away from. One thing that could be different now is that the situations where we will starve are fewer, and the time we have to think before deciding what to do is often greater.
Motivated reasoning is a specific, relatively small subset of the biases our brains are subject to, not the main handle for biases in general, I think?
> The key intuition about the future might be simply that humans being around is an incredibly weird state of affairs. We shouldn't expect it to continue by default.
I mean, yes this seems right. In which case, taking it as a premise that this weird state doesn't last long, it follows that there's no point trying to plan for a future where human-like things continue to exist. BUT: from where we stand right now, we do actually have some control over whether everybody dies and nothing human-like continues into the future. The simplest plan to avoid extinction by AI is "don't build the thing that kills us", but there are more sophisticated options too. As unlikely as it was for such a situation to arise in the first place, as weird as it is to be here, here we are. And we can try to aim, from here, for a future state that is vanishingly unlikely to happen by chance or by default, such as "not human extinction".
I think the speculation about owning galaxies starts from the assumption that we succeeded in aiming the future in such a direction. And although that assumption may not be what actually happens, it would be unfortunate to get to that future state and then not have thought through what to do next, because we didn't think it was likely and so never planned for the possibility.
The whole thing people are doing when they're talking about good futures and how to get there is a process of trying to design a path towards an unlikely future - one that is emphatically not the default outcome without humans trying to make it happen.
How rare good people are depends heavily on how high your bar for qualifying as a good person is. Many forms of good-person behaviour are common; some are rare. A person who has never done anything they later felt guilty about (assuming they have a functioning conscience) is exceedingly rare. In my personal experience, I have found people to vary on a spectrum from "kind of bad and selfish quite often, but feels bad about it when they think about it and is good to people sometimes" to "consistently good, altruistic and honest, but not perfect, and may still let you down on occasion", with rare exceptions falling outside this range.
Also, if it is true that a lot of people are confused by good and courageous people, I am unclear where the confusion comes from. Good behaviour gets rewarded from childhood, and bad behaviour gets punished. Not perfectly, of course, and in some places and times very imperfectly indeed, but being seen as a good person by your community's definition of "good" has many social rewards, and we're social creatures... I am unclear where the mystery is.
Were the confused people raised by ~~wolves~~ non-social animals?
I don't actually buy the premise that a lot of people are confused by moral courage, on reflection.
This doesn't match my experience of what good people are generally like. I find that they are often happy to do what they are doing, rather than extremely afraid of not doing it, as I imagine would be the case if their reasons for behaving as they do were related to avoidance of pain.
There are of course exceptions. But if thinking I had done the wrong thing were extremely painful to me, literally "1000x more than any physical pain", I predict I'd quite possibly land on the strategy "avoid thinking about matters of right and wrong, so as to reliably avoid finding out I'd done wrong." A nihilistic worldview, in which nothing is right or wrong and anything I might do is fine, would be quite appealing. Also, since one can't change the past, any discovery that I'd done something wrong in the past would be an unfixable, permanent source of extreme pain for the rest of my life. In that situation, I'd probably rationalize the past behaviour as somehow being good, actually, in order to make the pain stop... which does not pattern-match to being a good person long term, but rather the opposite: being someone who is pathologically unable to admit fault, and has a large bag of tricks to avoid blame.
Retracted, because after conversation on Ben's blog, it turns out it's not a matter of him thinking marginal costs will go down with scale - rather, he treats "funding gap" more strictly than I would, as something like "the amount of money charities can absorb and deploy at approximately present marginal costs per life saved". And if you define the funding gap as "the amount of money that can be deployed at approximately present marginal costs, or better", then assuming that all of it is treatable at current cost per life saved numbers is indeed the most generous possible assumption for the claim that there's a funding gap.