dollar in my DAF is approximately as good as a dollar in my normal bank account for making the world a better place.
Well, one of them reduced your tax burden when you deposited it. The comparison should be $DAF ~= $(cash - taxes).
Also, there's some controversy about whether political spending is actually altruistic. I tend to lean toward being restrictive in my giving - not even most registered charitable organizations make my cut for making the world better, and almost no political causes do.
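To put rough numbers on that (a minimal sketch; the 35% marginal rate is hypothetical and the contribution is assumed fully deductible):

```python
# Sketch of the $DAF ~= $(cash - taxes) comparison.
# Assumes a hypothetical 35% marginal tax rate and a fully deductible contribution.
marginal_tax_rate = 0.35  # assumption; varies by person and jurisdiction

daf_dollar = 1.00
tax_savings = daf_dollar * marginal_tax_rate
after_tax_cost = daf_dollar - tax_savings  # what that DAF dollar actually cost you

print(f"$1.00 in the DAF cost ${after_tax_cost:.2f} of spendable cash")  # -> $0.65
```

So comparing dollar-for-dollar overstates what the DAF dollar cost you.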
All-or-nothing, black-or-white thinking does not serve well for most decisions. Integrating value-per-unit-time is a much better expected-value methodology.
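To spell that out (my notation, nothing official): rather than scoring an outcome as all-or-nothing, score it as

$$V = \int_0^{T} \mathbb{E}\big[v(t)\big]\,dt$$

where $v(t)$ is value per unit time and $T$ is the horizon. Anything that raises $v(t)$, or extends $T$ while $v(t)$ stays positive, increases $V$.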
What difference does it make whether I die in 60 years or in 10,000? In the end, I’ll still be dead.
What difference does it make whether you die this afternoon or in 60 years? If life has value to you, then longer life (at an acceptable quality level) is more valuable.
[Note: I'm not a utilitarian, but I strive to be effective, and I'm somewhat altruistic. I don't speak for any movement or group.]
Effective Altruism strives for the maximization of objective value, not emotional sentiment.
What? There's no such thing as objective value. EA strives for maximization of MEASURABLE and SPECIFIED value(s), but the value dimensions need not be (I'd argue CAN not be) objectively chosen.
I don't see much disagreement. My comment was intended to generalize, not to contradict. Other comments seem like refinements or clarifications, rather than a rejection of the underlying thesis.
One could quibble about categorization of people into "bad" and "nice", but anything more specific gets a lot less punchy.
Put another way: everyone underestimates variance.
Implicit in this argument is that the path of human culture and the long-term impact of your philanthropy are sub-exponential. Why would that be so? If there's no way to donate NOW to things that will bloom and increase in impact over time, why would you expect that to be different in 50 years? If you prioritize much larger impact when you're dead over immediate impact during your life, you should find causes that match your profile.
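A toy version of that comparison (both growth rates are hypothetical assumptions):

```python
# Toy now-vs-later comparison. If a cause's impact compounds faster than your
# investments, giving now beats investing and giving at death.
def terminal_impact(donation: float, impact_growth: float, years: int) -> float:
    """Impact at the horizon if the donation's effect compounds annually."""
    return donation * (1 + impact_growth) ** years

horizon = 50
market_return = 0.05   # assumed return while you hold the money
impact_growth = 0.07   # assumed compounding of a well-chosen cause's impact

give_now = terminal_impact(1_000, impact_growth, horizon)
give_at_death = terminal_impact(1_000 * (1 + market_return) ** horizon, impact_growth, 0)

print(f"give now:      {give_now:,.0f}")   # ~29,457
print(f"give at death: {give_at_death:,.0f}")  # ~11,467
```

Give-now wins exactly when impact_growth exceeds market_return, which is the crux of the disagreement.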
"people who would reflectively endorse giving control over the future to AI systems right now".
"right now" is doing a lot of work there. I don't believe there are any (ok, modulo the lizardman constant) who reflectively want any likely singularity-level change right now. I do believe there is a very wide range of believable far-mode preferences and acceptances of the trajectory of intelligent life beyond the immediate.
Neither upvoted nor downvoted - I'm happy that you're thinking about these topics, but I don't think this goes deep enough to be useful. It misuses word definitions to prove things that aren't true.
All world leaders want to do good things. Their values are to do the most good. They just disagree on what the most good is!
Nope. All humans (including leaders) want many conflicting things, which they then try to justify as "good". The label "good" is following, not leading, their desires.
- All wars are because one side thinks X is good, another side thinks X is bad, and both sides are willing to fight for what they believe in to stop X or to keep X.
Perhaps, but see above. "good" is poorly-defined and "good for me and my people" is not even theoretically compatible among different entities.
- All cooperation is because two sides think X is good/moral, so they work together to get X!
Otherwise, one side wouldn’t want X, and they wouldn’t both work to get it.
Not at all. A LOT of cooperation and trade is because two sides think they're better off, without any agreement that either result is "good". Or maybe it's true, but only if you define "good" as "what each trader in an agreement wants".
- And all bad decisions are because someone’s goals/values led them to think “I should do this bad thing.”
I can't tell if you're saying "all decisions are because someone's goals/values led them to think 'I should do this thing'", or if you're saying only decisions to pursue things that are bad (to someone) fall in this category. On either reading, the claim is either incorrect or tautological.
There's also a saying of "don't try to teach a pig to sing - it wastes your time and annoys the pig". It seems like you could investigate the porcine valence correlation using similar methods.
I don’t think EA is a trademarked or protected term (I could be wrong). I’m definitely the wrong person to decide what qualifies.
For myself, I do give a lot of support to local (city, state mostly) short-term (less than a decade, say) causes. It’s entirely up to each of us how to split our efforts among all the changes we try to make in our future lightcone.