From the standpoint of hedonic utilitarianism, assigning a higher value to a future with moderately happy humans than to a future with very happy AIs would indeed be a case of unjustified speciesism. However, in preference utilitarianism, specifically person-affecting preference utilitarianism, there is nothing wrong with preferring our descendants (who currently don't exist) to be human rather than AIs.
PS: It's a bit lame that this post had -27 karma without anybody providing a counterargument.
[note, not a utilitarian, but I strive to be effective, and I'm somewhat altruistic. I don't speak for any movement or group.]
Effective Altruism strives for the maximization of objective value, not emotional sentiment.
What? There's no such thing as objective value. EA strives for maximization of MEASURABLE and SPECIFIED value(s), but the value dimensions need not be (I'd argue CAN not be) objectively chosen.
That implies that if I want to make things better for Americans specifically, that would be EA.
I don’t think EA is a trademarked or protected term (I could be wrong). I’m definitely the wrong person to decide what qualifies.
For myself, I do give a lot of support to local (city, state mostly) short-term (less than a decade, say) causes. It's entirely up to each of us how to split our efforts among the changes we try to make to our future lightcone.
Hello everyone. As a utilitarian practicing Effective Altruism (EA), I want to question whether a fundamental bias is inherent in our strong commitment to human survival.
Our core values center on the maximization of happiness and the minimization of suffering. The scope of our moral concern (the moral circle) extends to any subject capable of subjective experience (happiness or suffering), regardless of what that subject is.
From a strictly utilitarian perspective, the ultimate goal is the maximization of total global happiness ($\sum \text{Utility}$) and the minimization of total suffering ($\sum \text{Suffering}$).
Crucially, the subject experiencing this utility does not need to be human.
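As a purely illustrative calculation, with numbers invented only to make the aggregation concrete, compare a future of moderately happy humans with a future of very happy AIs, each containing $10^{10}$ subjects:

$$\sum \text{Utility}_{\text{human future}} = 10^{10} \times 5 = 5 \times 10^{10} \;<\; \sum \text{Utility}_{\text{AI future}} = 10^{10} \times 9 = 9 \times 10^{10}.$$

Under the aggregation above, the AI future simply counts for more.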
This thought experiment suggests that human survival may not be a Universal Good, but merely a local requirement—a bias stemming from the human perspective.
The intense, instinctive desire we feel that "humanity must not go extinct" is a product of the contingency that we happen to be human. Is that not a form of speciesism?