trurl · 3y · 30

I agree with this, but I think I'm making a somewhat different point. 

An extinction event tomorrow would create a great deal of certainty, in the sense that it fixes the future outcome. But the value of that event is still highly uncertain, because the sign of the future it curtails is unknown. A bajillion years is a long time, and I don't see any reason to presume that a bajillion years of increasing technological power and divergence from the 21st-century human experience will be positive on net. I hope it will be, but I don't think my hope resolves the sign uncertainty.

trurl · 3y · 10

I agree that longtermist priorities also tend to be beneficial in the near term, and that sign uncertainty is perhaps a more central consideration than the initial post lets on.

However, I do want to push back on the voting example. I think the point about small probabilities mattering in an election holds if, as you say, we assume we know who the best candidate is. But it seems unlikely to me that we can ever have that kind of sign certainty over a longtermist time horizon.
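To make that concrete, here is a minimal expected-value sketch (p, q, and V are my own illustrative symbols, not anything from the original post): let p be the probability that your vote is decisive, V the magnitude of the long-run difference between the candidates, and q the probability that the candidate you favor really is the better one over that horizon. Then

$$\mathbb{E}[\text{voting}] = p\,\bigl(qV + (1-q)(-V)\bigr) = p\,(2q - 1)\,V.$$

When q is near 1, the familiar "tiny p times enormous V" argument goes through. But as q falls toward 1/2, the factor (2q - 1) pushes the expectation toward zero no matter how large V is, which is the sense in which sign uncertainty, rather than small probability, looks like the binding constraint.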

To illustrate this, I'd like to reconsider the voting example over a long time horizon. Can we ever know which candidate is best for the long-term future? Even if we imagine a highly incompetent or malicious leader, the flow-through effects of that person's tenure in office are highly unpredictable over the long term. For any bad leader you can identify from the past, a case could be made that the counterfactual in which they never held power would have turned out worse. And that's only over years, decades, or centuries. If humanity has a very long future, the long-term impacts are much, much more uncertain than that.