It's basically UDT applied to anthropic problems, where the approach can be justified using fewer, and more natural, assumptions than UDT itself requires.
Nice suggestion! Thanks! Highly recommended.
If we fear the Great Filter, we should not look at risks whose probabilities are high, but at risks whose uncertainty is high, where the probability of our making an error is high.