I have an article up at H+ Magazine explaining how anthropic reasoning warns us of existential risks. I have covered this before, more briefly and more technically, here, and in much greater detail in my honours thesis.
