LessWrong has a large overlap between people who are interested in x-risk reduction and people who are aware of the Doomsday Argument. Yet the two seem incompatible with each other, so I'm going to ask about the elephant in the room:
What are your stances on the Doomsday Argument? Does it encourage or discourage you from working on x-risks? Is it a significant concern for you at all?
Do most people working on x-risks believe the Doomsday Argument to be flawed?
If not, it seems to me that avoiding astronomical waste is itself astronomically unlikely, which would downgrade x-risk reduction to a moderately important issue for humanity at best. From an individual perspective (or an altruistic perspective with future discounting), perhaps we should focus on having a good time before the inevitable doom? What am I missing?
But isn't the point of the Doomsday Argument that we'll need very very VERY strong evidence to the contrary to have any confidence that we're not doomed? Perhaps we should focus on drastically controlling future population growth to better our chances of prolonged survival?
To believe that you're a one-in-a-million case (e.g. in the first or last millionth of all humans), you need about 20 bits of information (because 2^20 ≈ 1,000,000).
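To make the arithmetic explicit (this is just the standard information-theoretic identity, not anything specific to the Doomsday Argument): the number of bits needed to single out a hypothesis of prior probability p is

$$\log_2 \frac{1}{p}, \qquad \text{so for } p = 10^{-6}: \quad \log_2 10^{6} \approx 19.93 \approx 20 \text{ bits}.$$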
So on the one hand, 20 bits can be hard to come by when a topic is difficult to get reliable information about. On the other, we regularly get more than 20 bits of information about all sorts of questions (reading this comment has probably given you more than 20 bits). So how hard this should "feel" depends heavily on how well we can translate our observational data into information about the future of humanity.
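A minimal sketch of the underlying update, assuming the standard Bayesian odds form (the function names here are mine, not the commenter's): each bit of evidence doubles the odds in favor of a hypothesis, so 20 bits is exactly enough to lift a one-in-a-million prior to roughly even odds.

```python
def posterior_odds(prior_odds: float, bits_of_evidence: float) -> float:
    """Each bit of evidence doubles the odds in favor of the hypothesis."""
    return prior_odds * 2.0 ** bits_of_evidence

def odds_to_probability(odds: float) -> float:
    """Convert odds (for : against) to a probability."""
    return odds / (1.0 + odds)

# Prior: 1-in-a-million odds of being in, say, the last millionth of all humans.
prior = 1 / 999_999  # odds of 1 : 999,999, i.e. probability of one in a million

for bits in (0, 10, 20, 30):
    p = odds_to_probability(posterior_odds(prior, bits))
    print(f"{bits:>2} bits of evidence -> P(hypothesis) = {p:.6f}")

# Output (approximately):
#  0 bits of evidence -> P(hypothesis) = 0.000001
# 10 bits of evidence -> P(hypothesis) = 0.001023
# 20 bits of evidence -> P(hypothesis) = 0.511856
# 30 bits of evidence -> P(hypothesis) = 0.999070
```

Note that 30 bits already pushes the posterior above 0.999, which is why the question of how many bits our evidence actually carries does the real work here.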
Extra note: in the case that there are an infinite number of humans, this uniform prior actually breaks down (otherwise you'd naively conclude you have a 0% chance of being anyone at all), so the possibility that there are infinitely many people has to be treated differently and can still make a finite contribution to the calculation.
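For concreteness, here is the textbook reason the uniform prior fails in the infinite case (standard measure theory, not something from the original comment): if every one of countably many birth ranks were assigned the same probability c, then

$$\sum_{n=1}^{\infty} c = \begin{cases} 0 & \text{if } c = 0, \\ \infty & \text{if } c > 0, \end{cases}$$

so no choice of c makes the probabilities sum to 1, and hypotheses with infinitely many people have to be weighted by some non-uniform rule instead.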