From what I've seen in discussions about the future of humanity, the following outcomes are projected, from worst to best:
1. Collapse of humanity due to AGI taking over and killing everyone
2. Collapse of humanity, but some humans remain alive in "The Matrix" or a "zoo" of some kind
3. Collapse due to existing technology like nuclear bombs
4. Collapse due to climate change, a meteor, a mega-volcano, or an alien invasion
5. Collapse due to exhausting all useful elements and sources of energy
6. Reversal to pre-20th-century levels and stagnation, due to a combination of 2/3/4
7. Stagnation at 21st-century-level tech
8. Stagnation at a non-crazy level of tech - say, good enough to have a colony on Pluto, but with no human ever leaving the Solar System
9. Interstellar civ based on "virtual humans"/"brains in a jar"
10. Interstellar civ run by a fully aligned AGI, with human intelligence so weak that it basically plays no role
11. Interstellar civilization primarily based on human intelligence (Star Trek pre-Data?)
12. Interstellar civ based on humans+AGI working together peacefully (Star Trek's implied future, given Data's evolution?)
13. Multiverse civilization of some kind (Star Trek's Q?)
Is this ranking approximately correct? If so, why do we care so much whether "AGI" or "virtual humans" end up ruling the universe? Does it make a difference if the AGI is based on human intelligence rather than on some alien brain structure, given that biological humans will stagnate or die out in both cases? Or are "virtual humans" just as bad an outcome, falling into the same bucket as "unaligned AGI"? What goal are we truly trying to optimize here?