There is no contradiction between AI carrying huge potential risks and carrying huge potential upsides if we navigate those risks. Both follow from the prospect of AI becoming extremely powerful.
The benefits that human-aligned AGI could bring are a major part of what motivates researchers to build such systems. Things we can confidently expect based on extrapolating from current technology include:
Examples that are more speculative, but seem physically feasible, include:
But to get these benefits, we’d have to ensure not only that these systems are used for good rather than for ill, but that they can be human-aligned at all. We’d have to get our act together, and many experts worry that we’re currently on a path to failure.