the pathways to AI x-risk ultimately require a society where relying on — and trusting — algorithms for making consequential decisions is not only commonplace, but encouraged and incentivized
this is wrong, of course. the whole point of the alignment problem, the thing that makes AI doom a distinct kind of risk, is that a sufficiently capable AI takes over the world on its own just fine. it does not need us to put it in charge of our institutions; it takes over everything regardless of our trust or consent.
all it takes is one team, somewhere, to build the wrong piece of software, and a few days to months later all life on earth is dead forever. AI doom is not a function of adoption; it's a function of the probability that, on any given day, some team builds the thing that will take over the world and kill everyone.
(this is why i think we lose control of the future in 0 to 5 years, rather than much later)
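a minimal sketch of the arithmetic behind this, assuming (purely for illustration, not anything claimed above) a constant daily probability $p$ that some team builds the thing:

$$
P(\text{doom within } T \text{ days}) = 1 - (1 - p)^T
$$

at, say, $p = 0.001$ per day, the cumulative probability crosses 50% around day 693, just under two years in. even a small daily hazard rate concentrates the risk into the near term rather than the far future, and adoption appears nowhere in the formula.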
The Gradient is a “digital publication about artificial intelligence and the future,” founded by researchers at the Stanford Artificial Intelligence Laboratory. I found the latest essay, “The Artificiality of Intelligence,” by a PhD student at UC Berkeley, to be an interesting perspective from the AI ethics/fairness community.
Some quotes I found especially interesting: