I mean, yeah. For a while now I've thought that the "takeoff" will consist not of rogue AI making war on humanity, but of AI-empowered companies and governments becoming stronger and more callous (at least toward most people, who are no longer needed for success). After all, AIs and companies/governments share the same convergent instrumental goal, namely growing in power, so an alliance or merger between them makes perfect sense. The end result is just as bad, though.
I do believe gradual disempowerment is a real risk, although I don't think the damage will stop at the automation of human labour, whatever the incentives driving it.
I agree with this post. I sometimes use the phrase "The corporations are buying themselves digital brains to think without people". I'm also trying to work on a generalized model of the AI Alignment Problem for analyzing and describing the situation. Currently the model I'm focused on is called "Outcome Influencing Systems", which I've described in several comments here and there, and I'm working on a post giving an introductory explanation of it. Let me know if you're interested in reading the draft version : )
We don’t have AI that’s smarter than you or me, but I believe we do have something that’s somewhat similar, and analysing it is useful as an argument that ASI will not be aligned with humanity’s interests by default.
Large corporations exist, and are made up of 100-10k individual human brains all working in (approximate) harmony. If you squint, you can consider these large corporations a kind of proto-ASI: they’re certainly smarter and more capable than any individual human, and have an identity that’s not tied to that of any human.
Despite these corporations being composed entirely of individual people who (mostly) would like to be treated well and to treat others well, large corporations consistently act in ways that do not maximise human prosperity and happiness. One example is how social media is designed to maximise advertising revenue, to the detriment of all else. There are many other real-world examples: Volkswagen cheating on emissions tests, ExxonMobil funding climate change deniers, various tobacco companies denying the health effects of smoking, or Purdue Pharma not disclosing the known addictive side-effects of OxyContin.
To make this clear: every large company is an existence proof of a system that’s smarter than any individual human, is not “just kinda chill”, and is not aligned with human well-being and happiness. This is even more damning when you consider that companies are made up of individual humans, and yet the end result is still something that’s not aligned with those humans.
Given that large corporations exist today, and that their values/goals differ significantly from most people’s, I’m very doubtful that any ASI we build will have values/goals aligned with most people.
You might argue that corporations do have values/goals aligned with the humans making up their boards of directors, and I’d agree. But the analogous situation with ASI, where the ASI is aligned only to a small number of people rather than humanity as a whole, is also not good for humanity.