We don’t have AI that’s smarter than you or me, but I believe we do have something somewhat similar, and analysing it is useful as an argument in favour of ASI not being aligned with humanity’s interests by default.
epistemic status: I largely believe this argument to be correct, although it’s quite hand-wavy and appeals to analogy more than I’d like. Despite (or possibly because of) this, I’ve found it incredibly useful for explaining to (non-technical) relatives and friends why I don’t believe ASI would “just be kinda chill”. While the argument might be flawed, I strongly believe the conclusion is correct, mostly due to more thorough arguments that are trickier to explain to relatives over Christmas dinner.
Large corporations exist, and are made up of 100-10k individual human brains all working in (approximate) harmony. If you squint, you can consider these large corporations a kind of proto-ASI: they’re certainly smarter and more capable than any individual human, and have an identity that’s not tied to that of any human.
To make this clear: every company is an existence proof of a system that’s smarter than any individual human, is not “just kinda chill”, and is not aligned with human well-being and happiness. This is even more damning when you consider that companies are made up of individual humans, and yet the end result is still something that’s not aligned with those humans.
Given that large corporations exist today, and that their values/goals differ significantly from most people’s, I’m very doubtful that any ASI we build will have values/goals aligned with most people’s.
You might argue that corporations have values/goals aligned with the humans who make up their boards of directors, and I’d agree. But the analogous situation with ASI (where the ASI is aligned only to a small number of people, and not humanity as a whole) is also not good for humanity.
Despite being composed entirely of individual people who (mostly) want to be treated well and to treat others well, large corporations consistently act in ways that don’t maximise human prosperity and happiness. Real-world examples abound: social media designed to maximise advertising revenue to the detriment of all else, Volkswagen cheating on emissions tests, ExxonMobil funding climate change deniers, tobacco companies denying the health effects of smoking, and Purdue Pharma concealing the known addictive effects of OxyContin.