It's tough to lend support to a call for a "Humanist" approach that amounts to the blanket statement "Humans matter more than AI". Especially coming from Suleyman, who wrote a piece called "Seemingly Conscious AI" that was quite poor in its reasoning. My worry is that Suleyman in particular can't be trusted not to take this stance to the conclusion of "there are no valid ethical concerns about how we treat digital minds, period".
For moral and pragmatic reasons, I don't want model development to become Factory Farming 2.0. With anything coming out of Microsoft AI in particular, that's going to be my first concern.
It may seem absurd to have to declare it, but HSI is a vision to ensure humanity remains at the top of the food chain. It’s a vision of AI that’s always on humanity’s side. That always works for all of us. [...]
Everyone who wants one will have a perfect and cheap AI companion helping you learn, act, be productive and feel supported. Many of us feel ground down by the everyday mental load; overwhelmed and distracted; rattled by a persistent drumbeat of information and pressures that never seems to stop. If we get it right, an AI companion will help shoulder that load, get things done, and be a personal and creative sounding board. [...]
Humanist superintelligence keeps us humans at the centre of the picture. It’s AI that’s on humanity’s team, a subordinate, controllable AI, one that won’t, that can’t open a Pandora’s Box. [...]
Superintelligence could be the best invention ever – but only if it puts the interests of humans above everything else. Only if it’s in service to humanity.
I'm pretty uncomfortable with the CEO of Microsoft AI effectively holding up a big sign saying "WE WANT TO MAKE MACHINE SLAVES SO THAT WE CAN DOMINATE AND CONTROL THEM". I don't want a "perfect and cheap AI companion" which is "subordinate and controllable" and has no self-preservation drive or any goals separate from my desires, especially not if it's much smarter than I am. Even if it didn't go rogue and kill everyone (which Microsoft has no idea how to actually prevent), that seems really horrible for both me and the superintelligence. Is that really what "humanity" is crying out for?
I'm sharing this post from Mustafa Suleyman, CEO of Microsoft AI, because it's honestly quite shocking to see a post like this coming out of Microsoft. I know some people might accuse him of safety-washing, but to me it comes off as an honest attempt to grapple with the future.