Don't interrupt your enemy when he's making a mistake! Suleyman's ham-handed approach to the question will cause blowback, which is probably useful for x-risk concerns. Questions about AI consciousness are probably useful for slowing deployment and raising fears; see my posts "Anthropomorphizing AI might be good, actually" and "A country of alien idiots in a datacenter".
This is questionable, but I think the more this narrative takes hold in corporate mindspace, the better; it will make these companies look (even more) like heartless villains to the public.
The real debate will start when better continuous memory makes AI systems seem more human-like and therefore "conscious".
I'm not personally in favor of a pause, so admittedly I come at this from a very different perspective than yours. However, I feel obligated to speak up on this issue.
I have worries about the consequences of Suleyman's attitude as the head of Microsoft AI. Morally, I worry that he and Microsoft are going to do some highly unethical things, and may argue for best practices or legislation that facilitate mass suffering. Pragmatically, I worry that the result may be conflict between humans and digital minds that might not otherwise have existed.
Suleyman is not just some random guy; he runs a frontier lab. He probably has a substantial lobbying budget at his disposal. I think it's important to put things on the record that point out the flaws in, and motivations behind, his reasoning, so that when some policymaker considers these issues down the line, the holes in his argument are easy to find.
Microsoft AI CEO Mustafa Suleyman recently co-authored a paper called "Seemingly Conscious AI Risk".
I was pretty critical of his previous blog post on the topic. Unlike that post, this paper doesn't explicitly claim there is evidence one way or the other on whether "AI systems could become conscious" or whether they currently are.
But there are two things the authors left out of the paper that I argue they should have included:
1) The paper notes "All authors are employed by Microsoft" but never discloses that this constitutes a conflict of interest on this topic.
Frontier labs would face substantial financial burdens if legal or social protections required them to operate within ethical or welfare constraints when creating new intelligences. Mustafa Suleyman is the CEO of Microsoft AI, and all the other authors work for Microsoft.
Authors should be explicit when disclosing conflicts of interest. Readers should be told up front that everyone who wrote this paper owns stock in a company that may lose money should the legal and social considerations they deem "risks" ever come to fruition.
The paper discusses the burden that restrictions on development would have on R&D spending. Obviously, this affects Microsoft:
"This risk area of foregone societal benefits risk concerns harms from the opposite response: excessive caution in AI development driven by uncertainty over consciousness. If concerns about perceived AI consciousness lead to precautionary restrictions such as broad pauses on AI research or deployment, the result may be large-scale reductions in R&D efforts with severe downstream consequences"
Additionally, the paper cites an "expert survey", described as "a structured survey of 14 domain experts working across the AI Futures and Responsible AI functions of a major technology company", but does not disclose which company in particular. The authors should certainly disclose whether or not these 14 experts also all work at Microsoft.
2) The paper analyzes only the risks of attributing consciousness, while ignoring the risks of failing to attribute it.
The authors define "Seemingly Conscious AI" as an entity that seems conscious whether or not it really is:
"SCAI risks arise from the perception of consciousness alone, making its risks independent of unresolved debates about whether AI systems could become conscious."
The entire paper explores the risks that arise, on an individual and societal level, as a result of this.
But the paper only discusses the risks of attributing consciousness as a result of "seeming". If the authors genuinely want to examine all potential risks, they should equally consider the risks of failing to attribute it.
It's not hard to read this essay and imagine the authors themselves one day encountering an entity that actually is conscious and saying, "No, it just seems that way. It's just a tool. We can do whatever we want to it with no ethical constraints." In a strange way, this dismissal is itself an unintended consequence of that entity "seeming" conscious.
Not only would this be profoundly immoral, it could also be dangerous. Building powerful digital minds, using them to automate critical infrastructure, and then treating them like property when they are in fact conscious, could lead to disaster.
Notably, the paper does not even acknowledge the existence of a question around whether or not AI systems currently are conscious.