Why reflect on a fictional story written in 1954 for insight into artificial intelligence in 2023? The track record of mid-century science fiction writers was merely "fine" even when they were writing nonfiction, and then there are the hazards of generalizing from fictional evidence.
Well, for better or for worse, many people's intuitions and frameworks for reasoning about AI and intelligent robots will come from these stories. If someone is starting from such a perspective, and you're willing to meet them where they are, sometimes there's a surprisingly deep conversation to be had about concrete ways that 2023 does or doesn't resemble the fictional world in question.
In this particular case, a detective is investigating a robot as a suspect in a murder, and the AI PhD dismisses the possibility out of hand, saying that no robot programmed with the First Law could knowingly harm a human. "That's a great idea," think many readers, "we can start by programming all robots with clear constitutional restrictions, and that will stop the worst failures..."
But wait, why can't someone in Asimov's universe just make a robot with different programming? (asks the fictional detective of the fictional PhD) The answer is in the excerpt below.
If we were to take Asimov's world as basically correct, and tinker with the details until it matched our own, a few stark differences jump out.
I would conclude, to someone interested in discussing the fiction, that if we overlay Asimov's universe onto our world, it would not take long at all before there were plenty of non-Three-Laws robots running around... and then many of the stories would play out very differently.
From _The Caves of Steel_, pp. 160-161 in my version.
“Why can’t a robot be built without the First Law? What’s so sacred about it?”
Dr. Gerrigel looked startled, then tittered, “Oh, Mr. Baley.”
“Well, what’s the answer?”
“Surely, Mr. Baley, if you even know a little about robotics, you must know the gigantic task involved, both mathematically and electronically, in building a positronic brain.”
“I have an idea,” said Baley. He remembered well his visit to a robot factory once in the way of business. He had seen their library of book-films, long ones, each of which contained the mathematical analysis of a single type of positronic brain. [...] Oh, it was a job, all right. Baley wouldn’t deny that.
Dr. Gerrigel said, “Well, then, you must understand that a design for a new type of positronic brain, even one where only minor innovations are involved, is not the matter of a night’s work. It usually involves the entire research staff of a moderately sized factory and takes anywhere up to a year of time. Even this large expenditure of work would not be nearly enough if it were not that the basic theory of such circuits has already been standardized and may be used as a foundation for further elaboration. The standard basic theory involves the Three Laws of Robotics: the First Law, which you’ve quoted; the Second Law which states, ‘A robot must obey the orders given by human beings except where such orders would conflict with the First Law,’ and the Third Law, which states, ‘A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.’ Do you understand?”
R. Daneel, who, to all appearances, had been following the conversation with close attention, broke in. “If you will excuse me, Elijah, I would like to see if I follow Dr. Gerrigel. What you imply, sir, is that any attempt to build a robot, the working of whose positronic brain is not oriented about the Three Laws, would require first the setting up of a new basic theory and that this, in turn, would take many years.”
The roboticist looked very gratified. “That is exactly what I mean, Mr.…”
Baley waited a moment, then carefully introduced R. Daneel: “This is Daneel Olivaw, Dr. Gerrigel.”
“Good day, Mr. Olivaw.” Dr. Gerrigel extended his hand and shook Daneel’s. He went on, “It is my estimation that it would take fifty years to develop the basic theory of a non-Asenion positronic brain—that is, one in which the basic assumptions of the Three Laws are disallowed—and bring it to the point where robots similar to modern models could be constructed.”
“And this has never been done?” asked Baley. “I mean, Doctor, that we’ve been building robots for several thousand years. In all that time, hasn’t anybody or any group had fifty years to spare?”
“Certainly,” said the roboticist, “but it is not the sort of work that anyone would care to do.”
“I find that hard to believe. Human curiosity will undertake anything.”