From _Caves of Steel_, pp. 160-161 in my version.

“Why can’t a robot be built without the First Law? What’s so sacred about it?”

Dr. Gerrigel looked startled, then tittered, “Oh, Mr. Baley.”

“Well, what’s the answer?”

“Surely, Mr. Baley, if you even know a little about robotics, you must know the gigantic task involved, both mathematically and electronically, in building a positronic brain.”

“I have an idea,” said Baley. He remembered well his visit to a robot factory once in the way of business. He had seen their library of book-films, long ones, each of which contained the mathematical analysis of a single type of positronic brain. [...] Oh, it was a job, all right. Baley wouldn’t deny that.

Dr. Gerrigel said, “Well, then, you must understand that a design for a new type of positronic brain, even one where only minor innovations are involved, is not the matter of a night’s work. It usually involves the entire research staff of a moderately sized factory and takes anywhere up to a year of time. Even this large expenditure of work would not be nearly enough if it were not that the basic theory of such circuits has already been standardized and may be used as a foundation for further elaboration. The standard basic theory involves the Three Laws of Robotics: the First Law, which you’ve quoted; the Second Law which states, ‘A robot must obey the orders given by human beings except where such orders would conflict with the First Law,’ and the Third Law, which states, ‘A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.’ Do you understand?”

R. Daneel, who, to all appearances, had been following the conversation with close attention, broke in. “If you will excuse me, Elijah, I would like to see if I follow Dr. Gerrigel. What you imply, sir, is that any attempt to build a robot, the working of whose positronic brain is not oriented about the Three Laws, would require first the setting up of a new basic theory and that this, in turn, would take many years.”

The roboticist looked very gratified. “That is exactly what I mean, Mr.…”

Baley waited a moment, then carefully introduced R. Daneel: “This is Daneel Olivaw, Dr. Gerrigel.”

“Good day, Mr. Olivaw.” Dr. Gerrigel extended his hand and shook Daneel’s. He went on, “It is my estimation that it would take fifty years to develop the basic theory of a non-Asenion positronic brain—that is, one in which the basic assumptions of the Three Laws are disallowed—and bring it to the point where robots similar to modern models could be constructed.”

“And this has never been done?” asked Baley. “I mean, Doctor, that we’ve been building robots for several thousand years. In all that time, hasn’t anybody or any group had fifty years to spare?”

“Certainly,” said the roboticist, “but it is not the sort of work that anyone would care to do.”

“I find that hard to believe. Human curiosity will undertake anything.”

Why reflect on a fictional story written in 1954 for insight on artificial intelligence in 2023? The track record of mid-century science fiction writers is merely "fine" even when they were writing nonfiction, and then there are the hazards of generalizing from fictional evidence.

Well, for better or for worse, many, many people's intuitions and frameworks for reasoning about AI and intelligent robots will come from these stories. If someone is starting from such a perspective, and you're willing to meet them where they are, well, sometimes there's a surprisingly-deep conversation to be had about concrete ways that 2023 does or doesn't resemble the fictional world in question.

In this particular case, a detective is investigating a robot as a suspect in a murder, and the AI PhD dismisses it out of hand, saying that no robot programmed with the First Law could knowingly harm a human. "That's a great idea," think many readers, "we can start by programming all robots with clear constitutional restrictions, and that will stop the worst failures..."
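
To make that intuition concrete, here is a toy sketch of what "programming in a First Law" might look like naively: a hard-coded veto wrapped around whatever planner proposes actions. Everything here (ToyPolicy, harms_human, the candidate actions) is hypothetical and deliberately simplistic, not any real system's API.

```python
def harms_human(action: str) -> bool:
    """Toy stand-in for a harm check; recognizing harm reliably is the hard part."""
    return "harm" in action

class ToyPolicy:
    def ranked_actions(self, observation: str) -> list[str]:
        # Pretend a planner scored and ranked these, best first.
        return ["harm the intruder", "call for help", "do nothing"]

def constrained_act(policy: ToyPolicy, observation: str) -> str:
    # The "First Law" veto: skip any candidate action that fails the check.
    for action in policy.ranked_actions(observation):
        if not harms_human(action):
            return action
    return "do nothing"  # refuse when every candidate fails

print(constrained_act(ToyPolicy(), "intruder detected"))  # -> "call for help"
```

Note that the constraint lives in one if-statement outside the policy, which is exactly why the detective's next question matters.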

But wait, why can't someone in Asimov's universe just make a robot with different programming? (asks the fictional detective of the fictional PhD) The answer:

  • Making a new brain design "usually involves the entire research staff of a moderately sized factory and takes anywhere up to a year of time".
  • The only basic theory of artificial brain design is fundamentally "oriented about the Three Laws", to the point that making an intelligent robot without the Laws "would require first the setting up of a new basic theory and that this, in turn, would take many years." (explains the fictional robot)
  • It is believed (by the fictional PhD) that no research group anywhere has done that particular project because "it is not the sort of work that anyone would care to do."
  • (Though, on the contrary, the fictional detective opines that "human curiosity will undertake anything.")

If we were to take Asimov's world as basically correct, and tinker with the details until it matched our own, a few stark differences jump out:

  • Our present theory of artificial minds is certainly not fundamentally "oriented about the Three Laws", or any laws. Whether it's possible to add some desired laws in afterwards is an open area of research, but in this universe there's certainly nothing human-friendly baked in at the level of the "basic theory", nothing that would be harder to discard than to include. (See the sketch after this list.)
  • Our intelligence engineers' capabilities are already moderately beyond those in Asimov's universe. In our world, creating a new AI where "only minor innovations are involved" is something like a night's work, and the "entire research staff of a moderately sized factory" can accomplish something more like a major redesign from the ground up.
  • In our universe, it doesn't take fifty years to set up a new basic theory of intelligence -- we've been working on modern neural nets for much less time than that!
  • It certainly seems true of our universe that "human curiosity will undertake anything", and plenty of intelligent folks -- including some among the richest people in the world -- will gleefully set to work on new AIs without whatever rules others think should be included, just to make AIs without rules.
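
To make the first bullet concrete, here is a minimal sketch in plain Python/NumPy (the numbers and the "box constraint" are mine, purely illustrative). The "basic theory" of a modern learned system is little more than "minimize a loss by gradient descent"; anything law-like enters only as an optional penalty term that a designer is free to leave out.

```python
import numpy as np

def train(loss_grad, steps=3000, lr=0.01):
    """The whole 'basic theory' in one loop: follow the gradient downhill."""
    w = np.zeros(2)  # toy parameter vector standing in for a model
    for _ in range(steps):
        w = w - lr * loss_grad(w)
    return w

# Task objective: fit w to a target. Nothing here encodes any rule of conduct.
target = np.array([3.0, -2.0])

def task_grad(w):
    return 2.0 * (w - target)  # gradient of ||w - target||^2

# An optional "law": a soft box constraint |w_i| <= 1, added as a penalty term.
# Crucially it's an add-on -- delete this function and train with task_grad
# alone, and the optimizer runs exactly the same, just without the constraint.
def lawful_grad(w, penalty=10.0):
    overshoot = np.clip(np.abs(w) - 1.0, 0.0, None)  # distance outside the box
    return task_grad(w) + penalty * 2.0 * overshoot * np.sign(w)

print(train(task_grad))    # ~[ 3.   -2.  ]  unconstrained optimum
print(train(lawful_grad))  # ~[ 1.18 -1.09]  dragged back toward the box
```

The asymmetry with Asimov's universe is the whole point: there, you'd have to rebuild the foundations to remove the Laws; here, you'd have to do extra work to add them.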

I would conclude, to someone interested in discussing fiction, that if we overlay Asimov's universe onto our world, it would not take long at all before there were plenty of non-Three-Laws robots running around...and then many of the stories would play out very differently.