You can have some fun with people whose anticipations get out of sync with what they believe they believe.
I was once at a dinner party, trying to explain to a man what I did for a living, when he said: "I don't believe Artificial Intelligence is possible because only God can make a soul."
At this point I must have been divinely inspired, because I instantly responded: "You mean if I can make an Artificial Intelligence, it proves your religion is false?"
He said, "What?"
I said, "Well, if your religion predicts that I can't possibly make an Artificial Intelligence, then, if I make an Artificial Intelligence, it means your religion is false. Either your religion allows that it might be possible for me to build an AI; or, if I build an AI, that disproves your religion."
There was a pause, as the one realized he had just made his hypothesis vulnerable to falsification, and then he said, "Well, I didn't mean that you couldn't make an intelligence, just that it couldn't be emotional in the same way we are."
I said, "So if I make an Artificial Intelligence that, without being deliberately preprogrammed with any sort of script, starts talking about an emotional life that sounds like ours, that means your religion is wrong."
He said, "Well, um, I guess we may have to agree to disagree on this."
I said: "No, we can't, actually. There's a theorem of rationality called Aumann's Agreement Theorem which shows that no two rationalists can agree to disagree. If two people disagree with each other, at least one of them must be doing something wrong."
We went back and forth on this briefly. Finally, he said, "Well, I guess I was really trying to say that I don't think you can make something eternal."
I said, "Well, I don't think so either! I'm glad we were able to reach agreement on this, as Aumann's Agreement Theorem requires." I stretched out my hand, and he shook it, and then he wandered away.
A woman who had stood nearby, listening to the conversation, said to me gravely, "That was beautiful."
"Thank you very much," I said.
“It was a bludgeoning by someone with training and practice in logical reasoning on someone without.”
I’m inclined to agree. I also found it less than convincing.
Let’s put aside the question of whether intelligence indicates the presence of a soul (although I’ve known more than a few highly intelligent people who are also morally bankrupt).
If it’s true that you can disprove his religion by building an all-encompassing algorithm that passes as a pseudo-soul, then the inverse must also be true. If you can’t quantify all the constituent parts of a soul, then you would have to accept that his religion offers a better explanation of the nature of being than AI. So you would have to start believing his religion until a better explanation presents itself. That seems fair, no?
If you can’t make that leap, now would be a good time to examine your motives for any satisfaction you felt at his mauling. I’d argue the enjoyment is less about debating ability and more about putting the "uneducated" in their place.
So let’s consider the emotion compassion. You can design an algorithm so that it knows what compassionate behaviour looks like. You could also design it so that it learns when this behaviour is appropriate. But at no point is your algorithm actually "feeling" compassion, even if it’s demonstrating it. It’s following a set of predefined rules (with perhaps some randomness and adaptation built in) because it believes it’s advantageous or logical to do so. If this were a human being, we’d apply the label "sociopath". That, to me, is a critical distinction between AI and soul.
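To make the distinction concrete, here is a minimal sketch in Python (rules and phrases invented purely for illustration) of the kind of algorithm I mean: it selects behaviour that looks compassionate while nothing in it feels anything.

# A rule-following agent that *selects* compassionate behaviour without
# *feeling* anything. All rules and phrases are invented for illustration.

RESPONSES = {
    "grief": "I'm so sorry for your loss. I'm here if you want to talk.",
    "failure": "That sounds really hard. Anyone would struggle with that.",
}

KEYWORDS = {
    "grief": ["died", "passed away", "funeral"],
    "failure": ["failed", "fired", "rejected"],
}

def detect_situation(utterance):
    # Crude keyword matching, standing in for a learned classifier.
    lowered = utterance.lower()
    for situation, words in KEYWORDS.items():
        if any(word in lowered for word in words):
            return situation
    return None

def respond(utterance):
    # Emits behaviour that looks compassionate; no internal state
    # corresponding to "compassion" exists anywhere in this program.
    situation = detect_situation(utterance)
    return RESPONSES.get(situation, "I see.")

print(respond("My dog passed away last week."))
# -> I'm so sorry for your loss. I'm here if you want to talk.

Whether output produced this way counts as "demonstrating" compassion is exactly the distinction I’m drawing.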
Debates like these take all the fun right out of AI. It’s disappointing that we need to debate the merits of tolerance on forums like this one.