Certainly the Turing test can be viewed as an operationalization of "does this machine think?". No argument there. I also agree with you concerning what Turing probably had in mind.

The problem is that if we have in mind (perhaps not even explicitly) some different definition of thinking or, gods forbid, some other property entirely, like "consciousness", then the Turing test immediately stops being of much use.

Here is a related thing. John Searle, in his essay "Minds, Brains, and Programs" (where he presents the famous "Chinese room" thought experiment), claims that even if you a) place the execution of the "Chinese room" program into a robot body, which is then able to converse with you in Chinese, or b) simulate the entire brain of a native Chinese speaker neuron-by-neuron, and optionally put that into a robot body, you will still not have a system that possesses true understanding of Chinese.

Now, taken to its logical extreme, this is surely an absurd position to take in practice. We can imagine a scenario where Searle meets a man on the street, strikes up a conversation (perhaps in Chinese), and spends some time discoursing with the articulate stranger on various topics from analytic philosophy to dietary preferences, getting to know the man and being impressed with his depth of knowledge and originality of thought, until at some point, the stranger reaches up and presses a hidden button behind his ear, causing the top of his skull to pop open and reveal that he is in fact a robot with an electronic brain! Dun dun dun! He then hands Searle a booklet detailing his design specs and also containing the entirety of his brain's source code (in very fine print), at which point Searle declares that the stranger's half of the entire conversation up to that point has been nothing but the meaningless blatherings of a mindless machine, devoid entirely of any true understanding.

It seems fairly obvious to me that such entities would, like humans, be beneficiaries of what Turing called "the polite convention" that people do, in fact, think (which is what lets us not be troubled by the problem of other minds in day-to-day life). But if someone like John Searle were to insist that we nonetheless have no direct evidence for the proposition that the robots in question do "think", I don't see that we would have a good answer for him. (Searle's insistence that we shouldn't question whether humans can think is, of course, hypocritical, but that is not relevant here.) Social conventions to treat something as being true do not constitute a demonstration that said thing is actually true.

I agree with you concerning Searle's errors (see my takes on Searle at http://lesswrong.com/lw/ghj/searles_cobol_room/ and http://lesswrong.com/lw/gyx/ai_prediction_case_study_3_searles_chinese_room/).

I think the differences between us are rather small, in fact. I do have a different definition of thinking, which is not fully explicit. It would go along the lines of "a thinking machine should demonstrate human-like abilities in most situations and not be extremely stupid in some areas". The intuition is that if there is a general intelligence, rather…

TheOtherDave, 7y (2 points): It is perhaps worth noting that Searle explicitly posits in that essay that the system is functioning as a Giant Lookup Table. If faced with an actual GLUT Chinese Room... well, honestly, I'm more inclined to believe that I'm being spoofed than trust the evidence of my senses. But leaving that aside, if faced with something I somehow am convinced is a GLUT Chinese Room, I have to rethink my whole notion of how complicated conversation actually is, and yeah, I would probably conclude that the entire conversation up to that point has been devoid entirely of any true understanding. (I would also have to rethink my grounds for believing that humans have true understanding.) I don't expect that to happen, though.
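As an aside, the GLUT idea is concrete enough to sketch: a lookup table keyed on the entire conversation history so far, with a canned reply for each possible history. The table entries below are invented for illustration; a real GLUT would need an entry for every possible history, which is combinatorially vast and is exactly why the thought experiment strains credulity.

```python
# Toy sketch of a Giant Lookup Table (GLUT) "Chinese Room".
# The key is the whole conversation history so far; the value is the
# canned reply. All entries here are invented for illustration.

GLUT = {
    (): "Hello.",
    ("Hello.", "How are you?"): "Fine, thanks. And you?",
    # ... astronomically many more entries would be needed ...
}

def glut_reply(history):
    """Look up the canned response for this exact conversation history."""
    return GLUT.get(tuple(history), "I don't understand.")

history = []
history.append(glut_reply(history))   # machine opens: "Hello."
history.append("How are you?")        # human replies
print(glut_reply(history))            # -> "Fine, thanks. And you?"
```

The point of the sketch is that nothing in the mechanism resembles reasoning: every apparent exchange is a single dictionary lookup.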
[anonymous], 7y (0 points): This seems like a slightly uncharitable reading of Searle's position.

The flawed Turing test: language, understanding, and partial p-zombies

by Stuart_Armstrong · 2 min read · 17th May 2013 · 184 comments


There is a problem with the Turing test, practically and philosophically, and I would be willing to bet that the first entity to pass the test will not be conscious, or intelligent, or have whatever spark or quality the test is supposed to measure. And I hold this position while fully embracing materialism, and rejecting p-zombies or epiphenomenalism.

The problem is Campbell's law (or Goodhart's law):

The more any quantitative social indicator is used for social decision-making, the more subject it will be to corruption pressures and the more apt it will be to distort and corrupt the social processes it is intended to monitor.

This applies to more than social indicators. To illustrate, imagine that you were a school inspector, tasked with assessing the all-round education of a group of 14-year-old students. You engage them on the French revolution and they respond with pertinent contrasts between the Montagnards and Girondins. Your quizzes about the properties of prime numbers are answered with impressive speed, and, when asked, they can all play quite passable pieces from "Die Zauberflöte".

You feel tempted to give them the seal of approval... but then you learn that the principal had been expecting your questions (you don't vary them much), and that, in fact, the whole school has spent the last three years doing nothing but studying 18th-century France, number theory and Mozart operas, day after day after day. Now you're less impressed. You can still conclude that the students have some technical ability, but you can't assess their all-round level of education.

The Turing test functions in the same way. Imagine no-one had heard of the test, and someone created a putative AI, designing it to, say, track rats efficiently across the city. You sit this anti-rat-AI down and give it a Turing test - and, to your astonishment, it passes. You could now conclude that it was (very likely) a genuinely conscious or intelligent entity.

But this is not the case: nearly everyone's heard of the Turing test. So the first machines to pass will be dedicated systems, specifically designed to get through the test. Their whole setup will be constructed to maximise "passing the test", not "being intelligent" or whatever we want the test to measure (the fact that we have difficulty stating exactly what the test should measure shows the difficulty here).

Of course, this is a matter of degree, not of kind: a machine that passed the Turing test would still be rather nifty, and as the test got longer and more complicated, and the interactions between subject and judge got more intricate, our confidence that we were facing a truly intelligent machine would increase.

But degree can go a long way. Watson won at Jeopardy! without exhibiting any of the skills of a truly intelligent being, apart from one: answering Jeopardy! questions. With the rise of big data and statistical algorithms, I would certainly rate it as plausible that we could create beings that are, from a (textual) linguistic perspective, nearly indistinguishable from conscious ones. These "super-chatterbots" could be identified as such only with long and tedious effort. And yet they would demonstrate none of the other attributes of intelligence: chattering is all they're any good at (if you ask them to do any planning, for instance, they'll come up with designs that sound good but fail: they parrot back other people's plans with minimal modifications). These would be the closest plausible analogues to p-zombies.

The best way to avoid this is to create more varied analogues of the Turing test, and to keep them secret. Just as you keep the training set and the test set distinct in machine learning, you want to confront the putative AIs with quasi-Turing tests that their designers will not have encountered or planned for. Mix up the test conditions, add extra requirements, change what is being measured, do something completely different, be unfair: do things that a genuine intelligence would deal with, but an overtrained narrow statistical machine couldn't.
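The train/test analogy can be made concrete with a toy sketch. Here a hypothetical "designed to pass" system simply memorises a leaked public test; it scores perfectly on the questions its designers have seen and fails completely on a held-out secret test. The task and all names below are invented for illustration.

```python
# Sketch: why a *secret* test matters. A system built to pass a known
# test can memorise it outright, yet fail on fresh, held-out questions:
# the machine-learning analogue of teaching to the test.

# Hypothetical toy task: the correct answer is double the question.
def true_answer(q):
    return 2 * q

public_test = list(range(20))        # questions the designers have seen
secret_test = list(range(50, 70))    # questions they have not

# The "designed to pass" system: a lookup table over the leaked test.
memoriser = {q: true_answer(q) for q in public_test}

def memoriser_answer(q):
    return memoriser.get(q, -1)      # shrugs at anything unseen

def accuracy(test):
    return sum(memoriser_answer(q) == true_answer(q) for q in test) / len(test)

print(accuracy(public_test))   # 1.0 -- looks perfect on the known test
print(accuracy(secret_test))   # 0.0 -- the held-out test exposes it
```

The gap between the two scores is exactly the gap the post worries about: performance on a test the designers optimised for tells you little about the general capability the test was meant to measure.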