Actually, people without mental imagery do well on tasks that involve mental imagery: http://discovermagazine.com/2010/mar/23-the-brain-look-deep-into-minds-eye

...yet he could do lots of things that would seem impossible without one. Without any effort he could give the scientists detailed descriptions of landmarks around Edinburgh, for example. He could remember visual details, but he couldn’t “see” them. Della Sala and Zeman asked MX to say whether each letter of the alphabet had a low-hanging tail (like g and j). He got every one right. They asked him about specific details of the faces of famous people (“Does Tony Blair have light-colored eyes?”). He did just as well as the architects.

...showed MX pairs of pictures, each one consisting of an object made up of 10 cubes. MX had to say whether the pairs of objects were different things or actually the same thing shown from two different perspectives. Normal people solve this puzzle in a strikingly consistent way, with their response time depending on how much the angle of perspective differs between the two objects: The bigger the difference, the longer it takes people to decide whether the objects are the same. Some psychologists have interpreted this pattern of results to mean that we really do need the mind’s eye in order to solve some kinds of problems. When deciding if two 3-D objects are the same, we have to mentally rotate them. The larger the angle between the two, the longer it takes to rotate them and come up with the answer. If we used some other kind of reasoning, it would be surprising to see such a reliable link between the difference in perspective and the time it takes us to solve the puzzle.
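The pattern described above is usually summarized as a linear relation between angular disparity and response time. A minimal sketch of that model, with made-up round numbers for the intercept and rotation rate (they are illustrative, not values from the study):

```python
# Toy model of the mental-rotation result: predicted response time
# grows linearly with the angle between the two depicted objects.
# Both constants below are assumptions for illustration only.

BASE_TIME_S = 1.0          # assumed time for encoding and responding
SECONDS_PER_DEGREE = 0.02  # hypothetical mental rotation rate

def predicted_response_time(angle_deg: float) -> float:
    """Predicted decision time for a pair differing by angle_deg degrees."""
    return BASE_TIME_S + SECONDS_PER_DEGREE * angle_deg

for angle in (0, 60, 120, 180):
    print(angle, predicted_response_time(angle))
```

MX's flat response times are striking precisely because they break this linear prediction: his answers did not depend on the angle at all.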

MX’s results flew in the face of that explanation. When he solved the puzzles, he always took about the same amount of time to answer—and he got every one right.

The flawed Turing test: language, understanding, and partial p-zombies

by Stuart_Armstrong · 2 min read · 17th May 2013 · 184 comments



There is a problem with the Turing test, practically and philosophically, and I would be willing to bet that the first entity to pass the test will not be conscious, or intelligent, or have whatever spark or quality the test is supposed to measure. And I hold this position while fully embracing materialism, and rejecting p-zombies or epiphenomenalism.

The problem is Campbell's law (or Goodhart's law):

The more any quantitative social indicator is used for social decision-making, the more subject it will be to corruption pressures and the more apt it will be to distort and corrupt the social processes it is intended to monitor.

This applies to more than social indicators. To illustrate, imagine that you were a school inspector, tasked with assessing the all-round education of a group of 14-year-old students. You engage them on the French Revolution and they respond with pertinent contrasts between the Montagnards and Girondins. Your quizzes about the properties of prime numbers are answered with impressive speed, and, when asked, they can all play quite passable pieces from "Die Zauberflöte".

You feel tempted to give them the seal of approval... but then you learn that the principal had been expecting your questions (you don't vary them much), and that, in fact, the whole school has spent the last three years doing nothing but studying 18th century France, number theory and Mozart operas - day after day after day. Now you're less impressed. You can still conclude that the students have some technical ability, but you can't assess their all-round level of education.

The Turing test functions in the same way. Imagine no-one had heard of the test, and someone created a putative AI, designing it to, say, track rats efficiently across the city. You sit this anti-rat-AI down and give it a Turing test - and, to your astonishment, it passes. You could now conclude that it was (very likely) a genuinely conscious or intelligent entity.

But this is not the case: nearly everyone's heard of the Turing test. So the first machines to pass will be dedicated systems, specifically designed to get through the test. Their whole setup will be constructed to maximise "passing the test", not "being intelligent" or whatever else we want the test to measure (the fact that we struggle to state exactly what the test should be measuring is itself a sign of the problem).

Of course, this is a matter of degree, not of kind: a machine that passed the Turing test would still be rather nifty, and as the test got longer and more complicated, and the interactions between subject and judge got more intricate, our confidence that we were facing a truly intelligent machine would increase.

But degree can go a long way. Watson won at Jeopardy! without exhibiting any of the skills of a truly intelligent being - apart from one: answering Jeopardy! questions. With the rise of big data and statistical algorithms, I would certainly rate it as plausible that we could create beings that are nearly perfectly convincing from a (textual) linguistic perspective. These "super-chatterbots" could be identified as such only with long and tedious effort. And yet they would demonstrate none of the other attributes of intelligence: chattering is all they're any good at (if you ask them to do any planning, for instance, they'll come up with designs that sound good but fail: they parrot back other people's plans with minimal modifications). These would be the closest plausible analogues to p-zombies.

The best way to avoid this is to create more varied analogues of the Turing test - and to keep them secret. Just as you keep the training set and the test set distinct in machine learning, you want to confront the putative AIs with quasi-Turing tests that their designers will not have encountered or planned for. Mix up the test conditions, add extra requirements, change what is being measured, do something completely different, be unfair: do things that a genuine intelligence could deal with, but an overtrained narrow statistical machine couldn't.
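The train/test analogy can be made concrete with a deliberately silly toy: a "bot" that has memorized a public question bank aces exactly those questions and fails any held-out ones. All names and questions below are invented for illustration:

```python
# Toy demonstration of teaching-to-the-test: a system tuned on a
# publicly known question bank scores perfectly on it, yet fails
# on a secret, held-out bank - which is why unseen quasi-Turing
# tests are more informative than a well-known one.

public_bank = {
    "Does Tony Blair have light-coloured eyes?": "yes",
    "Does the letter g have a low-hanging tail?": "yes",
}

secret_bank = {
    "Does the letter j have a low-hanging tail?": "yes",
}

def overtrained_bot(question: str) -> str:
    # Pure memorization: perfect on the public bank, useless elsewhere.
    return public_bank.get(question, "I don't know")

def score(bot, bank: dict) -> float:
    """Fraction of questions in bank the bot answers correctly."""
    correct = sum(bot(q) == a for q, a in bank.items())
    return correct / len(bank)

print(score(overtrained_bot, public_bank))  # perfect on seen questions
print(score(overtrained_bot, secret_bank))  # fails held-out questions
```

The gap between the two scores is the whole point: only the held-out bank measures anything beyond preparation for the test itself.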
