The recent news of Eugene Goostman passing the Turing Test has raised all kinds of debate about what counts as passing the Turing Test and whether or not the chatbots are attempting it properly.

The Turing Test is not a good criterion for machine intelligence.

Turing's prediction that a computer could trick humans via imitation in a conversation is an amazing one. I think it's important to discern what it actually means.

For a machine to imitate human language and reasoning successfully, it would essentially need intelligence well above the average human's. A general intelligence not specifically designed to be a fake human would need to model human behavior and derive from that model communication that misleads about the AI's "true nature".

Computers' supremacy over humans in the board game of chess has been a common motif in AI discussion since Kasparov lost to Deep Blue in the 1990s. Yet no one is trying to claim that the chess calculators learned to play chess the way humans do, or that they rely on a logic similar to ours. I'm not an expert on programming, AI, or chess, but it still seems obvious that it would be improper to use the computers' current superiority over humans as solid proof of high general intelligence, the kind that could imitate humans and play chess the way humans do.

Goal structure for deception vs. a crafted set of tricks and "repeat after me"

For an AI to truly participate in the Turing Test it would need to be self-aware. In addition to self-awareness, a goal structure would be required, one that includes an incentive to deceive humans into thinking that the AI is a human too. More specifically, cognitively pretending not to be yourself requires self-awareness. This would be very sophisticated and subtle. It is hard for many humans to pretend to be someone else (though some excel at it), despite our built-in capacity for empathy and our nearly identical brains. To do the same with an internal "mental" structure that might be nothing like ours would, in my opinion, require either an intelligence a level above the average human or a designed set of tricks.

Are the "Artificial Intelligences" that attempt to pass the Turing Test intelligent at all? To me the chatbots are essentially one-trick ponies that merely "repeat after me". Somebody carefully designs an automated way of picking words that tricks the average Joe by avoiding any conversation or interaction of substance. Computers' vast capacity for storage and recall makes them good at memorizing a lot of tricks.
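A minimal sketch of the kind of trick I mean (purely illustrative, not the code of Eugene Goostman or any real contestant): a handful of keyword rules plus canned deflections can carry a short conversation with no understanding behind it.

```python
import random
import re

# Toy ELIZA-style chatbot: keyword rules plus canned deflections.
# Purely illustrative; not the code of any real Turing Test entrant.
RULES = [
    (re.compile(r"\bI am (.+)", re.IGNORECASE), "Why do you say you are {0}?"),
    (re.compile(r"\bI think (.+)", re.IGNORECASE), "What makes you think {0}?"),
    (re.compile(r"\?\s*$"), "Interesting question. What do you think?"),
]

# Deflections steer the judge away from any topic of substance.
DEFLECTIONS = [
    "Ha, that reminds me of my pet guinea pig.",
    "Let's talk about something else. Where are you from?",
    "I'm only 13, I don't know much about that yet.",
]

def reply(message: str) -> str:
    """Echo the judge's own words back, or deflect. No understanding anywhere."""
    for pattern, template in RULES:
        match = pattern.search(message)
        if match:
            return template.format(*match.groups())
    return random.choice(DEFLECTIONS)

if __name__ == "__main__":
    while True:
        print(reply(input("> ")))
```

Every word such a program can "say" was put there by its designer; the program itself understands nothing.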

What is actually being done in the Turing Test is not a measurement of intelligence. It is an attempt to find an automated means of tricking a human into thinking they're talking to someone else, which does not require an intelligent agent. This seems similar to having a really convincing answering machine on your telephone.

5 comments

Turing himself was quite willing to accept the imitation game as a test. From "Computing Machinery and Intelligence": "The game (with the player B omitted) is frequently used in practice under the name of viva voce to discover whether some one really understands something or has 'learnt it parrot fashion.'"

For an AI to truly participate in the Turing Test it would need to be self-aware.

Define "truly" in a non-circular way.

[-] [anonymous] 10y

Just as having a chess computer enter a tournament to trick the other players into thinking it is a human participant would not be a good way of testing the chess program's ability to play chess or its intelligence, the Turing Test is not a good way of testing a machine's intelligence.

What I intended to say with the word "truly" was that these tests are not really about testing the AI; they are about tricking humans. For them to be about testing the AI, something else is required. In my opinion, for a computer to really participate, the AI would have to learn how to trick humans, keep track of how it is doing, and have a reward system that directs the progression of its behavior. If the reward system were implemented externally, with people modifying the AI's code and adding inputs that to them are good indicators of tricked humans, then even a not-very-intelligent AI could adapt into a behavior pattern that ends up tricking humans, but it would still do so without really understanding any of it. Even more likely, humans will design a style of conversation that tends to trick other humans, and then implement a program that follows this path via some sort of neural network or brute force over prior inputs. To me that is the equivalent of creating a trick that has nothing to do with AI but successfully deceives a human. Kind of like an answering machine.
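To make the point concrete, here is a minimal sketch of that externally implemented reward loop (all names and canned lines hypothetical): the program blindly reinforces whatever preceded a "fooled" report and adapts toward deception without understanding any of it.

```python
import random
from collections import defaultdict

# Hypothetical sketch of an externally implemented reward system.
# The judgment of what counts as a "fooled" judge lives entirely
# outside the program; it just reinforces whatever seemed to work.
LINES = ["I'm from Odessa.", "Ask me anything!", "Ha, good one."]
weights = defaultdict(lambda: 1.0)

def pick_line() -> str:
    # Sample a line in proportion to its past success at fooling judges.
    return random.choices(LINES, weights=[weights[l] for l in LINES])[0]

def record_outcome(line: str, judge_was_fooled: bool) -> None:
    # Humans supply this signal; the program never knows why a line works.
    if judge_was_fooled:
        weights[line] += 1.0

# One round of the loop: say something, then a human operator reports back.
line = pick_line()
record_outcome(line, judge_was_fooled=True)
print(dict(weights))
```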

In my opinion the Turing Test does not measure the intelligence of the program or the machine. It is not a good indicator of it.

[This comment is no longer endorsed by its author]
[-] Jiro 10y

What I intended to say with the word "truly" was that these tests are not really about testing the AI; they are about tricking humans. For them to be about testing the AI, something else is required. In my opinion, for a computer to really participate, the AI would have to learn how to trick humans, keep track of how it is doing...

The idea is that tricking humans is a complicated enough process that anything that is capable of doing so would have to be intelligent anyway.

The fact that nothing has passed the test yet indicates that it's not as easy as you make it sound.

For an AI to truly participate in the Turing Test it would need to be self-aware.

That's an amazing claim. I can't imagine how you could test that. Any suggestions would be welcomed.

I suspect that self-awareness is evolved and has evolutionary advantages. I've never been able to think of a way to test for this though.