It's always troubled me that the standard Turing test provides only a single-bit output, and that the human being questioned could throw the game to make their AI counterpart look good. Also, research and development gets entirely too much funding based on what sounds cool rather than what actually works. The following is an attempt to address both issues.

 

Take at least half a dozen chatbot AIs, and a similar number of humans with varying levels of communication skill (professional salespeople, autistic children, etc.). Each competitor gets the same list of questions, submitted by a pool of interrogators. A week later, to allow time for research and number-crunching, collect the answers. Whoever submitted question 1 receives all the answers to question 1 in randomized order and ranks them from most human/helpful to least, with a big prize for the top answer and successively smaller prizes for the runners-up. Alternatively, an interrogator could specify a customized allocation of their question's rewards, e.g. "this was the best, these three are tied for second, the rest are useless."
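
Here is a minimal sketch, in Python, of what one round of scoring might look like. The geometric prize split, the pot size, and all identifiers are illustrative assumptions rather than part of the proposal; the "tied for second" option would simply replace the decayed weights with whatever allocation the interrogator specifies.

    import random

    # Hypothetical structure: answers collected as {question_id: {competitor_id: answer_text}}.
    def shuffled_answers(answers_for_question):
        """Return (competitor_id, answer) pairs in randomized order, so the
        interrogator cannot tell which entries came from humans or chatbots."""
        pairs = list(answers_for_question.items())
        random.shuffle(pairs)
        return pairs

    def allocate_prizes(ranking, pot, decay=0.5):
        """Split a prize pot down the interrogator's ranking: a big prize for
        the top answer, successively smaller prizes for the runners-up."""
        weights = [decay ** i for i in range(len(ranking))]
        total = sum(weights)
        return {competitor: pot * w / total for competitor, w in zip(ranking, weights)}

    # Example round: one question, four competitors, a $1000 pot for that question.
    answers = {"q1": {"bot_a": "...", "bot_b": "...", "human_1": "...", "human_2": "..."}}
    presented = shuffled_answers(answers["q1"])          # shown to whoever submitted q1
    ranking = ["human_1", "bot_b", "human_2", "bot_a"]   # their ranking, best first
    print(allocate_prizes(ranking, pot=1000))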

 

The humans will do their best in that special way that only well-paid people can, and the chatbots will receive additional funding in direct proportion to their success at a highly competitive task.

 

Six hundred thousand seconds (roughly the week allowed above) might seem like an awfully long time to let a supercomputer chew over its responses, but the goal is deep reasoning, not just snappy comebacks. Programs can always be debugged and streamlined, or simply run on more powerful future hardware, once the basic usefulness of the results has been demonstrated.

8 comments

Still pointless.

Could you elaborate? Pointless with respect to which goals?

The Turing test was conceived as a thought experiment to define the meaning of the question 'can computers think'. This is obvious if one reads Turing's original paper, and as a thought experiment to establish Turing's point it works great. But I don't really see how actually carrying out the Turing test as a real experiment is going to tell us anything about the state of an AI that wasn't obvious anyway.

The humans will do their best in that special way that only well-paid people can.

There are people who would disagree about how to get humans to do their best. But for those who agree that money is the honey, the action is over at the X-prize foundation.

Q: So what does the future of the X Prize look like?

Diamandis: We are looking at a wide range of X Prize ideas that we're excited about. I'll just name a few.

One in the life sciences area that I'm very interested in is an artificial intelligence physician, called an AI Physician X Prize, and this would be for the design of an AI physician that can speak and listen in natural language and can diagnose a patient as well as or better than a panel of 10 board-certified doctors. It's a very measurable, objective test. And it's an X Prize that Ray Kurzweil and I have worked on defining together, and one that we're looking for a benefactor or corporate sponsor to underwrite.

My reaction to prizes is that they are often a bad way of getting paid for your efforts. For prizes to work as well as their advocates claim, there must be a whole bunch of people with the exact opposite attitude - people who are probably mostly getting screwed by their biases in this area.

I think the recent trend towards prizes like the X Prizes, the Netflix Prize, and Executable Papers is pretty interesting. I agree that they're a bad way to get paid, and for the Netflix Prize at least, it probably represented a great low-cost method of getting R&D done.

However, there's obviously a lot more going on than that. In some spheres, academia being one, money isn't really the object. Fame and prestige are more important. Winning or sharing a prize like the Netflix Prize would probably be worth more in academic career development terms than the cash itself, at least if you're starting from a comfortable financial position.

So I like to think of the trend towards prizes as a small step on the way towards a post-scarcity world. In the future, if things work out well, no one will really need money for food and banal things like that. People who produce creative goods will be competing purely for recognition. Iain Banks writes SF about this, as do many others of course.

A single-shot prize does not create ongoing 'runaway' incentives to improve. Once it is clear who will get the prize, everyone else goes home.

A regularly scheduled civil-discourse tournament has no finish line, so there is always a reason to continue becoming stronger.

How is this different from the competition between startups, some of which hire people to perform a given task and some of which hire programmers to automate the same task?