For example, one could use a chat/text-based service such as Discord or Slack to field questions from Users, who would then attempt to discern whether they're interacting with a Human or an AI.

  • What strategies would you employ?
  • What kind of questions would you anticipate?
  • Do you think such an exercise could prove beneficial for those involved in the AI and AI alignment fields?
  • How many random participants do you believe you could convince that you are not an AI?
  • What about AI researchers? How many of them do you think you could persuade?

See https://www.humanornot.ai/ , and its unofficial successor, https://www.turingtestchat.com/ . I've determined that I'm largely unable to tell whether I'm talking to a human or a bot within two minutes. :/

Tried it a bit, and this doesn't seem like a test that measures what we care about, because the humans (at least some of them) are trying to fool you into thinking they're bots. Consequently, even if you have a question that would immediately and reliably tell a human-honestly-trying-to-answer apart from a bot, you can't win the game with it, because humans won't play along.

To make this meaningful, all human players should be trying to make others think they're human.

Absolutely: for such tests to be effective, all participants would need to make a genuine effort to act as Humans. The XP system introduced by the site is a smart approach to encouraging "correct" participation, but might there be more effective incentive structures to consider?

For instance, advanced AI or AGI systems could leverage platforms like these to discern the tactics and behaviors that make them more convincingly Human. If such systems were highly motivated to learn this and had the funds, they could even pay Human participants to ensure honest, genuine interaction, and then use that data to learn more effective tactics for passing as Human (at least in certain scenarios).
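To make the incentive point concrete, here is a minimal sketch in Python of what such a scheme might look like. I don't know the site's actual XP rules, so the names and point values below are purely hypothetical; the idea is just that judges should score only for correct verdicts, and human players should score only for being judged human, so that masquerading as a bot never pays.

```python
# Hypothetical XP scheme for a human-or-bot chat game.
# NOT the actual rules of humanornot.ai -- all names and
# point values here are invented for illustration.

from dataclasses import dataclass


@dataclass
class Round:
    partner_was_human: bool    # ground truth about the judge's partner
    judge_guessed_human: bool  # the judge's verdict


def judge_xp(r: Round) -> int:
    """Judges earn XP only for correct verdicts."""
    return 10 if r.judge_guessed_human == r.partner_was_human else 0


def human_player_xp(r: Round) -> int:
    """Human players earn XP only when judged human, so pretending
    to be a bot is strictly dominated."""
    if not r.partner_was_human:
        return 0  # bots don't compete on the human leaderboard
    return 10 if r.judge_guessed_human else -5


# Example: a human who successfully "plays bot" loses XP,
# and the judge who was fooled gains nothing.
r = Round(partner_was_human=True, judge_guessed_human=False)
print(judge_xp(r), human_player_xp(r))  # -> 0 -5
```

Under a structure like this, the best strategy for human players is to be as convincingly Human as possible, which is exactly the behavior the test needs.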

See also https://clip.cafe/blade-runner-1982/that-voight-kampf-test-of-yours/

  • What about AI researchers? How many of them do you think you could persuade?

If they were motivated to get it right and we weren't in a huge rush, close to 100%. Current-gen LLMs are amazingly good compared to what we had a few years ago, but (unless the cutting-edge ones are much better than I realise) they would still be easily unmasked by a motivated expert. So I shouldn't need to employ a clever strategy of my own -- just pass the humanity tests set by the expert.

  • How many random participants do you believe you could convince that you are not an AI?

This is much harder to estimate and might depend greatly on the constraints on the 'random' selection. (Presumably we're not randomly sampling from literally everyone.)

In the pre-GPT era, there were occasional claims that some shitty chatbot had passed the Turing test. (Eugene Goostman is the one that immediately comes to mind.) Unless the results were all completely fake/rigged, this suggests that non-experts are sometimes very bad at determining humanity via text conversation. So in this case my own strategy would be important, as I couldn't rely on the judges to ask the right questions or even to draw the right inferences from my responses.

If the judges were drawn from a broad enough pool to include many people with little-to-no experience interacting with GPT and its ilk, I couldn't rely on pinpointing the most obvious LLM weaknesses and demonstrating that I don't share them. (Depending on the structure of the test, I could perhaps talk the judges through the best way to unmask the bot. But that seems to go against the spirit of the question.) Honestly, off the top of my head I really don't know what would best convince the average person of my humanity via a text channel, and I wouldn't be very confident of success. 

(I'm assuming here that my AI counterpart(s) would be set up to make a serious attempt at passing the Turing test; obviously the current public versions are much too eager to give away their true identities.)

just pass the humanity tests set by the expert

What type of "humanity tests" would you expect an AI expert to employ?


many people with little-to-no experience interacting with GPT and its ilk, I couldn't rely on pinpointing the most obvious LLM weaknesses and demonstrating that I don't share them

Yes, I suppose much of this is predicated on the person conducting the test knowing a lot about how current AI systems would normally answer questions? So, to convince the tester that you are a Human, you could say something like... "An AI would answer like X, but I am not an AI, so I will answer like Y"?