If any of the major AI labs (OpenAI, Anthropic, DeepMind, xAI, etc.) wanted to make a model that could pass an "adversarial" Turing test (like the Kurzweil/Kapor bet), they probably could have trained one by now. The thing is, there's no profit in training such a model, which is why none of them have done it.