Comments

Yes, I agree with your point about most journalists. Still, I think well enough of the professors and AI developers that I mentioned to imagine that they would have a more positive attitude.

He is also maybe just somewhat annoying to work with and interact with

I have heard that elsewhere as well. Still, I don't really see that myself, whether in his public posting or in my limited interactions with him. He can be rough and on rare occasion has said things that could be considered personally disrespectful, but I didn't think that people were that delicate.

... advocating for a total stop... which might make AI lab... less likely to pay attention to him.

True, but I had thought better of those people. I would have thought that they could take criticism, especially from someone who inspired some of them into their career.

Thank you for your answers. Overall, I think you are right, though I don't quite understand.

At the time of Hofstadter's Singularity Summit talk, I wondered why he wasn't "getting with the program", and it became clear he was a mysterian: he believed -- without being a dualist -- that some things, like the mind, are ultimately, basically, essentially impossible to understand or describe.

This 2023 interview shows that the new generation of AI has done more than change his mind about the potential of AI: it has struck at the core of his mysterianism.

the human mind is not so mysterious and complex and impenetrably complex as I imagined it was when I was writing Gödel, Escher, Bach and writing I Am a Strange Loop.

Correct. That is why the original Turing Test is a sufficient-but-not-necessary test: It is meant to identify an AI that is definitively above human level.

What about a Turing Test variant in which such inquiries are banned?

That would be possible. Plenty of people don't know much about this topic. If you had such a judge, do you think actually doing a Turing Test (or some variant) for ChatGPT is a good idea?

Nice! I am surprised we don't hear more about attempts at a Turing Test, even if it is not quite there yet.

That looks pretty close to the level of passing a Turing Test to me. So is there a way of trying a full Turing Test, or something like it, perhaps building on the direction you show here?

Do you think there is a place for a Turing-like test that determines how close to human intelligence it is, even if it has not reached that level?

ChatGPT isn't at that level.

That could well be. Do you think there is a place for a partial Turing Test, as in the Loebner Prize, to determine how close to human intelligence it is, even if it has not reached that level?
