Comments

> maybe Gary Marcus-esque analysis of the pattern of LLM mistakes?

That is a good idea. Can you recommend one?

Thank you. 

> It's unclear if this implies fundamental differences in how they work versus different specializations.

Correct. That article argues that LLMs are more powerful than humans at this skill, but not that they have different (implicit) goal functions or that their cognitive architecture is deeply different from a human's.


> Like, obviously it's gonna be alien in some ways and human-like in other ways. Right

It has been said that since LLMs predict human output, a sufficiently improved LLM will behave in a quite human way.

> Can you say more about what you mean by "Where can I find a post

As part of a counterargument to that, we could find evidence that their logical structure is quite different from a human's. I'd like to see such a write-up.

> Surely you would agree that if we were to do a cluster analysis of the cognition of all humans alive today + all LLMs, we'd end up with two distinct clusters (the LLMs and then humanity) right?

I agree, but I'd like to see some article or post arguing that.

Thank you. But being manipulative, silly, sycophantic, or nasty is pretty human. I am looking for hints of a fundamentally different cognitive architecture.

Yes, I agree with your point about most journalists. Still, I think well enough of the professors and AI developers that I mentioned to imagine that they would have a more positive attitude.

> He is also maybe just somewhat annoying to work with and interact with

I have heard that elsewhere as well. Still, I don't really see that myself, whether in his public posting or in my limited interactions with him. He can be rough and on rare occasion has said things that could be considered personally disrespectful, but I didn't think that people were that delicate.

> ... advocating for a total stop... which might make AI lab... less likely to pay attention to him.

True, but I had thought better of those people. I would have thought that they could take criticism, especially from someone who inspired some of them to pursue their careers.

Thank you for your answers. Overall, I think you are right, though I don't fully understand why.

At the time of Hofstadter's Singularity Summit talk, I wondered why he wasn't "getting with the program", and it became clear he was a mysterian: he believed, without being a dualist, that some things, like the mind, are ultimately impossible to understand or describe.

This 2023 interview shows that the new generation of AI has done more than change his mind about the potential of AI; it has struck at the core of his mysterianism:

> the human mind is not so mysterious and complex and impenetrably complex as I imagined it was when I was writing Gödel, Escher, Bach and writing I Am a Strange Loop.

Correct. That is why the original Turing Test is a sufficient-but-not-necessary test: It is meant to identify an AI that is definitively above human level.

What about a Turing Test variant in which such inquiries are banned?
