Really the Thomason article is emblematic of the problem. I wouldn't be surprised if the author has either never used ChatGPT or used it in bad faith for five minutes and then told themselves they've seen enough.
Is the insinuation here that if the author had more personal experience with ChatGPT, they would consider it more capable of sapience? It is precisely because the illusion of sapience is so robust that we need reminding of the statistical algorithm driving the model. There's no personality, no judgment, no awareness. When exactly would this awareness...
I'm at a loss for how you read rudeness, superiority, bad faith, or deceit in my comment. It was an earnest question based on my understanding of LLMs, which I recognize is limited. I was trying to frame the problem as I understand it as a software developer. LLMs are still software, which makes it useful to think about the inputs, outputs, and state of the data in concrete terms. The way I asked it was an attempt to find out what I'm missing and/or highlight the issues that I see in your post.
I'm new to this forum, and it seems like there is a conse...