Yep! That's something that I wrote in my original writeup:

> Even when it claims to do so, [ChatGPT] doesn’t consult a datasheet or look up information — it’s not even connected to the internet! Therefore, what seems like “reasoning” is really pattern recognition and extrapolation, providing what is most likely to be the case based on training data. This explains its failures in well-defined problem spaces: statistically likely extrapolation becomes wholly untrue when conditioned on narrow queries.

My last comment about "self-awareness seems to be 100%" was a (perhaps non-obvious) joke: the point was that the model is at least trained to recommend that it not be trusted blindly. But even that isn't a conclusion arrived at via "awareness" or "reasoning" in the traditional sense; again, it's just training data and machine learning.