I can understand where you got these ideas from, but they are far from reality.

The problem is that you are confusing the system with its outputs. Saying that GPT-4 is a wrapper of human knowledge is like saying the same about BERT-like models (and we would obviously reject that idea).

GPT-4 is not an approximation of its dataset, nor a reasoning machine; it is simply a next-token predictor with many, many limitations. I know the outputs are very impressive, but you can't treat these models as comparable to human behavior. Some of your examples can be explained by tokenization problems, dataset bias, architectural limitations, and limitations of the training objective.
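To make "next-token predictor" concrete, here is a minimal sketch of the objective in PyTorch. The embedding plus linear head is a toy stand-in for the transformer stack, and all names and sizes here are illustrative, nothing like GPT-4's actual architecture; the point is only that training rewards predicting token t+1 from tokens up to t, and nothing else.

```python
import torch
import torch.nn.functional as F

# Toy stand-in for the transformer stack: one embedding plus one linear head.
# vocab_size and d_model are illustrative, not real model dimensions.
vocab_size, d_model = 100, 32
embed = torch.nn.Embedding(vocab_size, d_model)
head = torch.nn.Linear(d_model, vocab_size)

# A batch of token ids standing in for an encoded sentence.
tokens = torch.randint(0, vocab_size, (1, 16))

# The entire training objective: predict token t+1 from tokens up to t.
hidden = embed(tokens)  # (batch, seq, d_model)
logits = head(hidden)   # (batch, seq, vocab)
loss = F.cross_entropy(
    logits[:, :-1].reshape(-1, vocab_size),  # predictions at positions 0..n-2
    tokens[:, 1:].reshape(-1),               # targets are simply the next tokens
)
print(loss.item())  # the model is optimized to reduce this number, nothing more
```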

Speaking as someone who trains LLMs: many simple things can influence the outputs, even the natural biases of language itself.
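As one concrete example of the tokenization problems I mentioned: the model never sees your text, only token ids, and even a leading space or a capital letter changes those ids. A quick sketch, assuming the tiktoken library is installed:

```python
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")  # the encoding used by GPT-4-era models

# The "same" word becomes different token ids depending on surface details,
# so the model receives genuinely different inputs for each variant.
for text in ["hello", " hello", "Hello", "hellohello"]:
    print(repr(text), "->", enc.encode(text))
```

None of this resembles how a human reads, which is part of why comparing outputs to human behavior is so misleading.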

I don't think we should treat GPT as an "almost human" system or a "human cousin"; it is just another machine learning system, very far from how our brain works. It is impressive, but don't be mistaken, just impressed.