I don't see how the analogy with humans helps. We don't know the "mechanism" behind how the human mind works, but that's not the case for LLMs. We know exactly the mechanism by which they produce output, and that mechanism is no different from what they were trained to do, i.e. predict the next word. There is no other mysterious mechanism at work during inference.

As for a plan, it doesn't have one. There is no "memory" in which to store a plan. It's just a big, complex function that takes an input and produces an output, namely the next word, and then repeats the process over and over until it's done.
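
To make that concrete, here is a minimal sketch of the loop I'm describing. The `next_token_logits` stub is hypothetical (a real LLM would compute those scores with a trained transformer); the point is that the only state carried between steps is the growing token list itself:

```python
import random

VOCAB_SIZE = 1_000
EOS = 0  # hypothetical end-of-sequence token id

def next_token_logits(tokens):
    """Stand-in for the trained network: a pure function from the token
    sequence so far to a score for each candidate next token.
    (Hypothetical stub; a real LLM computes this with a transformer.)"""
    random.seed(sum(tokens) + len(tokens))  # deterministic given the input
    return [random.gauss(0.0, 1.0) for _ in range(VOCAB_SIZE)]

def generate(prompt, max_new_tokens=20):
    tokens = list(prompt)
    for _ in range(max_new_tokens):
        logits = next_token_logits(tokens)  # fresh call; no hidden state carried over
        next_tok = max(range(VOCAB_SIZE), key=logits.__getitem__)  # greedy pick
        if next_tok == EOS:
            break
        tokens.append(next_tok)  # the whole sequence is re-fed on the next step
    return tokens

print(generate([42, 7, 123]))
```

Everything the model "remembers" from step to step is just the text it has already emitted, fed back in as input.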