Comments

Thanks for your answer. Would it be fair to say that both of them are oversimplifying the other's position and that they are both, to some extent, right?

I think I understand, thank you. For reference, this is the tweet which sparked the question: https://twitter.com/RichardMCNgo/status/1735571667420598737

I was confused as to why you would necessarily need "understanding", and not just simple next-token prediction, to do what ChatGPT does.

I think it does, thank you! In your model, does a squirrel perform better than ChatGPT at practical problem solving simply because it was “trained” on practical problem-solving examples, while ChatGPT performs better on language tasks because it was trained on language? Or is there something fundamentally different between them?

I don’t really have a coherent answer to that, but here goes (before reading the spoiler): I don’t think the model understands anything about the real world, because it never experienced the real world. It doesn’t understand why “a pink flying sheep” is a language construct and not something that was observed in the real world.

Rereading my answer, maybe we also don’t have any understanding of the real world; we have just come up with some patterns based on the qualia (tokens) that we have experienced (been trained on). Who is to say whether those patterns match some deeper truth or not? Maybe there is a vantage point from which our “understanding” will look like hallucinations.

I have a vague feeling that I understand the second part of your answer, though I'm not sure. In that model of yours, are the hallucinations of ChatGPT just the result of an imperfectly trained model? And can a model ever be trained to perfectly predict text?

Thanks for the answer; it gave me some serious food for thought!

Okay, all of that makes sense. Could this mean that the model didn’t learn anything about the real world, but it did learn something about the patterns of words which earn it thumbs up from the RLHFers?

Thanks for the detailed answer! I think that helped.

Does the following make sense:

We use language to talk about events and objects (emotions, trees, etc.). Since those are things that we have observed, our language will have some patterns that are related to the patterns of the world. However, the patterns in the language are not a perfect representation of the patterns in the world (we can talk about things falling away from our planet, we can talk about fire that consumes heat instead of producing it, etc.). An LLM trained on text then learns the patterns of the language but not the patterns of the world. Its "world" is only language, and that's the only thing it can learn about (I try to make this concrete with a toy sketch at the end of this comment).

Does the above sound true? What are the problems with it?

I am ignoring your point that neural networks can be trained on a host of other things, since there is little discussion around whether or not Midjourney "understands" the images it is generating. However, the same point should apply to other modalities as well.
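
To make the "patterns of language, not patterns of the world" point concrete, here is a minimal sketch of my own (a toy bigram model, not a claim about how GPT actually works; the corpus and function names are made up for illustration). It only tracks which words follow which in its training text, so physically impossible sentences are exactly as "natural" to it as true ones.

```python
# Toy sketch (my own illustration): a tiny bigram "language model" trained only
# on text. It learns which words tend to follow which words, and nothing about
# whether the sentences describe a physically possible world.
from collections import Counter, defaultdict
import random

corpus = [
    "the apple falls toward the ground",
    "the apple falls away from the planet",       # physically wrong, linguistically fine
    "fire produces heat and light",
    "fire consumes heat instead of producing it",  # also physically wrong
]

# Count word -> next-word transitions across the whole corpus.
transitions = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for current_word, next_word in zip(words, words[1:]):
        transitions[current_word][next_word] += 1

def sample_next(word):
    """Sample a next word in proportion to how often it followed `word` in the corpus."""
    candidates = transitions[word]
    words, weights = zip(*candidates.items())
    return random.choices(words, weights=weights)[0]

def generate(start, length=6):
    """Greedy-ish rollout: keep sampling next words until we hit a dead end."""
    out = [start]
    for _ in range(length):
        if out[-1] not in transitions:
            break
        out.append(sample_next(out[-1]))
    return " ".join(out)

if __name__ == "__main__":
    random.seed(0)
    # "falls" is followed by "toward" and "away" equally often in the corpus,
    # so the model will continue "the apple falls ..." either way with equal
    # confidence: it only learned patterns in the text, not patterns in the world.
    for _ in range(3):
        print(generate("the"))
```

The point is just that the training signal here is word co-occurrence; nothing in it distinguishes the physically true sentences from the impossible ones, which is what I mean by the model's "world" being only language.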

I love the idea that petertodd and Leilan are somehow interrelated with the archetypes of the trickster and the mother goddess inside GPT's internals. I would love to see some work done on discovering other such archetypes, and the weird, seemingly random tokens that correlate with them. Figures like the Sun God, a great evil snake, or a prophet seem to pop up in religions all over the place, so why not inside GPT as well?

On the bright side, Connor Leahy from Conjecture is going to be at the summit, so there will be at least one strong voice for existential risk present there.
