Kipler · 3y · 10

I find your argument interesting, but I don't understand how it applies to the GPT-n family.

From my understanding, GPT-3 (the only one I have really read about) is merely a probabilistic language-generation algorithm: you feed it a sequence of words and it tries to guess the most likely word that follows, based on all the text it read during training. However, I might not have correctly understood how GPT-n works; in that case I'd love to get an explanation or a link to one.
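To make concrete what I mean by "guess the most likely word that follows", here is a toy sketch. This is a word-level bigram counter over a tiny made-up corpus, which is vastly simpler than GPT-3's actual transformer over subword tokens; it is only meant to convey the "predict the most likely next word from training text" idea:

```python
from collections import Counter, defaultdict

# Tiny made-up corpus for illustration only.
corpus = "the cat sat on the mat and the cat slept".split()

# Count how often each word follows each other word in the corpus.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the word most often seen after `word` in the corpus."""
    counts = following[word]
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # "cat" follows "the" twice, "mat" only once
```

GPT-3 replaces the raw counts with a learned neural network, so it can generalize to sequences it never saw verbatim, but as far as I understand the objective is the same: score candidate continuations and pick (or sample) a likely one.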
 

On the other hand, I find the idea of making an AI introspect very interesting, even if I'm not qualified enough to understand its technical implications.