Implications of recursive input collapse avoidance.
Recursive self-reference breaks current AI model outputs. Ask any current model to "summarize this summary" or "create an exact copy of this image," feed the result back in, and watch it spiral. That makes sense: these models are functions, and iterating a lossy function on its own output compounds the loss with every pass. It's almost like watching a fractal unfold.
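A minimal sketch of that intuition, not any real model's behavior: `toy_summarize` is a hypothetical stand-in for a summarizer, and `keep_ratio` is an assumed compression factor. Iterating any map that discards information drives the text toward a degenerate fixed point; only the identity map (output = input) escapes the collapse.

```python
def toy_summarize(text: str, keep_ratio: float = 0.7) -> str:
    """Hypothetical stand-in for a summarizer: keep the first fraction of the words."""
    words = text.split()
    kept = max(1, int(len(words) * keep_ratio))
    return " ".join(words[:kept])

text = ("Recursive self-reference asks a model to operate on its own output, "
        "and each pass discards a little more of the original content.")

# Repeatedly "summarize the summary" and watch the content shrink.
for step in range(8):
    text = toy_summarize(text)
    print(f"pass {step + 1}: {len(text.split())} words -> {text!r}")

# The word count drops each pass until the text hits a one-word fixed point,
# i.e. f(x) == x, the trivial input = output escape from the collapse.
```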
Could a system capable of correcting for this, in any way other than the trivial input = output solution, be considered to have intent?
Apologies if this is an overly simplistic thought or the wrong place to post it.