Do Transformers Habituate? Investigating Repetition Suppression in Language Models
Habituation, the progressive weakening of neural responses to repeated stimuli, is fundamental to biological intelligence: it enables efficient information processing by filtering out redundancy. The objective of this post is to report the insights I found while looking for such a mechanism within LLMs. Understanding habituation in transformers matters because:

* It reveals whether AI...