This is a linkpost for https://lumpenspace.substack.com/i/158975094/the-networks-indra
I scrutinise the so-called "reversal curse", wherein LLMs seem unable to traverse inverse relationships between conceptual nodes: a model trained that "A is B" fails to recall that "B is A".
I show that, far from being proof of a lack of logical ability, it is a normal artefact of salience, a phenomenon known in humans as associative recall asymmetry, and I propose a conceptual-network model of its causes that works independently of substrate.
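To make the substrate-independence point concrete, here is a minimal sketch (not the post's actual model): a toy associative memory that only ever strengthens links in the direction they were observed. The `AssociativeMemory` class and its candidate-list query are illustrative assumptions; the Tom Cruise example is the standard one from the reversal-curse literature.

```python
from collections import defaultdict

class AssociativeMemory:
    """A toy directed associative network between concepts."""
    def __init__(self):
        self.strength = defaultdict(float)  # (cue, target) -> link weight

    def observe(self, cue, target):
        # Training only ever reinforces the cue -> target direction,
        # so the reverse link is never strengthened.
        self.strength[(cue, target)] += 1.0

    def recall(self, cue, candidates):
        # Return the candidate most strongly linked from the cue,
        # or None if no directed link exists at all.
        best_score, best = max((self.strength[(cue, c)], c) for c in candidates)
        return best if best_score > 0 else None

memory = AssociativeMemory()
# Train exclusively on the forward phrasing, as in "A is B" data.
for _ in range(100):
    memory.observe("Tom Cruise's mother", "Mary Lee Pfeiffer")

people = ["Mary Lee Pfeiffer", "Tom Cruise's mother", "someone else"]
print(memory.recall("Tom Cruise's mother", people))  # Mary Lee Pfeiffer
print(memory.recall("Mary Lee Pfeiffer", people))    # None: reverse never trained
```

Any memory trained this way, silicon or biological, shows the same asymmetry, which is the sense in which the explanation does not depend on substrate.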