I don't claim that this is canonical, but here's the way David and I use the terms amongst ourselves.
First, we don't usually use the term "representation theorem" at all, but if we did, that would naturally refer to a theorem saying that some preferences/behavior/etc can be represented in a particular way, like e.g. expected utility maximization over some particular states/actions/whatever. We would probably classify e.g. VNM as a representation theorem, though we basically-never think about VNM at all so we don't really need a term for it.
Second, coherence. When we talk about coherence theorems, we usually don't think about exploitability, but rather about Pareto suboptimality. (Of course exploitability is a special case of Pareto suboptimality, but the reverse doesn't always apply easily.) Vibes-wise, we're usually thinking about a system behaving Pareto-optimally across different places - e.g. different parts of the system behaving jointly Pareto-optimally, or choices made across different inputs/worlds being jointly Pareto-optimal, or decisions at different times being jointly Pareto-optimal. The behavior is "coherent" in the sense that the system's different parts/places/times all "act in a consistent way", such that they're jointly Pareto-optimal. That's the sort of thing which "coherence" gestures at in our usage.
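As a toy sketch of the one direction that does hold - an exploited (e.g. money-pumped) agent ends up with strictly less of some resource in every world, so its policy is Pareto-dominated - here is a minimal Pareto-dominance check. The payoff numbers and policy names are purely illustrative assumptions, not from the thread:

```python
def pareto_dominates(a, b):
    """True if outcome vector `a` is at least as good as `b` in every
    coordinate (world / part / time) and strictly better in at least one."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

# Hypothetical payoffs across three possible worlds for two policies.
refuse_trades = (10, 10, 10)  # keep your money in every world
money_pumped = (9, 9, 9)      # pay $1 per pump cycle, regardless of world

assert pareto_dominates(refuse_trades, money_pumped)      # exploitable => Pareto-suboptimal
assert not pareto_dominates(money_pumped, refuse_trades)

# The converse direction fails: neither of these dominates the other,
# so Pareto incomparability tells you nothing about exploitability.
assert not pareto_dominates((10, 9), (9, 10))
assert not pareto_dominates((9, 10), (10, 9))
```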
a theorem saying that some preferences/behavior/etc can be represented in a particular way, like e.g. expected utility maximization over some particular states/actions/whatever
So, I take it that Savage's theorem is a representation theorem under your schema?
Of course exploitability is a special case of Pareto suboptimality, but the reverse doesn't always apply easily
Theoretically or practically? I.e., you can't easily derive an exploitability result from a Pareto suboptimality result? Or you're IRL stuck in an (inadequate) equilibrium far from the Pareto frontier?
I'd previously worked through a dozen or so chapters of the same Woit textbook you've linked, as context for representation theory.
Given some group $G$, a (linear) "representation" is a homomorphism from $G$ into $GL(V)$, the general linear group of some vector space $V$.
That is, a map $\pi : G \to GL(V)$ is a representation iff $\pi(g_1 g_2) = \pi(g_1)\pi(g_2)$ for all elements $g_1, g_2 \in G$.
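To make that concrete, here's a minimal sketch (a toy example of my own, not from Woit) verifying the homomorphism condition for a representation of $\mathbb{Z}/2\mathbb{Z}$ on $\mathbb{R}^2$:

```python
# Z/2Z = {0, 1} under addition mod 2, represented on R^2 by 2x2 matrices:
# pi(0) = identity, pi(1) = reflection across the x-axis.
pi = {
    0: ((1, 0), (0, 1)),
    1: ((1, 0), (0, -1)),
}

def matmul(a, b):
    """Product of two 2x2 matrices given as nested tuples."""
    return tuple(
        tuple(sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2))
        for i in range(2)
    )

# Homomorphism property: pi(g1 + g2 mod 2) == pi(g1) * pi(g2).
for g1 in (0, 1):
    for g2 in (0, 1):
        assert pi[(g1 + g2) % 2] == matmul(pi[g1], pi[g2])
```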
Does "preferences between deals dependent on unknown world states" have a group structure? If not, it cannot be a representation in the sense meant by Woit.
I can see how this could be confusing, but in mathematics, the phrase "representation theorem" is not specifically about "representation theory". Wikipedia's definition is quite broad:
In mathematics, a representation theorem is a theorem that states that every abstract structure with certain properties is isomorphic to another (abstract or concrete) structure.
The list of examples it gives is probably more useful.
(Adding to the confusion: a famous example of a representation theorem is a corollary of Cayley's Theorem: every group is isomorphic to a subgroup of some symmetric group, i.e. every group can be represented by permutations.)
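As a toy sketch of that Cayley-style construction (my own example, not from the comment): each element of $\mathbb{Z}/3\mathbb{Z}$ acts on the group itself by left addition, which yields an injective homomorphism into the symmetric group $S_3$:

```python
n = 3  # working with Z/3Z = {0, 1, 2} under addition mod 3

def perm(g):
    """The permutation of {0, ..., n-1} given by left addition of g."""
    return tuple((g + x) % n for x in range(n))

def compose(p, q):
    """Composition of permutations: (p o q)(x) = p(q(x))."""
    return tuple(p[q[x]] for x in range(n))

# The map g -> perm(g) is a homomorphism into S_3 ...
for g1 in range(n):
    for g2 in range(n):
        assert compose(perm(g1), perm(g2)) == perm((g1 + g2) % n)

# ... and it's injective, so Z/3Z is isomorphic to a subgroup of S_3.
assert len({perm(g) for g in range(n)}) == n
```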
TL;DR: Is a coherence theorem anything that says "if you aren't coherent in some way, you predictably have to forgo some sort of resource or be exploitable in some way", and a representation theorem anything that says "rational cognitive structures can be represented by some variant of expected utility maximization"? Is there no difference? Is one a subset of the other? Some secret fourth thing?
Just today, I was arguing that Savage's subjective expected utility model should be called a representation theorem, as Wikipedia claims, for an article my co-worker was writing, as opposed to a coherence theorem. My opponent took the bold stance that Wikipedia may not be the be-all and end-all of the discussion, and that he wasn't sold on it being a representation theorem, in spite of the fact that you're representing one structure (preferences between deals dependent on unknown world states) using another (subjective expected utility), as in representation theory.
Unwilling to accept this lack of decisive resolution, I turned to that other infallible oracle, Stampy. (Stampy's received some hefty upgrades recently, so it's even more infallible than before!) Stampy demurred, and informed me that the literature on AI safety doesn't clearly distinguish between the two.
Undaunted, I delved into the literature myself and I found the formerly rightful caliph (sort of) opining on this very topic.
Returning to my meddlesome peer, I did speak unto him that Savage's representation theorem was not a coherence theorem, for there was no mention of exploitability, as in the Dutch Book theorems. Rather, Savage's theorem was akin to Von Neumann and Morgenstern's.
But alas! He did not yield and spake unto me: "IDK man, seems like there's not a consensus and this is too in the weeds for a beginner's article". Or something. I forget. So now I must ask you to adjudicate, dear readers. What the heck is the difference between a coherence theorem and a representation theorem?