One interesting case where this theorem doesn't apply would be if there are only finitely many possible outcomes. This is physically plausible: consider multiplying the maximum data density¹ by the spacetime hypervolume of your future light cone from now until the heat death of the universe.
Is Science Maniac Verrez a real series, for which HJPEV was named? Or was it invented for glowfic, with the causation going the other way?
Relatedly, are Thellim or Keltham based on anyone you knew (or, for that matter, on celebrities, or on characters from fiction written in dath ilan)?
I don't see how to do that, especially given that it's not a matter of meeting some threshold, but rather of maximizing a value that can grow arbitrarily.
Actually, you don't even need the ways-to-arrange argument. Suppose I want to predict/control the value of a particular nonnegative integer n (the number of cubbyholes), with monotonically increasing utility, e.g. U(n) = n. Then the encoding length E(n) of a given outcome must be longer than the code length for each greater outcome: E(n) > E(n+k) for every k > 0. However, code lengths are positive integers, so this would require an infinite strictly decreasing sequence of positive integers, which is impossible.
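To make the pigeonhole behind that concrete, here's a minimal sketch (my own illustration, not from the thread): if every outcome larger than n got a strictly shorter code than E(n), the infinitely many outcomes n+1, n+2, … would all need distinct codes shorter than some fixed length L, but only finitely many such bitstrings exist.

```python
# Sketch: why E(n) > E(n+k) for all k > 0 is impossible.
# Only finitely many distinct nonempty bitstrings are shorter than
# any fixed length L, so they can't cover infinitely many outcomes.

def strings_shorter_than(L):
    """Count distinct nonempty bitstrings with length < L."""
    return sum(2**length for length in range(1, L))

L = 10  # hypothetical code length E(n) for some outcome n
print(strings_shorter_than(L))  # 2**10 - 2 = 1022 available codes, not infinitely many
```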
What if I want greebles?
To misuse localdeity's example, suppose I want to build a wall with as many cubbyholes as possible, so that I can store my pigeons in them. In comparison to a blank wall, each hole makes the wall more complex, since there are more ways to arrange n+1 holes than to arrange n holes (assuming the wall can accommodate arbitrarily many holes).
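The counting claim checks out numerically; here's a quick sanity check (with hypothetical numbers of my choosing) that, as long as candidate positions far outnumber holes, each added cubbyhole strictly increases the number of possible arrangements.

```python
from math import comb

# Toy check: with N candidate hole positions (assumed large relative to
# the number of holes), arrangements of n+1 holes outnumber those of n.
N = 1000  # assumed number of candidate positions on the wall
for n in range(10):
    assert comb(N, n + 1) > comb(N, n)

print(comb(N, 3))  # ways to place 3 holes among 1000 positions
```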