Apologies if this is answered elsewhere and I couldn't find it. In my AI reading I keep coming across an agent's utility function, $u$, mapping world-states to real numbers.
The existence of $u$ is justified by the VNM utility theorem. The first axiom required for VNM utility is 'Completeness' -- in the context of AI this means that for every pair of world-states, $A$ and $B$, the agent knows $A \prec B$, $A \succ B$, or $A \sim B$.
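To make the axiom concrete for myself, here's a minimal sketch (the state names and judgments are made up for illustration, not from any particular formalism) of what completeness demands over a finite set of world-states -- every pair must carry some judgment, even indifference:

```python
# Sketch: completeness of a preference relation over a small, finite
# set of "world-states". All names here are hypothetical.
from itertools import combinations

world_states = ["peach_ice_cream_only", "no_shakespeare", "status_quo"]

# A hypothetical agent's pairwise judgments: each unordered pair maps
# to "<", ">", or "~" (indifference). One pair is deliberately missing.
judgments = {
    frozenset({"peach_ice_cream_only", "status_quo"}): "<",
    frozenset({"no_shakespeare", "status_quo"}): "<",
    # {"peach_ice_cream_only", "no_shakespeare"} is left unjudged.
}

def is_complete(states, judgments):
    """Completeness: every pair of distinct states has some judgment."""
    return all(
        frozenset({a, b}) in judgments
        for a, b in combinations(states, 2)
    )

print(is_complete(world_states, judgments))  # False: one pair is unjudged
```

Even in this toy version, the axiom forces a verdict on the ice-cream-vs.-Shakespeare pair; adding any judgment for it (even `"~"`) makes `is_complete` return `True`. That's exactly the step I don't see how to justify for real agents over full world-states.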
Completeness over world-states seems like a huge assumption. Every agent we make this assumption for must already have the tools to compare 'world where, all else equal, the only food is peach ice cream' vs. 'world where, all else equal, Shakespeare never existed.'* I have no idea how I'd reliably make that comparison as a human, and that's a far cry from '$\sim$', being indifferent between the options.
Am I missing something that makes the completeness assumption reasonable? Is 'world-state' used loosely, referring to a point in a vastly smaller space, with the exact space never being specified? Essentially, I'm confused; can anyone help me out?
*If it's important, I can try to cook up better-defined difficult comparisons. 'All else equal' is totally under-specified... where does the ice cream come from?