The default in the literature on prediction markets and decision markets is to expect resolutions to be real-world events rather than probabilistic estimates by experts. For instance, people would predict "What will the GDP of the US be in 2025?", and that prediction would be scored against the future measured GDP of the US. Let's call these empirical resolutions.
These resolutions have a few nice properties:
Here is another point by @jacobjacob, which I'm copying here so that it isn't lost in the mists of time:
Though just realised this has some problems if you expect predictors to be better than the evaluators: e.g. they're like "once the event happens everyone will see I was right, but up until then no one will believe me, so I'll just lose points by predicting against the evaluators"
Maybe in that case you could eventually also score the evaluators based on the final outcome, or somehow re-compensate the people who were wronged the first time.
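The two-round idea above can be sketched in code. This is a minimal, hypothetical illustration (not anyone's actual scoring scheme): a predictor is first scored against the evaluators' probabilistic estimate, then re-scored once the empirical outcome is known. All numbers and the use of the logarithmic scoring rule are assumptions for the example.

```python
import math

def log_score(p: float, outcome: int) -> float:
    """Logarithmic scoring rule: reward for assigning probability p to a binary outcome."""
    return math.log(p) if outcome == 1 else math.log(1 - p)

# Hypothetical numbers: a predictor says 0.9, the evaluator panel says 0.3,
# and the event eventually does happen (outcome = 1).
predictor_p, evaluator_p, outcome = 0.9, 0.3, 1

# Round 1: the evaluators' estimate is treated as the resolution, so the
# predictor's provisional score is their expected log score under the
# evaluators' belief. Disagreeing with the evaluators costs points here.
provisional = (evaluator_p * log_score(predictor_p, 1)
               + (1 - evaluator_p) * log_score(predictor_p, 0))

# Round 2: once the empirical outcome resolves, re-score against it,
# re-compensating predictors who were "wronged" in round 1.
final = log_score(predictor_p, outcome)

print(f"provisional score vs evaluators: {provisional:.3f}")
print(f"final score vs outcome:          {final:.3f}")
```

In this toy case the predictor's provisional score is worse than their final one, which is exactly the incentive problem quoted above: until the re-scoring happens, predicting against the evaluators looks like a losing move.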