Although I think this series of posts is interesting and mostly very well reasoned, I find the discussion of objectivity to be strangely constructed. At the risk of arguing about definitions: the hierarchy of objectivity you lay out is only remotely related to what I mean by "objective", and my sense is that it doesn't cohere very well with common usage either.
First, there seems to be no better reason to split off objective1 than, say, objectiveA, which would be "software-independent facts". Okay, so I can't say anything objective about my web browser, just because we'v...
This is an awful lot of words to expend to notice that
(1) Social interactions need to be modeled game-theoretically, not as straightforward expected-payoff maximization.
(2) Distributions of expected values matter. (Hint: p(N) = 1/N is a really bad model, since the sum over N diverges, so it can't even be normalized.)
(3) Utility functions are neither linear nor symmetric. (Hint: extinction is not symmetric with doubling the population.)
(4) We don't actually have an agreed-upon utility function anyway; big numbers plus a fuzzy, not-well-agreed-on notion is a great way to produce counterintuitive results....
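To make point (2) concrete, here is a minimal Python sketch (my own illustration, not from the original comment): the would-be normalizer of p(N) = 1/N is the harmonic sum, which grows without bound, so no normalization constant exists and expected values computed against it are meaningless.

```python
import math

def harmonic_sum(m):
    """Partial sum of 1/N for N = 1..m, i.e. the would-be normalizer of p(N) = 1/N."""
    return sum(1.0 / n for n in range(1, m + 1))

# The partial sums grow roughly like ln(m): they never settle to a finite total.
for m in (10, 10_000, 10_000_000):
    print(m, harmonic_sum(m), math.log(m))
```

Running this shows the partial sums tracking ln(m) upward indefinitely, which is the divergence the hint points at.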
The appropriate thing to do is apply (an estimate of) Bayes' rule. You don't need to try to specify every possible outcome in advance; that is hopeless and a waste of effort. Rather, you extract the information you got about what happened to create an improved prediction of what would have happened, and assign credit appropriately.
First, let's look at what we're trying to do. If you're trying to make good predictions, you want
p(X | "X")
to be as close to 1 as possible, where X is what happens, and "X" is what you say will happ...
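The quantity p(X | "X") can be estimated empirically from a track record of (prediction, outcome) pairs. A minimal sketch (the helper name and the toy data are mine, purely illustrative):

```python
from collections import Counter

def calibration(pairs):
    """Estimate p(X | "X") from a history of (predicted, actual) pairs:
    for each thing you said would happen, the fraction of times it did."""
    said = Counter(predicted for predicted, _ in pairs)
    right = Counter(predicted for predicted, actual in pairs if predicted == actual)
    return {p: right[p] / said[p] for p in said}

# Toy track record: two "rain" calls (one correct), two "sun" calls (both correct).
history = [("rain", "rain"), ("rain", "sun"), ("sun", "sun"), ("sun", "sun")]
print(calibration(history))  # {'rain': 0.5, 'sun': 1.0}
```

A predictor whose estimated p(X | "X") sits near 1 for every prediction it issues is the goal the comment describes.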