
If someone notices that mortgage defaults are uncorrelated, and new instruments are invented to exploit that lack of correlation, investors will be more willing to buy securitized mortgages, so rates are bid down and mortgages become cheaper for homeowners. But that in itself should not make defaults more correlated.

In the event, the correlation models were off, and defaults were more correlated from the start. But that is not a problem of public versus private knowledge or anything like that; they were just bad models. If the pricing had been done correctly to begin with, I see nothing paradoxical in introducing CDOs leading to persistently lower mortgage rates -- the lack of correlation doesn't stop being real just because people believe in it.
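To make the point concrete, here is a toy Monte Carlo sketch of why the correlation assumption is what matters. All the numbers, and the crude one-factor mixing rule used to induce correlation, are illustrative assumptions of mine, not any model actually used to price CDOs: each loan has the same marginal default probability either way, but when defaults are correlated the pool is far more likely to blow past a senior tranche's attachment point.

```python
import random

def tranche_hit_rate(n_loans=100, p_default=0.05, rho=0.0,
                     attach=0.10, n_trials=20_000):
    """Fraction of trials in which pool losses exceed `attach`
    (i.e. a senior tranche attaching at 10% takes a hit).

    Correlation is induced by a crude mixture: with probability `rho`
    a loan copies a shared "bad year" draw, otherwise it defaults
    idiosyncratically. Either way its marginal default probability
    stays `p_default`, so only the correlation differs.
    """
    hits = 0
    for _ in range(n_trials):
        bad_year = random.random() < p_default  # shared macro draw
        defaults = 0
        for _ in range(n_loans):
            if random.random() < rho:
                defaults += bad_year
            else:
                defaults += random.random() < p_default
        if defaults / n_loans > attach:
            hits += 1
    return hits / n_trials
```

With `rho=0` the pool is binomial and losses above 10% are rare; with high `rho` the pool either mostly survives or mostly defaults, so the senior tranche is hit far more often, even though every individual mortgage is exactly as risky as before. If you price the senior tranche assuming the first regime while the world is in the second, you get the 2008 story; if the low correlation is real, the cheap senior funding is real too.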

"If you look at the rules for Conway's Game of Life (which is Turing-complete, so we can embed arbitrary computable physics in there), then the rules are really very simple. Cells with three living neighbors stay alive; cells with two neighbors stay the same, all other cells die. There isn't anything in there about only innocent people not being horribly tortured for indefinite periods."
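The quoted rules really are that small. For reference, a minimal sketch of one update step under the standard rules (a cell is born with exactly three live neighbours, survives with two or three, and dies otherwise):

```python
from collections import Counter

def life_step(live):
    """Advance Conway's Game of Life one generation.

    `live` is a set of (x, y) coordinates of live cells; returns the
    next generation as a new set.
    """
    # Count how many live neighbours each candidate cell has.
    counts = Counter((x + dx, y + dy)
                     for (x, y) in live
                     for dx in (-1, 0, 1)
                     for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    # Birth on exactly 3; survival on 2 or 3.
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}
```

That the entire physics fits in a dozen lines is exactly what makes it tempting to conclude that nothing in the rules "cares" about fairness.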

While I of course agree with the general sentiment of the post, I don't think this argument works. There is a relevant quote from John McCarthy:

"In the 1950s I thought that the smallest possible (symbol-state product) universal Turing machine would tell something about the nature of computation. Unfortunately, it didn't. Instead as simpler universal machines were discovered, the proofs that they were universal became more elaborate, and so did the encodings of information."

One might add that the existence of minimalistic universal machines also tells us very little about the nature of metaphysics and morality. The problem is that the encodings of information get very elaborate: a sentient being implemented in Life would presumably take terabytes of initial state, and that state would encode complicated rules for processing information, making inferences, and so on. It is those rules you need to look at to determine whether the universe is perfectly unfair or not.

Who knows, perhaps there is a deep fundamental fact that it is not possible to implement sentient beings in a universe whose evaluation rules don't enforce fairness. Or, slightly more plausibly, it could be impossible to implement sentient tyrants who don't feel a "shade of gloom" when considering what they've done.

Neither scenario sounds very plausible, of course. But in order to tell whether such fairness constraints exist, the three rules of Life themselves are completely irrelevant. This is easy to see, since the same higher-level rules could be implemented on top of any other universal machine just as easily. So invoking them gives us no additional information.