I feel like I've long underappreciated the importance of introspectability in information & prediction systems.

Say you have a system that produces interesting probabilities for various statements. The value an agent gets from them does not correlate directly with the accuracy of these probabilities, but rather with the expected utility gain the agent gets from using these probabilities in corresponding Bayesian-approximating updates. Perhaps more directly, it is something related to the difference between one's prior and posterior after updating

...
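One rough way to make this concrete (a sketch of my own framing, not part of the original point, with all numbers made up) is the standard value-of-information calculation: the value of a forecast to an agent is the expected utility of acting on the posterior, minus the expected utility of the action they would have taken under the prior. A minimal Python sketch for a binary event and a single take-it-or-leave-it bet:

```python
# Hypothetical sketch: value of a forecast as the expected utility gain from
# acting on the posterior instead of the prior. All numbers are made up.

def expected_utility(p_event: float, take_bet: bool,
                     payoff_if_event: float, payoff_if_not: float) -> float:
    """Expected utility of taking or skipping the bet under belief p_event."""
    if not take_bet:
        return 0.0
    return p_event * payoff_if_event + (1 - p_event) * payoff_if_not

prior = 0.5        # the agent's belief before seeing the forecast
posterior = 0.8    # the agent's belief after updating on the forecast
payoff_win, payoff_lose = 10.0, -15.0

# Decide whether to take the bet under each belief.
take_without_forecast = expected_utility(prior, True, payoff_win, payoff_lose) > 0
take_with_forecast = expected_utility(posterior, True, payoff_win, payoff_lose) > 0

# Score both decisions under the posterior, treating it as the better-calibrated
# belief, so the comparison isolates the value of the update itself.
value_without = expected_utility(posterior, take_without_forecast, payoff_win, payoff_lose)
value_with = expected_utility(posterior, take_with_forecast, payoff_win, payoff_lose)

print("expected utility gain from the forecast:", value_with - value_without)
# With these numbers the prior says "don't bet" (EU = -2.5) while the
# posterior says "bet" (EU = 5.0), so the forecast is worth 5.0 here.
```

Note that if the posterior were, say, 0.55, the decision wouldn't change and the computed gain would be zero even though the probability is more accurate, which is the sense in which the value tracks the prior-to-posterior update rather than accuracy alone.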
jacobjacob: In some sense, markets have a particular built-in interpretability: for any trade, someone made that trade, and so there is at least one person who can explain it. And any larger market move is just a combination of such smaller trades. This is different from things like the huge recommender algorithms running YouTube, where it is not the case that for each recommendation, there is someone who understands that recommendation. However, the above argument fails in more nuanced cases:

* Just because for every trade there's someone who can explain it, doesn't mean that there is a particular single person who can explain all trades
* Some trades might be made by black-box algorithms
* There can be weird "beauty contest" dynamics where two people do something only because the other person did it

Good point, though I think the "more nuanced cases" are actually very common.

The 2010 flash crash seems relevant; it appears to have been caused by chaotic feedback loops with algorithmic components that, taken as a whole, are very difficult to understand. While that example was particularly algorithm-induced, other cases could come from very complex combinations of trades between many players, and when one agent attempts to debug what happened, most of the traders won't even be available or willing to explain their parts.

The 2007-2008 crisis may have been s

...
