

I may be going too far afield here, but did anyone else notice the part where the author says that AI can't recognize uncertainty, so it ignores it? It brought to mind the recent self-driving crashes where an unexpected event causes a crash: a human driver says "whoa, uncertainty, I'm slowing down while I try to figure out what this other driver is up to," while the AI at that point says "I don't know what this is, so it doesn't exist." That fits with some recent postings stating that algos only know what they're told, which is a big hurdle for the aforementioned masters of the tech universe. bestazy

Ah yes. My main thought after the 2008 crisis was surely, “I can’t wait until next time, when super-intelligent machines have overleveraged the entire economy faster and more efficiently than those Goldman and AIG guys ever could.” JonsG

While not addressing the question of a role for AI, I often find myself thinking we should get away from the frequent trading of financial assets and make it a bit more like the trading of mutual funds. Do all the intra-day trades really give more information, or do they just add noise and the opportunity for insiders to make money off retail (and even some institutional) investors?

It seems like by designing the market to work a bit more like the one used in Econ 101 theory -- the Walrasian auctioneer -- we could have more stable markets that do better at pricing capital assets than today's. In other words, take all the order flow, find the price at which the market clears, and then execute all trades at that price.
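The single-price call auction described above can be sketched in a few lines. This is a toy illustration with made-up orders, not a description of any real exchange's matching rules: each order is a (limit price, quantity) pair, and the clearing price is chosen to maximize executed volume.

```python
def clearing_price(buys, sells):
    """Single-price call auction: pick the price that maximizes traded volume.

    buys, sells: lists of (limit_price, quantity) tuples.
    Returns (price, executed_volume); (None, 0) if no trade is possible.
    """
    candidates = sorted({p for p, _ in buys} | {p for p, _ in sells})
    best = (None, 0)
    for p in candidates:
        demand = sum(q for limit, q in buys if limit >= p)   # buyers willing at p
        supply = sum(q for limit, q in sells if limit <= p)  # sellers willing at p
        executed = min(demand, supply)
        if executed > best[1]:
            best = (p, executed)
    return best

# Hypothetical order book, all trades clear at one price.
buys = [(10.2, 100), (10.1, 200), (10.0, 300)]
sells = [(9.9, 150), (10.0, 150), (10.1, 250)]
print(clearing_price(buys, sells))  # (10.0, 300)
```

In this sketch all 300 matched shares change hands at $10.00, regardless of each order's individual limit, which is the sense in which the auctioneer "clears" the market at a single price.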

I suspect you'd still see some gaming of the system with fake orders (a bit like what the algos have been accused of in today's markets), but all systems get gamed.

This would have the consequence that if you see XXXX trading at $Y and phone up your broker to sell your holding in XXXX, it could very well end up selling at $Y/2.

That's a thing that can happen already, but the delay between saying "sell" and actually selling is typically measured in seconds rather than hours, which makes big divergences like that less likely.

Of course you can avoid this by not saying "please sell 100 shares of XXXX" but "please sell 100 shares of XXXX unless the price drops below $Z, in which case don't". But this is more complexity than most retail investors want to handle :-).
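The "sell unless the price drops below $Z" instruction is just a limit order, as opposed to a market order that fills at whatever price prevails. A minimal sketch of the difference (a hypothetical toy function, not any broker's API):

```python
def execute_sell(order_qty, limit, market_price):
    """Fill a sell order at the prevailing price.

    limit=None means a market order (always fills);
    a limit price means: don't sell if the price has dropped below it.
    Returns the quantity actually sold.
    """
    if limit is not None and market_price < limit:
        return 0  # limit not met, order goes unfilled
    return order_qty

# Market order fills even after a gap down; the limit order does not.
print(execute_sell(100, None, 5.0))  # 100 shares sold at $5
print(execute_sell(100, 9.0, 5.0))   # 0 -- protected from selling at $Y/2
```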

Wasn’t the crash of 1987 at least partially attributed to “program trading” that was telling everyone to sell at once?

Introduction:

Artificial intelligence (AI) is useful for optimally controlling an existing system, one with clearly understood risks. It excels at pattern matching and control mechanisms. Given enough observations and a strong signal, it can identify deep dynamic structures much more robustly than any human can and is far superior in areas that require the statistical evaluation of large quantities of data. It can do so without human intervention.

We can leave an AI machine in the day-to-day charge of such a system, automatically self-correcting and learning from mistakes and meeting the objectives of its human masters.

This means that risk management and micro-prudential supervision are well suited for AI. The underlying technical issues are clearly defined, as are both the high- and low-level objectives.

However, the very same qualities that make AI so useful for the micro-prudential authorities are also why it could destabilise the financial system and increase systemic risk, as discussed in Danielsson et al. (2017).

Conclusion:

Artificial intelligence is useful in preventing historical failures from repeating and will increasingly take over financial supervision and risk management functions. We get more coherent rules and automatic compliance, all with much lower costs than current arrangements. The main obstacle is political and social, not technological.

From the point of view of financial stability, the opposite conclusion holds.

We may miss out on the most dangerous type of risk-taking. Even worse, AI can make it easier to game the system. There may be no solution to this, whatever the future trajectory of technology: the computational problem facing an AI engine will always be much harder than that facing those who seek to undermine it, not least because of endogenous complexity.

Meanwhile, the very formality and efficiency of the risk management/supervisory machine also increases homogeneity in belief and response, further amplifying pro-cyclicality and systemic risk.

The end result of the use of AI for managing financial risk and supervision is likely to be lower volatility but fatter tails; that is, lower day-to-day risk but more systemic risk.
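The "lower volatility but fatter tails" combination can be illustrated with a toy Gaussian mixture: mostly calm days with slightly reduced volatility, plus rare stress days, weighted so that total variance matches a plain unit-variance normal. The weights and sigmas below are illustrative assumptions, not calibrated to market data.

```python
import math

def normal_tail(threshold, sigma):
    """P(|X| > threshold) for X ~ N(0, sigma^2), via the complementary error function."""
    return math.erfc(threshold / (sigma * math.sqrt(2)))

# Baseline: plain normal returns with unit variance.
base_tail = normal_tail(4.0, 1.0)

# Mixture: 99% calm days plus 1% stress days with sigma = 3,
# with the calm-day sigma chosen so total variance is still exactly 1.
p_stress, s_stress = 0.01, 3.0
s_calm = math.sqrt((1.0 - p_stress * s_stress**2) / (1.0 - p_stress))
mix_var = (1 - p_stress) * s_calm**2 + p_stress * s_stress**2
mix_tail = ((1 - p_stress) * normal_tail(4.0, s_calm)
            + p_stress * normal_tail(4.0, s_stress))

print(f"day-to-day sigma: {s_calm:.3f} (below 1: calmer typical days)")
print(f"P(|X| > 4): normal {base_tail:.2e} vs mixture {mix_tail:.2e}")
```

With these numbers, the mixture's typical daily volatility is lower than the plain normal's, yet its probability of a 4-sigma move is more than an order of magnitude higher: the same variance has been pushed out of the middle of the distribution and into the tails.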