
Was the stock market good at predicting technological changes in the past?

Why should we expect the market to be better at predicting H than, say, a random LW user?
It's not like people who are currently better-than-the-market at predicting H can use that ability to immediately increase the fraction of the market that they own/control.

The obvious answer is to not take your question too seriously and simply point out that the market is, in general, better than any random person, LW user or not, at predicting the future value of the assets traded on it, unless you want to disagree with the efficient market hypothesis.

But looking closer, we should expect that someone who thinks they know better would view AI company stocks as undervalued and buy them up. In fact, such people may be doing this already and simply not telling us, because to do so would be to give up some of their alpha! This does suggest we may not see an impact in the market, though, not because the market isn't pricing in the information, but because there aren't enough people with both the information and the capital to move the market to fully reflect it, so the asset remains undervalued in terms of H. This seems to be the point you are getting at.

My guess, though, is that H is not so far-fetched that investors with sufficient capital to move the market are unaware of it; H is likely already priced in, which counts as evidence against H. People are generally aware that AI is coming and will produce large amounts of value, and some people think that value will come soon. If you disagree and believe H, then this suggests a great investment opportunity!
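The pricing-in argument above can be made concrete with a toy expected-value calculation. All numbers here are hypothetical, chosen only to illustrate how a higher personal probability of H than the market's implied probability translates into perceived undervaluation:

```python
# Toy illustration (all numbers hypothetical): how believing H
# ("transformative AI arrives soon") more strongly than the market does
# implies an AI company's stock looks undervalued to the believer.

def expected_value(p_transform, value_if_transform, value_otherwise):
    """Probability-weighted value of the stock across two scenarios."""
    return p_transform * value_if_transform + (1 - p_transform) * value_otherwise

# Suppose the market's price implies a 10% chance of the transformative scenario.
market_price = expected_value(0.10, 1000.0, 100.0)    # 190.0

# A believer in H assigns 50% to the same scenario.
believer_value = expected_value(0.50, 1000.0, 100.0)  # 550.0

# The gap is the "great investment opportunity" described above.
edge = believer_value - market_price                  # 360.0
```

If enough capital shared the believer's probability, buying pressure would close this gap, which is exactly why an unmoved price is (weak) evidence against H.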

Unlike being better-than-the-market at predicting the value of some asset so far, being better-than-the-market at predicting H hasn't helped people increase their control over the market (at least, not until they're no longer better than the market at it).

Unless we assume that [being better at predicting values of assets so far] correlates with [being better at predicting H] (compared with random LW users), I don't see why we should expect the market to be better than random LW users at predicting H.

You may be familiar with the term "Technological Singularity" as used to describe what happens in the wake of the development of superintelligent AGI; the term is not mere hyperbole but refers to the belief that what follows such a development would be incredibly and unpredictably transformative, subject to new phenomena and patterns of which we may not yet be able to conceive.

I don't believe it would be smart to invest with such a scenario in mind; we have little reason to believe that pre-Singularity wealth would matter post-Singularity in a way that makes it wise to include such a term in one's expected value and decision-making. It would be not entirely unlike buying stock based on which companies would most benefit from the announcement of an incoming Earth-shattering asteroid. The development of superintelligent AGI is an existential threat to just about every institution, including the stock market and our current conception of the economy in general. A rational, entirely selfish actor, or an aggregate thereof, does not make plans for what happens after its death.
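The expected-value point above can be sketched numerically. This is a hypothetical toy model, not an investment calculation: the extra parameter makes explicit the assumption, argued for above, that post-Singularity payoffs carry effectively zero weight for a selfish pre-Singularity investor:

```python
# Toy sketch (hypothetical numbers): why a selfish investor would assign
# ~zero weight to post-Singularity payoffs when valuing an asset.

def decision_value(p_singularity, payoff_post, payoff_normal,
                   post_wealth_weight=0.0):
    """Expected value where post-Singularity wealth is discounted by
    post_wealth_weight -- argued above to be effectively zero."""
    return (p_singularity * payoff_post * post_wealth_weight
            + (1 - p_singularity) * payoff_normal)

# Even a 50% chance of a huge post-Singularity payoff contributes nothing
# if wealth doesn't carry over: only the "normal world" branch counts.
value = decision_value(0.5, 10_000.0, 100.0)  # 50.0
```

On this view, "singularity plays" should be priced purely on their value in worlds where the market still exists, which is the asteroid-stock analogy in numerical form.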

However, I must admit that I have no data on the subject, and while I would not guess that there is much relevant data available, I imagine there is some: did the U.S. stock market account for which companies might be most successful in the case of a Soviet conquest of the U.S.? Is the potential profitability of a company in a world transformed by a global Communist revolution accounted for in its current stock price? I do not know, but I would be very surprised to learn that the stock market prices in scenarios in which it, and the institutions on which it depends, are unlikely to continue to exist in recognizable forms.
