Earlier today I was reading this post about the rationalist community's limited success betting on bitcoin and thinking about how the singularity is going to be the ultimate test of the rationalist community's ability to translate their unusual perspectives into wealth and influence.
There needs to be some default community advice here for people who believe we're likely to create AGI in our lifetimes but don't know how to prepare for it. I think it would be an absolute shame if we missed opportunities to invest in the singularity the same way we missed opportunities to invest in Bitcoin (even though this community was clued in to crypto from a very early stage). I don't want to read some retrospective about how only 9% of readers made $1000 or more from the most important event in human history, even though we were clued in to the promise and peril of AGI decades before the rest of the world.
John_Maxwell made a post along the same lines last year, but I'd like to expand on what he wrote.
Why is this important?
In addition to the obvious benefit of everyone getting rich, I think there are several other reasons coming up with a standard set of community advice is important.
Betting on the eventual takeover of the entire world economy by AI is not yet a fashionable bet. But like Bitcoin, betting on AGI will inevitably become very fashionable in the next few decades, as early adopters buy in first and it then becomes a standard part of the financial advice given out by investment professionals.
In these early days, I think there is an opportunity for us to set the standard for how this type of investment is done. This should include not just a clear idea of how to invest in AGI's creation (via certain companies, ETFs, AI focused SPACs etc), but also what should NOT be done.
For example, the community advice should probably warn against investing in companies without a strong AI alignment team, since capitalizing such companies increases the likelihood that AI will destroy you and everything you love. We may also want to discourage investment in companies that have no stated plan for dealing with race dynamics that could compromise safety. There are probably other considerations I'm not thinking of. OpenAI's charter seems like a pretty well-thought-out set of guidelines for AGI creation, and this site has a very healthy community of AI safety researchers whose advice on this topic I would very much appreciate.
Whatever the advice is, I think it's important that it subsidizes good behavior without diminishing expected returns too much. If we advise against investing in the organization that looks most likely to create AGI because they don't meet some safety standard, we run the risk of people ignoring the advice.
There is some small chance that if these guidelines are well thought out, they could eventually be adopted by investment companies or even governments. BlackRock, an investment management corporation with $8.67 trillion under management, has begun to divest from fossil fuels in the interest of attracting money from organizations concerned about climate change. If the public comes to see unaligned AI as a threat at some point in the future, guidelines already adopted by other investors or financial institutions would be an easy thing for investment managers to pick up so they can say they are "being proactive" about the risk from AI.
What would this advice look like?
Let's reflect on some of the lessons learned from the crypto craze. Here I will quote from several posts I've read.
A hindsight solution for rationalists to have reduced the setup costs of buying bitcoin would have been either to have had a Rationalist mining pool or arrange to have a few people buy in bulk and serve as points of distribution within the community.
This suggests that if a future opportunity appears to be worth the risk of investment, but has some barrier to entry that is individually costly but collectively trivial, we ought to work first to eliminate that barrier to entry, and then allow the community to evaluate more dispassionately on risk and return alone.
I think lowering the barrier to doing something is a great idea, but it's hard to know exactly what that would look like. Could we create our own ETF? Would it be best to create a list of stocks of companies that are both likely to create AGI and have good incentive structures in place to make proper alignment more likely? Ideally there would be tiers of actions people could take depending on how much effort they wanted to expend. The lowest tier, requiring the least effort, might be "Set up a TD Ameritrade account and buy this ETF"; the most involved might be "Here is a summary of each company widely regarded by members of the AI Alignment Forum to have good alignment plans, along with links to resources for learning about them."
Is there really an opportunity here? Why would we expect to beat the market in this situation?
The answer to this is more complicated, and I'm sure other people have better answers than I do. But I'll give it a shot.
I realize that saying this sounds very unacademic, but the creation of AGI will be the most important moment in the history of life so far. If AGI does not destroy us or torture us or pump our brains full of dopamine for eternity, it will have transformative effects on the economy the likes of which we have never seen. It's plausible that worldwide GDP growth could accelerate by 10x or more. A well-aligned AI is a wish-granting machine whose only limitations are the (as yet incompletely understood) laws of physics.
Think about how nuts this sounds to the average hedge fund manager. They have no point of reference for AGI. It pattern-matches to a happy magical fairy tale or a moral fable from a children's storybook. It doesn't sound real. And I would bet the prospect of ridicule has prevented the few who actually buy the idea from bringing it up with investors. If you listen to interviews with top people from JP Morgan or Goldman Sachs, they use the same language to refer to AI as they use for everything else in their investment portfolios. There's nothing to signal that this is fundamentally different from biotech or clean energy or SaaS products.
With such communication and conceptual barriers, why would we expect assets to be priced properly?
I'd welcome feedback here. Maybe I'm missing something, or maybe I've been listening to the wrong subset of the investment community. But my overwhelming impression is that almost no one on Wall Street or anywhere else truly buys into the vision of AGI as the last invention humans will ever make.
Here's my current strategy and why I find it unsatisfying
Earlier this year I got sick of not betting on my actual beliefs, and put about $10k into Google, Microsoft, and Facebook in proportion to the number of papers each published at NeurIPS and ICML over the last two years, treating publication count as a proxy for the likelihood that each company will create the first AGI. I would have put in more, but I don't have much more.
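For concreteness, the allocation rule above can be sketched in a few lines of Python. The publication counts here are hypothetical placeholders, not the actual figures I used:

```python
# Split a fixed budget across candidate companies in proportion to their
# recent NeurIPS + ICML publication counts, used as a crude proxy for
# the probability each company creates the first AGI.

BUDGET = 10_000  # dollars to allocate

# Hypothetical publication counts over the last two years (placeholders).
pub_counts = {
    "GOOGL": 300,
    "MSFT": 150,
    "FB": 100,
}

total_pubs = sum(pub_counts.values())

# Each company's share of the budget equals its share of publications.
allocation = {
    ticker: BUDGET * count / total_pubs
    for ticker, count in pub_counts.items()
}

for ticker, dollars in allocation.items():
    print(f"{ticker}: ${dollars:,.2f}")
```

With these placeholder counts, Google gets 300/550 of the budget, Microsoft 150/550, and Facebook 100/550; swapping in real publication counts (or a different proxy entirely) only changes the `pub_counts` dictionary.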
Though I think this is better than nothing, I can't help but think there must be a better, more targeted way to bet on AGI specifically. For example, I don't really care that much about Google's search business, but I am forced to buy it when I buy Google stock.
This strategy also neglects all small companies. I think there is a low enough level of hardware overhang right now that it is overwhelmingly likely AGI will be created in one or more big research labs. But perhaps the final critical piece of the puzzle will come from some startup that gets acquired by Microsoft AI labs and owning a piece of that startup will result in dramatically higher returns than buying the parent company directly.
Unfortunately, accredited investor laws literally make it illegal to invest in early-stage startups unless you're already rich, so all the rapid growth from early-stage startups is out of reach for people who aren't. (By the way, these laws are one of the reasons private equity has averaged about double the returns of the S&P 500 over the last 30 years: rich people have a monopoly on startup equity.) SPACs are kind of a backdoor into early-stage startups without a lot of money, but companies have to agree to merge with a SPAC, so your options are still somewhat limited. I do think the SPAC strategy is worth looking into, though.
You could always buy an AI ETF. I'll be honest and say I haven't really looked into them much, but I would appreciate feedback from anyone who has.
Anyways, those are my thoughts on this subject. Let me know what you think.