We need a standard set of community advice for how to financially prepare for AGI

by GeneSmith · 5 min read · 7th Jun 2021 · 41 comments


Earlier today I was reading this post about the rationalist community's limited success betting on Bitcoin, and thinking about how the singularity is going to be the ultimate test of the rationalist community's ability to translate its unusual perspectives into wealth and influence.

There needs to be some default community advice here for people who believe that we're likely to create AGI in our lifetimes but don't know how to prepare for it. I think it would be an absolute shame if we missed opportunities to invest in the singularity the same way we missed opportunities to invest in Bitcoin (even though this community was clued in to crypto from a very early stage). I don't want to read some retrospective about how only 9% of readers made $1000 or more from the most important event in human history, even though we were clued in to the promise and peril of AGI decades before the rest of the world.

John_Maxwell made a post about this last year along the same lines, but I'd like to expand on what he wrote.

Why is this important?

In addition to the obvious benefit of everyone getting rich, I think there are several other reasons coming up with a standard set of community advice is important.

Betting on the eventual takeover of the entire world economy by AI is not yet a fashionable bet. But like Bitcoin, betting on AGI will inevitably become a very fashionable bet over the next few decades, as early adopters buy in first and it then becomes a standard part of the financial advice given out by investment professionals.

In these early days, I think there is an opportunity for us to set the standard for how this type of investment is done. This should include not just a clear idea of how to invest in AGI's creation (via certain companies, ETFs, AI-focused SPACs, etc.), but also what should NOT be done.

For example, the community advice should probably advise against investing in companies without a strong AI alignment team, since capitalizing such companies will increase the likelihood that AI will destroy you and everything you love. We may also want to discourage investment in companies that don't have a clause covering how they plan to deal with racing dynamics that could compromise safety. There are probably other considerations I'm not thinking of. OpenAI's charter seems like a pretty well thought-out set of guidelines for AGI creation. This site has a very healthy community of AI safety researchers whose advice on this topic I would very much appreciate.

Whatever the advice is, I think it's important that it subsidizes good behavior without diminishing expected returns too much. If we advise against investing in the organization that looks most likely to create AGI because they don't meet some safety standard, we run the risk of people ignoring the advice.

There is some small chance that if these guidelines are well thought out, they could eventually be adopted by investment companies or even governments. BlackRock, an investment management corporation with $8.67 trillion under management, has begun to divest from fossil fuels in the interest of attracting money from organizations concerned about climate change. If the public comes to see unaligned AI as a threat at some point in the future, existing guidelines already adopted by other investors or financial institutions could become an easy thing for investment managers to adopt so they can say they are "being proactive" about the risk from AI.

What would this advice look like?

Let's reflect on some of the lessons learned from the crypto craze. Here I will quote from several posts I've read.

A hindsight solution for rationalists to have reduced the setup costs of buying bitcoin would have been either to have had a Rationalist mining pool or arrange to have a few people buy in bulk and serve as points of distribution within the community.

This suggests that if a future opportunity appears to be worth the risk of investment, but has some barrier to entry that is individually costly but collectively trivial, we ought to work first to eliminate that barrier to entry, and then allow the community to evaluate more dispassionately on risk and return alone.

  • clarkey

I think lowering the barrier to doing something is a great idea, but it's hard to know exactly what that would look like. Could we create our own ETF? Would it be best to create a list of stocks of companies that are both likely to create AGI and have good incentive structures set up to make proper alignment more likely? Ideally there would be tiers of actions people could take depending on how much effort they wanted to expend, where the lowest-effort tier would be something like "Set up a TD Ameritrade account and buy this ETF," and the most involved would be "here is a summary of each company widely regarded by members of the AI Alignment Forum to have good alignment plans, and here's a link to some resources to learn about them."

Is there really an opportunity here? Why would we expect to beat the market in this situation?

The answer to this is more complicated, and I'm sure other people have better answers than I do. But I'll give it a shot.

I realize that saying this sounds very unacademic, but the creation of AGI will be the most important moment in the history of life so far. If AGI does not destroy us or torture us or pump our brains full of dopamine for eternity, it will have transformative effects on the economy the likes of which we have never seen. It's plausible that worldwide GDP growth could accelerate by 10x or more. A well-aligned AI is a wish-granting machine whose only limitations are the laws of physics (which we don't yet fully understand).

Think about how nuts this sounds to the average hedge fund manager. They have no point of reference for AGI. It pattern-matches to a happy magical fairy tale or a moral fable from a children's storybook. It doesn't sound real. And I would bet the prospect of ridicule has prevented the few who actually buy the idea from bringing it up with investors. If you listen to interviews with top people from JP Morgan or Goldman Sachs, they use the same language to refer to AI as they use to refer to everything else in their investment portfolios. There's nothing to signal that this is fundamentally different from biotech or clean energy or SaaS products.

With such communication and conceptual barriers, why would we expect assets to be priced properly?

I'd welcome feedback here. Maybe I'm missing something, or maybe I've been listening to the wrong subset of the investment community. But my overwhelming impression is that almost no one on Wall Street or anywhere else truly buys into the vision of AGI as the last invention humans will ever make.

Here's my current strategy and why I find it unsatisfying

Earlier this year I got sick of not betting on my actual beliefs, and put about $10k into Google, Microsoft, and Facebook in proportion to the number of publications each had at NeurIPS and ICML over the last two years, treating publication count as a proxy for the likelihood that each company would create the first AGI. I would have put in more, but I don't have much more.
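
For concreteness, here's a minimal sketch (in Python) of that allocation rule. The publication counts below are made-up placeholders, not the actual NeurIPS/ICML numbers I used:

```python
# Split a fixed budget across companies in proportion to their
# NeurIPS + ICML publication counts (counts here are illustrative only).
budget = 10_000  # USD

pub_counts = {"GOOG": 180, "MSFT": 140, "FB": 110}  # hypothetical counts

total = sum(pub_counts.values())
allocation = {ticker: budget * count / total for ticker, count in pub_counts.items()}

for ticker, dollars in allocation.items():
    print(f"{ticker}: ${dollars:,.0f}")
```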

Though I think this is better than nothing, I can't help but think there must be a better, more targeted way to bet on AGI specifically. For example, I don't really care that much about Google's search business, but I am forced to buy it when I buy Google stock.

This strategy also neglects all small companies. I think there is a low enough level of hardware overhang right now that it is overwhelmingly likely AGI will be created in one or more big research labs. But perhaps the final critical piece of the puzzle will come from some startup that gets acquired by Microsoft AI labs and owning a piece of that startup will result in dramatically higher returns than buying the parent company directly.

Unfortunately, accredited investor laws literally make it illegal to invest in early-stage startups unless you're already rich, so all the rapid growth from early-stage startups is out of reach for everyone else. (By the way, these laws are one of the reasons private equity has averaged about double the returns of the S&P 500 over the last 30 years. Rich people have a monopoly on startup equity.) SPACs are kind of a backdoor into early-stage companies for people without a lot of money, but companies have to agree to merge with a SPAC, so your options are still somewhat limited. Still, I think the SPAC strategy is worth looking into.

You could always buy an AI ETF. I'll be honest and say I haven't really looked into them much, but I would appreciate feedback from anyone who has.

Anyways, those are my thoughts on this subject. Let me know what you think.


Comments

One approach that feels a bit more direct is investing in semiconductor stocks. If we expect AGI to be a big deal and massively economically relevant, it seems likely that this will involve vast amounts of compute, and thus need a lot of computer chips. I believe ASML (Netherlands-based, which supplies the lithography equipment chipmakers depend on) and TSMC (Taiwan-based) are two of the largest players in semiconductor manufacturing and are publicly traded, though I'm unsure which countries let you easily invest in them.

Problems with this:

  • A bunch of their current business comes from crypto-mining, so this also has some crypto exposure. The stocks have done well over the last few years, and I believe this is mostly from the crypto boom rather than the AI boom
  • TSMC is based in Taiwan, and thus is exposed to Taiwan-China problems
  • This assumes AGI will require a lot of compute (which I personally believe, but YMMV)
  • It's unclear how much of the value of AGI will be captured by semiconductor manufacturers

A bunch of their current business comes from crypto-mining, so this also has some crypto exposure. The stocks have done well over the last few years, and I believe this is mostly from the crypto boom rather than the AI boom

This is particularly problematic given the possible switch away from proof of work to proof of stake, which might happen in 1-2 years and tank crypto-mining completely.

which might happen in 1-2 years and tank crypto-mining completely.

Good point. But that would be a much better time to buy in for long-term value. 

One approach that feels a bit more direct is investing in semiconductor stocks.

I agree with this and the above points. 

One way to potentially overcome the issues with TSMC might be to supplement the investment by buying into commodities like silicon and coltan. This is still not guaranteed to capture most of the value, but might be a method of diversification. But there are many ethical considerations (particularly with coltan). 

We have so much uncertainty about pathways that I'm skeptical there is really any benefit here. If we knew enough to write such a guide, that would be great, but for reasons having nothing to do with our financial preparedness.

This seems like a very surprising claim to me. You can make money on stocks by knowing things with better-than-chance accuracy. Do you really think that fails for all stocks?

Your question ignores timeframes. I'm happy to argue that P(stock rises in the next 5 years | AGI in 20 years) ≈ P(stock rises in the next 5 years) for all stocks.

I'm a professional equity investor, and trust me, the market isn't that forward-looking. Unless you believe in AGI within the next 10 years, I suggest ignoring it when picking investments. For the intermediate timeframe, until the market begins to take the concept seriously, the value of your investments will be determined by all the other factors you're ignoring in favour of focusing on AGI. So unless you want your investment results to be meh for years to decades, don't go for some all-out bet on AI.

The question is not whether you can build a portfolio where the expected gain conditional on AGI is positive; it's whether you can get enough of an advantage that it outweighs the costs of doing so and in expectation outperforms the obvious alternative strategy of index funds. If you're purely risk-neutral, this is somewhat easier. Otherwise, the portfolio benefits of reducing the probability of losses are hard to beat.

You may also have cases where P(stock rises | AGI by date X) >> P(stock rises), but where P(stock falls | ~AGI by date X) is high enough that the bet isn't worthwhile.
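
As a toy illustration of that second case (every number below is invented, not an estimate):

```python
# Toy expected-value comparison for a stock that does well conditional on
# AGI by date X but poorly otherwise. All numbers here are invented.
p_agi = 0.2            # P(AGI by date X)
ret_if_agi = 3.0       # growth multiple if AGI arrives by X
ret_if_no_agi = 0.6    # growth multiple if it doesn't (a 40% loss)
ret_index = 1.5        # growth multiple from just holding an index fund

expected_bet = p_agi * ret_if_agi + (1 - p_agi) * ret_if_no_agi
print(f"AGI bet:    {expected_bet:.2f}x expected")   # 1.08x
print(f"Index fund: {ret_index:.2f}x expected")      # 1.50x
# The bet pays off well conditional on AGI, but underperforms the index
# in expectation because of how badly it does in the ~AGI branch.
```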

I would add that money is probably much less valuable after AGI than before, indeed practically worthless. But it's still potentially a good idea to financially prepare for AGI, because plausibly the money would arrive before AGI does and thereby allow us to e.g. make large donations to last-ditch AI safety efforts.

If you think of it less like "possibly having a lot of money post-AGI" and more like "possibly owning a share of whatever the AGIs produce post-AGI", then I can imagine scenarios where that's very good and important. It wouldn't matter in the worst scenarios or best scenarios, but it might matter in some in-between scenarios, I guess. Hard to say though ...

This is a good point, but even taking it into account I think my overall claim still stands. The scenarios where it's very important to own a larger share of the AGI-produced pie [ETA: via the mechanism of pre-existing stock ownership] seem pretty unlikely to me compared to, e.g., scenarios where we all die or where all humans are given equal consideration regardless of how much stock they own. And (separate point) our money will probably be better spent prior to AGI trying to improve the probability of AI going well than saved up to do stuff with the spoils afterwards.

Most rationalists are heavily invested in AGI in non-monetary ways — career paths, free time, hopes for longevity/coordination breakthroughs. As other commenters have pointed out, if humanity achieves aligned AGI, financial returns will plausibly be far less important. Given that, maybe the best investments are bets against AGI, as a hedge for humanity not achieving it.

There are 3 futures: If we achieve aligned AGI, we win the game and nothing else matters*. If we achieve misaligned AGI, we die and nothing else matters. If we fail to achieve AGI at all, then we've wasted a lot of our time, careers, and hopes. In that case, we want investments to fall back on. 

In that 3rd future, what commodities and equities are most successful? Can we buy those now?

*subject to accepting the singularity-like premise.

We should have a similar conversation [Question post?] for anticipating the consequences of transformative biotech.

Which biotech in particular?

As far as genetic engineering goes, I was thinking about writing up a post on that myself, to the effect of "why you should [or should not] consider having your kids via IVF."

But I haven't done much research on transformative biohazards like engineered pandemics and am wary of writing such a post.

I was listening to Buterin on the Tim Ferriss podcast this morning, and he made an offhand comment that biotech is at a similar point to where computers were in the 50s; that left it salient when I read this. But in conversation and from general reading, I have a sense that there's a good chance a lot of progress is about to be unlocked in the field, due to machine learning, much cheaper / higher-throughput genetic sequencing and DNA/RNA/protein synthesis, and much better DNA editing techniques thanks to CRISPR.

IBM managed to extract some value from the computer revolution, but the real gains were made by companies that were founded later.

Big Pharma companies at the moment seem more dysfunctional than IBM was back then (there's a reason why DeepMind does protein folding prediction and Pfizer doesn't), so I would expect a good portion of the profits to be made by new biotech companies.

As far as biotech and Buterin, VitaDAO is currently in formation. Longevity biotech DAO on the blockchain hits a lot of hip keywords. 

I would love to hear some longevity-related biotech investment advice from rationalists; I (and presumably many others here) expect longevity to be the second biggest deal in big-picture futurism.

The only investment idea I can come up with myself are for-profit spin-off companies from SENS Research Foundation, but that's just the obvious option to someone without expertise in the field and trusting the most vocal experts.

Although some growth potential has already been lost due to the pandemic bringing a lot of attention towards this field, I think we're still early enough to capture some of the returns.

You mean consequences not limited to possible financial gain?

My impression from skimming a few AI ETFs is that they are more or less just generic technology ETFs with different branding and a few random stocks thrown in. So they're not catastrophically worse than the baseline "Google, Microsoft and Facebook" strategy you outlined, but I don't think they're better in any real way either.

Is AGI even something that should be invested in on the free market? The nature of most financial investments is that individuals expect a return on their investment. I may be wrong, but I can't really envision a friendly AGI being created with the purpose of creating financial value for its investors. I mean, sure, technically if friendly AGI is created the investors will almost certainly benefit regardless because the world will become a better place, but this could only be considered an investment in a rather loose sense. Investing in AGI won't provide any significant returns until AGI is created, and at that point it is likely that stock ownership will not matter. 

rationalist community's limited success betting on bitcoin

Wait, what?  The sum of the net worth of those who consider themselves members of the rationalist community is MUCH greater due to crypto than it was before.  What definition of "success" are you using which so devalues that outcome?

There needs to be some default community advice here for people who believe that we're likely to create AGI in our lifetimes but don't know how to prepare for it.

Do you really want default advice?  I'd rather have correct advice, and I'd rather still have correct personal behavior, regardless of advice.  "Correct", in this case, means "best possible experienced outcome", not "sounds good" or "best prediction at this point but still wrong".

Earlier this year I got sick of not betting on my actual beliefs, and put about $10k into Google, Microsoft and Facebook in proportion to the number of publications they each made in NeurIPS and ICML over the last two years, treating publication count as a proxy for the likelihood that each company would create the first AGI.

I think this summarizes your confusion pretty well.  Stock picks aren't bets about any particular outcome.  You're not making conditional predictions about what will actually happen.  You're claiming to predict who creates the first AGI, but not trying to figure out what happens when it does.  Why would the stock go up, as opposed to the employees in control just absconding with (or being absorbed into) the AGI and the stock becoming irrelevant?  Or someone else learning from the success and turning it into an actual financial boon.  Or any of a billion other sequences that would make it a dumb idea to pick a stock based on number of papers published in a narrow topic that may or may not correlate with AGI creation.

IMO, actual best advice given what we know now is to invest in fairly broad index funds, and look for opportunities where your personal expertise can be leveraged to identify opportunities (financial and otherwise) that are much better than average.

Wait, what? The sum of the net worth of those who consider themselves members of the rationalist community is MUCH greater due to crypto than it was before. What definition of "success" are you using which so devalues that outcome?

I'm mostly referring to the narrative from this post. There have been some successes, but those have mostly been due to a very small number of huge winners. And in the case of the biggest winner of all, Vitalik Buterin, he actually ended up joining the rationalist community AFTER he started Ethereum.

Do you really want default advice? I'd rather have correct advice, and I'd rather still have correct personal behavior, regardless of advice. "Correct", in this case, means "best possible experienced outcome", not "sounds good" or "best prediction at this point but still wrong".

I probably wasn't as clear as I could have been in the original post. What I mean by "default advice" is a set of actions people can take if they believe there is a decent chance AGI will be created in their lifetimes and want to prepare for it but are not willing to spend all the time to develop a detailed personal plan.

For example, if you believe the efficient market hypothesis, you can act on that belief by buying low-cost index funds. I'm thinking it would be useful to have a similar easy option for people who buy that we will likely see AGI in our lifetimes.

Why would the stock go up, as opposed to the employees in control just absconding with (or being absorbed into) the AGI and the stock becoming irrelevant? Or someone else learning from the success and turning it into an actual financial boon. Or any of a billion other sequences that would make it a dumb idea to pick a stock based on number of papers published in a narrow topic that may or may not correlate with AGI creation.

True, and this is why I said I am not particularly satisfied with my current strategy. I still think in the scenario where AGI has been created or is close to being created, Google's stock price is likely to go up more than an index fund of all stocks on the market.

set of actions people can take if they believe there is a decent chance AGI will be created in their lifetimes and want to prepare for it but are not willing to spend all the time to develop a detailed personal plan.

I don't think that's how markets and finance work. The actions you can take if you're not willing or able to get into the details of a personal plan are pretty much "follow the crowd". Perhaps you can pick a crowd to follow, in the form of slightly-less-general indexes.

For example, if you believe the efficient market hypothesis, you can act on that belief by buying low-cost index funds.

No, no, no. Regardless of what you believe, if the EMH is true, you can't do better than buying low-cost index funds. It's only when you have TRUER beliefs than the market aggregate, in ways that let you predict the market shift that will happen when your belief becomes common, that you can invest better than average. Without quite a bit of analysis and research, I don't think you can predict whether Google shareholders benefit or lose from AGI development any better than the market can.

I think Vicarious AI is doing more AGI-relevant work than anyone. I pore over all their papers. They're private so this doesn't directly answer your question. But what bugs me is: Their investors include Good Ventures & Elon Musk ... So how do they get away with (AFAICT) doing no safety work whatsoever ...?

I think Vicarious AI is doing more AGI-relevant work than anyone

Interesting, can you say more about this/point me to any good resources on their work? I never hear about Vicarious in AI discussions

I know from some interviews I've watched that Musk's main reason for investing in AI startups is to have inside info about their progress so he can monitor what's going on. Perhaps he's just not really paying that much attention? He always has like 15 balls in the air, so perhaps he just doesn't realize how bad Vicarious's safety work is.

Come to think of it, if you or anyone you know have contact with Musk, this might be worth mentioning to him. He clearly cares about AI going well and has been willing to invest resources in increasing these odds in the past via OpenAI and then Neuralink. So perhaps he just doesn't know that Vicarious AI is being reckless when it comes to safety.

He clearly cares about AI going well and has been willing to invest resources in increasing these odds in the past via OpenAI and then Neuralink.

Both of these examples betray an extremely naive understanding of AI risk.

  • OpenAI was intended to address AI-xrisk by making the superintelligence open source. This is, IMO, not a credible way to avoid someone - probably someone in a hurry - getting a decisive strategic advantage.
  • Neuralink... I just don't see any scenario where humans have much to contribute to superintelligence, or where "merging" is even a coherent idea, etc. I'm also unenthusiastic on technical grounds.
  • SpaceX. Moving to another planet does not save you from misaligned superintelligence. (being told this is, I hear, what led Musk to his involvement in OpenAI)

So I'd attribute it to some combination of too many competing priorities, and simply misunderstanding the problem.

Moving to another planet does not save you from misaligned superintelligence.

Not only that, there are hardly any other existential risks to be avoided by Mars colonization, either.

Neuralink... I just don't see any scenario where humans have much to contribute to superintelligence, or where "merging" is even a coherent idea

The only way I can see Musk's position making sense is if it's actually a 4D chess move to crack the brain's algorithm and use it to beat everyone else to AGI, rather than the reasoning he usually gives in public for why Neuralink is relevant to AGI. Needless to say, I am very skeptical of this hypothesis.

Not only that, there is hardly any other existential risks to be avoided by Mars colonization, either.

Let's use Toby Ord's categorisation - and ignore natural risks, since the background rate is low. Assuming a self-sustaining civilisation on Mars which could eventually resettle Earth after a disaster:

  • nuclear war - avoids accidental/fast escalation; unlikely to help in deliberate war
  • extreme climate change or environmental damage - avoids this risk entirely
  • engineered pandemics - strong mitigation
  • unaligned artificial intelligence - lol nope.
  • dystopian scenarios - unlikely to help

So Mars colonisation handles about half of these risks, and maybe 1/4 of the total magnitude of risks. It's a very expensive mitigation, but IMO still clearly worth doing even solely on X-risk grounds.

I strongly believe that nuclear war and climate change are not existential risks, by a large margin.

For engineered pandemics, I don't see why Mars would be more helpful than any other isolated pocket on Earth - do you expect there to be less exchange of people and goods between Earth and Mars than with, say, North Sentinel Island?

Curiously enough, the last scenario you pointed out - dystopias - might just have become my new top candidate for an x-risk that Mars colonization could mitigate. Need to think more about it, though.

It does take substantially longer to get to Mars than to get to any isolated pockets on Earth. So unless the pandemic's incubation period is longer than the journey to Mars, it's likely that Martians would know that passengers aboard the ship were infected before it arrived.

The absolute travel time matters less for disease spread in this case. It doesn't matter how long it would theoretically take to travel to North Sentinel Island if nobody actually goes there for years on end. Disease won't spread to those places naturally.

And if an organization is so hell-bent on destroying humanity as to track down every last isolated pocket of human settlements on Earth (a difficult task in itself as they're obscure almost by definition) and plant the virus there, they'll most certainly have no trouble bringing it to Mars either.

I'd worry that if we're looking at a potentially civilization-ending pandemic, a would-be warlord with a handful of followers might decide that North Sentinel Island seems like an attractive place to go all of a sudden.

I had always assumed that any organization trying to destroy the world with an engineered pathogen would basically release whatever they made and then hope it did its work.

IDK, this topic gets into a lot of information hazard, where I don't really want to speculate because I don't want to spread ideas for how to make the world a lot worse.


Moving to another planet does not save you from misaligned superintelligence.

 

Depends how super. FOOM to godhood isn't the only possible path for AI.

AI doesn't need godhood to affect another planet. Simply scaling up architectures that are equal in intelligence to the smartest humans to run a billion times in parallel is enough.

There are some major challenges here.

The first is trying to predict what will be a reliable store of value in a world where TAI may disrupt normal power dynamics. For example, if there's a superintelligent AI capable of unilaterally transforming all matter in your light cone into paperclips, is there any sense in which you have enough power to enforce your ownership of anything independent of such an AI? Seems like not, in which case it's very hard to know what assets you can meaningfully own that would be worth owning, let alone by what mechanisms you can meaningfully own things in such a world.

Now we might screen off bad outcomes since they don't matter to this question, but then we're still left with a lot of uncertainty. Maybe it just doesn't matter because we'll be expanding so rapidly that there's little value in existing assets (they'll be quickly dwarfed via expansion). Maybe we'll impose fairness rules that make held assets irrelevant for most things that matter to you. Maybe something else. There's a lot of uncertainty here that makes it hard to be very specific about anything beyond the run up to TAI.

We can, however, I think, give some reasonable advice about the run-up to TAI and what's likely to be best to have invested in just prior to TAI. Much of the advice about semiconductor equities, for example, seems to fall into this camp.

For example, if there's a superintelligent AI capable of unilaterally transforming all matter in your light cone into paperclips, is there any sense in which you have enough power to enforce your ownership of anything independent of such an AI?

No, which is why I "invest" in making bad outcomes a tiny bit less likely with monthly donations to the EA long-term future fund, which funds AI safety research and other X-risk mitigation work.