I currently have something like 50% chance that the point of no return will happen by 2030. Moreover, it seems to me that there's a wager for short timelines, i.e. you should act as if short timelines scenarios are more likely than they really are, because you have more influence over them. I think that I am currently taking short timelines scenarios much more seriously than most people, even most people in the AI safety community. I suppose this is mostly due to having higher credence in them, but maybe there are other factors as well.

Anyhow, I wonder if there are ways for me to usefully bet on this difference.

Money is only valuable to me prior to the point of no return, so the value to me of a bet that pays off after that point is reached is approximately zero. In fact it's not just money that has this property. This means that no matter how good the odds are that you offer me, and even if you pay up front, I'm better off just taking out a low-interest loan instead. Besides, I don't need money right now anyway, at least not to continue my research activities. I'd only be able to achieve significant amounts of extra good if I had quite a lot more money.

What do I need right now? I guess I need knowledge and help. I'd love to have a better sense of what the world will be like and what needs to be done to save it. And I'd love to have more people doing what needs to be done.

Can I buy these things with money? I don't think so... As the linked post argues, knowledge isn't something you can buy, in general. On some topics it is, but not all, and in particular not on the topic of what needs to be done to save the world. As for help, I've heard from various other people that hiring is net-negative unless the person you hire is both really capable and really aligned with your goals. But IDK.

There are plenty of people who are really capable and really aligned with my goals. Some of them are already helping, i.e. already doing what needs to be done. But most aren't. I think this is mostly because they disagree about what needs to be done, and I think that is largely because their timelines are longer than mine. So, maybe we can arrange some sort of bet... for example, maybe I could approach people who are capable and aligned but have longer timelines, and say: "If you agree to act as if my timelines are correct for the next five years, I'll act as if yours are correct thereafter."

Any suggestions?


If timelines are short, where does the remaining value live? Some fairly Babble-ish ideas:

  • Alignment-by-default
    • Both outer alignment and inner by default
      • With full alignment by default, there's nothing to do, I think! One could be an accelerationist, but the reduction in suffering and lives lost now doesn't seem large enough for the cost in probability of aligned AI
      • Possibly value could be lost if values aren't sufficiently cosmopolitan? One could try and promote cosmopolitan values
    • Inner alignment by default
      • Focus on tools for getting good estimates of human values, or an intent-aligned AI
        • Ought's work is a good example
        • Possibly trying to experiment with governance / elicitation structures, like quadratic voting
        • Also thinking about how to get good governance structures actually used
  • Acausal trade
    • In particular, expand the ideas in this post. (I understand Paul to be claiming he argues for tractability somewhere in that post, but I couldn't find it)
    • Work through the details of UDT games, and how we could effect proper acausal trade. Figure out how to get the relevant decision makers on board
  • Strong, fairly late institutional responses
    • Work on making, for example, states strong enough to restrict or stop AI development (in a coordinated way)

Other things that seem useful:

  • Learn the current hot topics in ML. If timelines are short, it's probably the case that AGI will use extensions of the current frontier
  • Invest in leveraging AI tools for direct work / getting those things that money cannot buy. This may be a little early, but if the takeoff is at all soft, maybe there are still >10 years left of 2020-level intellectual work before 2030 if you're using the right tools

Thanks! I found this the most helpful of the answers so far. I'd be interested to hear more about leveraging AI tools for direct work; can you say more?

Sure! I see people on twitter, for example, doing things like having GPT-3 provide autocomplete or suggestions while they're writing, or doing the grunt work of producing web apps. Plausibly, figuring out how to get the most value out of future AI developments for improving productivity is important.

There's an issue that it's not very obvious exactly how to prepare for various AI tools in the future. One piece of work could be thinking more about how to flexibly prepare for AI tools with unknown capabilities, or predicting what the capabilities will be. Other things that come to mind are:

  • Practice getting up to speed in new tool setups. If you are very bound to a setup that you like, you might have a hard time leveraging these advances as they come along. Alternatively, try and be sure you can extend your current workflow
  • Increase the attention you pay to new (AI) tools. Get used to trying them out, both for the reasons above and because it may be important to act fast in picking up very helpful new tools

To be clear, it's not super clear to me how much value there is in this direction. It is pretty plausible to me that AI tooling will be essential for competitive future productivity, but maybe there's not much of an opportunity to bet on that

I don't think the problem you're running into is a problem with making bets, it's a problem with leverage.

Heck, you've already figured out how to place a bet that pays off in the future but pays you money now: a loan. Combined with either the implicit bet on the end of the world freeing you from repayment, or an explicit one with a more-AI-skeptical colleague, this gets you your way of betting on AI risk that pays now.

Where it falls short is that most loanmaking organisations will at most offer you slightly more than the collateral you can put up. Because, well, to most loanmaking organisations you're just a big watery bag of counterparty risk, and if they loan you substantially more than your net worth they're very unlikely to get it back - even if you lose your bet! 

But this is a problem people have run into before! Every day there are organisations who want to get lots more cash than they can put up in collateral in order to make risky investments that might not pay off. Those organisations sell shares. Shares entitle the buyer to a fraction of the uncertain future revenues, and it's that upside risk - the potential for the funder to make a lot more money than was put in - that separates them from loans.

Now as an individual you're cut off from stock markets. The closest approximation available is venture capital. That gives you almost everything you want, except that it requires you come up with a way to monetise your beliefs. 

The other path is to pay your funders in expected better-worlds, and that takes you to the door of charitable funding. Here I'm thinking both of places like the LTFF and SAF, and more generally of HNW funders themselves. The former is pretty accessible, but limited in its capacity. The latter is less accessible, but with much greater capacity. In both cases they expect more than just a bet thesis; they require a plan to actually pay them back some better-worlds!

It's worth noting that if you actually have a plan - even a vague one! - for reducing the risk from short AI timelines, then you shouldn't have much trouble getting some expenses out of LTFF/SAF/etc to explore it. They're pretty generous. If you can't convince them of your plan's value - then in all honesty your plan likely needs more work. If you can convince them, it's a solid path to substantially more direct funding. 

But those, I think, are the only possible solutions to your issue. They all have some sort of barrier to entry, but that's necessary because from the outside you're indistinguishable from any other gambler!

I have roughly similar beliefs and I've thought about the same question before.

The hope is that you could make more specific bets based on trends which are not currently clear to the world as a whole but will become apparent relatively soon. For example, I think I remember Gwern asking whether, if the scaling power of larger NNs continues, Nvidia will become the most valuable company in the world as the power of truly massive models/training volumes becomes apparent and they're in prime position to profit.

The problem is that shares of companies at the frontier of AI development are already subject to a lot of hype from somewhat similar beliefs (e.g. anyone who is a major blockchain believer, or a big AI believer but in a purely positive sense). These stocks are therefore already significantly overvalued by traditional metrics, and it's not obvious whether NN progress is enough to generate major share price growth, at least with high enough probability to overcome the presumably very high discount rates that you have, even within the next 10 years (e.g. Nvidia's market cap is $360B, so even becoming the largest company in the world only implies a ~6x price increase, and it's hard to give that more than 15% credence in the next decade).
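To make the arithmetic explicit: here is a rough expected-value sketch of the Nvidia scenario. The ~6x multiple and 15% credence are from the paragraph above; the assumption that the stock is roughly flat in the other 85% of worlds is mine, purely for illustration.

```python
# Expected-value sketch for the "Nvidia becomes the largest company" bet.
# Hypothetical simplification: the stock is flat if the scenario fails.

def annualized(multiple: float, years: float) -> float:
    """Convert a total return multiple over `years` into an annual growth factor."""
    return multiple ** (1 / years)

p_win = 0.15      # credence in the ~6x scenario within a decade
upside = 6.0      # "largest company in the world" multiple from a $360B base
downside = 1.0    # assumed flat otherwise (my assumption, not from the thread)
years = 10

expected_multiple = p_win * upside + (1 - p_win) * downside  # 1.75
rate = annualized(expected_multiple, years)                  # ~1.058, i.e. ~5.8%/yr
```

On these assumptions the bet only returns ~6% a year in expectation, which is roughly index-fund territory and easily swamped by a high personal discount rate.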

It seems that if you believe specifically in short timelines then there may be companies who are particularly likely to succeed given the importance of massive models (if indeed that's the way you expect things to play out). At the moment though, most of those in position to take advantage seem to either be embedded in larger companies (DeepMind, big tech AI divisions) or just not public (OpenAI, most startups). 

Ideally I guess there would be a venture capital fund which you could place money into which would invest in the most promising companies which themselves are betting on being in position to take commercial advantage of ML breakthroughs. I'm not sure I'm aware of any such fund but I'd certainly be interested if one exists/is being created.

Don't know of a venture capital fund like that, but there's apparently a "Rise of the Robots" ETF.

Agreed. Like I said though, money isn't even my main constraint, at least not small amounts of money. (If you have an idea for how I could make a million dollars or more, or otherwise 10x my savings, let me know!) Any ideas for how I could do a bet that pays off in knowledge or help?

A way to bet on shorter timelines is to get payment now and return 10x in 2035, using the money for short-timelines research in the meantime. For example, someone who believes in long timelines gives you 100 USD now, and you return 1000 USD in 2035.

I already did this, yeah. But as I pointed out in the OP, this is strictly inferior to just taking out a low-interest loan. I get 100 USD now and pay back, like, 130 USD in 2035.
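For concreteness, here is the implied-interest-rate comparison behind that point, assuming the money changes hands in 2021 (so ~14 years to 2035; the 130 USD repayment figure is the ballpark from the comment above):

```python
# Compare the implied annual interest rate of the 10x bet vs. a low-interest loan.
# The 14-year horizon (2021 -> 2035) is an assumption for illustration.

def implied_annual_rate(payback: float, principal: float, years: float) -> float:
    """Annual growth factor implied by repaying `payback` on `principal` after `years`."""
    return (payback / principal) ** (1 / years)

years = 2035 - 2021                                 # 14
bet_rate = implied_annual_rate(1000, 100, years)    # ~1.179, i.e. ~17.9%/yr
loan_rate = implied_annual_rate(130, 100, years)    # ~1.019, i.e. ~1.9%/yr
```

So the 10x-in-2035 arrangement is borrowing at roughly 18% a year, versus under 2% for the loan, which is why the loan strictly dominates.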

Bet an asset instead of money.
Alternatively, you could bet that market odds will change significantly before then.

Concretely, you could use ETH denominated Augur for a long-term bet, or USDC for a short-term bet on odds.

If money isn't worth betting, why would assets be worth betting? Aren't assets the sorts of things you can buy with money? Maybe I don't know what you have in mind here.

I like the idea of betting the market odds will change. But if there's already a market for the thing I'm interested in, I can just play that market, e.g. buy stocks or calls of MSFT or something. But like I said, money isn't my main problem right now anyway.

I was imagining a utility function for fiat with a singular limit at t=15yr, such that any bet paying out fiat is worthless. Think hyperinflation caused by it being obvious that we are facing imminent doom. I don't see how stocks necessarily correlate with the prediction you're making.

Money is only valuable to me prior to the point of no return

Because money is only useful to you for preventing us from reaching that point of no return?

What if it's a very slow burn after the point of no return? Presumably you'd still want to live your life and spend on yourself and loved ones (and even altruistically, on more short-term causes). No?

Nah. Once it's clear we are all doomed, I'll get by just fine probably--it's unlikely that I'll have spent literally all my money and social capital by then. And if I don't, it won't matter much anyway.