This only works if you're the only bookmaker in town. Even if your potential counterparties place their own subjective odds at 1:7, they won't book action with you at 1:7 if they can get 1:5 somewhere else.
Perhaps I misread OP's motivations, but presumably if you're looking to make money on these kinds of forecasts, you'd just trade stocks. Sure, you can't trade OpenAI per se, but there are a lot of closely related assets, and then you're not stuck trying to collect on a bet you made with a stranger over the internet.
So, the function of offering such a "bet" is more as a signaling device about your beliefs. In which case, the signal being sent here is not really a bearish one.
If you think there's a 40% chance of a crash, then that's quite the vig you're allocating yourself on this bet at 1:7.
These are very poor odds, to the point that they seem to indicate a bullish rather than a bearish position on AI.
There's definitely a better than 1 in 7 chance of a general market crash in the next year, given tariffs and recession risk (or, if you define crash loosely, we've already had one). Given that broader macro risk, a merely 1-in-7 chance of an AI crash probably implies a forecast that AI will outperform the broader market.
If, for whatever reason, one is willing to disregard the macro risk, then there's a lot more upside in just buying QQQ than taking your bet.
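To put numbers on the vig, here's a quick sketch of the implied probability and expected value, assuming the bet is structured as the offerer risking 1 unit to win 7 on the crash side (the usual reading of 1:7):

```python
# Quick expected-value check on the 1:7 bet, assuming the offerer
# takes the "crash" side, risking 1 unit to win 7.

def breakeven_prob(odds_against: float) -> float:
    """Implied probability at which a 1:odds_against bet breaks even."""
    return 1 / (1 + odds_against)

def expected_value(p_win: float, odds_against: float, stake: float = 1.0) -> float:
    """EV per bet: win odds_against * stake with prob p_win, lose stake otherwise."""
    return p_win * odds_against * stake - (1 - p_win) * stake

print(breakeven_prob(7))        # 0.125: the crash only needs a 12.5% chance to break even
print(expected_value(0.40, 7))  # +2.2 units per 1 staked, if you think p(crash) = 40%
```

Anyone taking the other side at those odds is implicitly pricing the crash below 12.5%, which is why the offer reads as bullish.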
There's a kind of paradox in all of these "straight line" extrapolation arguments for AI progress of the sort your timelines assume (e.g., the argument for superhuman coding agents based on the rate of progress in the METR report).
One could extrapolate many different straight lines on graphs in the world right now (GDP, scientific progress, energy consumption, etc.). If we do create transformative AI within the next few years, then all of those straight lines will suddenly hit an inflection point. So, to believe in the straight-line extrapolation of the AI line, you must also believe that almost none of the other straight lines will stay straight.
This seems to be the gut-level disagreement between those who feel the AGI and those who don't; the disbelievers don't buy that the AI line is the one that stays straight while all the others bend.
I don't know who's right and who's wrong in this debate, but the method of reasoning here reminds me of the viral tweet: "My 3-month-old son is now TWICE as big as when he was born. He's on track to weigh 7.5 trillion pounds by age 10." It could be true, but I have a fairly strong prior from nearly every other context that growth/progress tends to bend into an S-curve at one point or another, and so these forecasts seem deeply suspect to me unless there's some better reason to think the trend will continue along the same path.
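For what it's worth, the tweet's arithmetic roughly checks out, and a quick sketch shows how the exact same early data is consistent with an S-curve instead (the 7.5 lb birth weight and ~150 lb carrying capacity are my assumptions, not from the tweet):

```python
import math

# The tweet's extrapolation: weight doubles every 3 months, forever.
birth_weight_lb = 7.5
doublings_by_age_10 = 10 * 12 / 3           # 40 doublings in 120 months
naive = birth_weight_lb * 2 ** doublings_by_age_10
print(f"{naive:.2e} lb")                    # ~8e12 lb: trillions of pounds, as tweeted

# The S-curve alternative: the same early doubling rate, but growth
# saturates at a carrying capacity K (say, ~150 lb for a person).
def logistic(t_months, w0=7.5, K=150.0, r=math.log(2) / 3):
    """Logistic growth matching the 'doubling every 3 months' rate early on."""
    return K / (1 + (K / w0 - 1) * math.exp(-r * t_months))

print(f"{logistic(3):.1f} lb")    # ~14 lb at 3 months: close to the observed doubling
print(f"{logistic(120):.1f} lb")  # ~150 lb at age 10: the curve bends, as curves do
```

The point being that the data we have so far can't distinguish the two curves; only the prior can.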
So, I certainly wouldn't expect the AI companies to capture all the value; you're right that competition drives the profits down. But, I also don't think it's reasonable to expect profits to get competed down to zero. Innovations in IT are generally pretty easy to replicate, technically speaking, but tech companies operate at remarkably high margins. Even at the moment, your various LLMs are similar but are not exact substitutes for one another, which gives each some market power.
Yea, fair enough. His prediction was: "I think we will be there in three to six months, where AI is writing 90% of the code. And then, in 12 months, we may be in a world where AI is writing essentially all of the code."
The second one is more hedged ("may be in a world"), but "essentially all the code" must still translate to a very large fraction of all the value, even if that last 1% or whatever is of outsized economic significance.
The original statement is:
"I think we will be there in three to six months, where AI is writing 90% of the code. And then, in 12 months, we may be in a world where AI is writing essentially all of the code"
So, as I read that, he's not hedging on 90% in three to six months, but he is hedging on "essentially all" (99% or whatever that means) in a year.
I don't doubt they need capital. And the Nigerian prince who needs $5,000 to claim the $100 million inheritance does too. It's the fact that he/they can't get capital at anything close to the claimed value that's suspicious.
Amodei is forecasting AI that writes 90% of code within three to six months, according to his recent comments. Is Anthropic really burning cash so fast that they can't wait a quarter, demonstrate to investors that AI has essentially solved software, and then raise at 10x the valuation?
If AI executives really are as bullish as they say they are on progress, then why are they willing to raise money anywhere in the ballpark of current valuations?
Dario Amodei suggested the other day that AI will take over all or nearly all coding work within months. Given that software is a multi-trillion dollar industry, how can you possibly square that statement with agreeing to raise money at a valuation for Anthropic in the mere tens of billions? And that's setting aside any other value whatsoever for AI.
The whole thing sort of reminds me of the Nigerian prince scam (i.e., the Nigerian prince is coming into an inheritance of tens of millions of dollars but desperately needs a few thousand bucks to claim it, and will cut you in for incredible profit as a result) just scaled up a few orders of magnitude. Anthropic/OpenAI are on the cusp of technologies worth many trillions of dollars, but they're so desperate for a couple billion bucks to get there that they'll sell off big equity stakes at valuations that do not remotely reflect that supposedly certain future value.
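To put rough numbers on the mismatch (every input here is an illustrative round number, not real market data):

```python
# Back-of-the-envelope on the claimed value vs. the accepted valuation.
# All inputs are illustrative assumptions, not actual figures.

software_industry_value = 3e12  # "multi-trillion dollar industry", roughly
share_captured = 0.10           # suppose AI vendors capture just 10% of that value
implied_annual_value = software_industry_value * share_captured
print(f"${implied_annual_value / 1e9:.0f}B per year")  # $300B per year

# Even at a modest revenue multiple, that's trillions in enterprise value,
# versus a raise priced in the tens of billions.
revenue_multiple = 10
implied_valuation = implied_annual_value * revenue_multiple
print(f"${implied_valuation / 1e12:.1f}T implied, vs. tens of billions accepted")
```

You can quibble with any of those inputs, but you have to cut them by two orders of magnitude before the accepted valuation starts to look consistent with the stated forecast.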
It's possible for something to be a useful shorthand even if the underlying facts are dubious (e.g., the "let them eat cake" line doesn't come from Marie Antoinette but nonetheless illuminates the situation at the time; frogs will in fact jump out of water that's heated gradually, but the boiling-frog story still stands in for a useful concept).
I'm not an expert-level Go player, but my general sense is that Move 37 is in this same category. It was a surprising move, but it had a limited impact on the match and was not an optimal move as scored by stronger contemporary Go engines (though it was a very good one). It didn't shift the probability of victory, and Sedol's move 38 was the optimal response to it as scored by KataGo. It seems to have had a psychological effect because it was so surprising, but that's possible even if a move is literally random (as famously happened with Kasparov and Deep Blue).
You can download KataGo and work through this yourself.
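KataGo ships an analysis engine that reads JSON queries over stdin; a minimal sketch of driving it from Python is below. The file paths and the move list are placeholders (you'd feed in the actual game record through move 37), and the id string is just a label:

```python
import json
import subprocess

# Launch KataGo's analysis engine. Paths are placeholders: point them
# at your own downloaded model and analysis config files.
proc = subprocess.Popen(
    ["katago", "analysis", "-model", "model.bin.gz", "-config", "analysis.cfg"],
    stdin=subprocess.PIPE, stdout=subprocess.PIPE, text=True,
)

query = {
    "id": "alphago-lee-g2",
    "moves": [["B", "Q16"], ["W", "D4"]],  # placeholder: paste the real game record here
    "rules": "chinese",
    "komi": 7.5,
    "boardXSize": 19,
    "boardYSize": 19,
    "analyzeTurns": [36],  # turns are 0-indexed: turn 36 is the position where move 37 is played
}
proc.stdin.write(json.dumps(query) + "\n")
proc.stdin.flush()
print(proc.stdout.readline())  # JSON with winrate, score lead, and ranked candidate moves
```

The response ranks candidate moves by winrate and visit count, so you can see exactly where the actual Move 37 falls in KataGo's ordering, and how much (or how little) the evaluation swings after it's played.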