Andy Jones

Comments

How can I bet on short timelines?

I don't think the problem you're running into is a problem with making bets; it's a problem with leverage.

Heck, you've already figured out how to place a bet that'll pay off in the future but pays you money now: a loan. Combined with either the implicit bet on the end of the world freeing you from repayment, or an explicit one with a more-AI-skeptical colleague, this gets you your way of betting on AI risk that pays now.
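
To put rough numbers on that implicit bet (purely illustrative figures of my own): borrow L for ten years at 5%, so the nominal repayment is about 1.05^10 × L ≈ 1.63L. If you assign, say, a 50% chance to the loan never coming due - the 'end of the world' branch - then the expected repayment is only about 0.81L, so under your own beliefs the borrowing is cheap.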

Where it falls short is that most loanmaking organisations will at most offer you slightly more than the collateral you can put up. Because, well, to most loanmaking organisations you're just a big watery bag of counterparty risk, and if they loan you substantially more than your net worth they're very unlikely to get it back - even if you lose your bet! 

But this is a problem people have run into before! Every day there are organisations who want to get lots more cash than they can put up in collateral in order to make risky investments that might not pay off. Those organisations sell shares. Shares entitle the buyer to a fraction of the uncertain future revenues, and it's that upside risk - the potential for the funder to make a lot more money than was put in - that separates them from loans.

Now, as an individual you're cut off from issuing shares on the stock market. The closest approximation available is venture capital. That gives you almost everything you want, except that it requires you to come up with a way to monetise your beliefs.

The other path is to pay your funders in expected better-worlds, and that takes you to the door of charitable funding. Here I'm thinking both of places like the LTFF and SAF, and more generally of high-net-worth (HNW) funders themselves. The former is pretty accessible, but limited in its capacity. The latter is less accessible, but with much greater capacity. In both cases they expect more than just a bet thesis; they require a plan to actually pay them back some better-worlds!

It's worth noting that if you actually have a plan - even a vague one! - for reducing the risk from short AI timelines, then you shouldn't have much trouble getting some expenses out of LTFF/SAF/etc to explore it. They're pretty generous. If you can't convince them of your plan's value, then in all honesty your plan likely needs more work. If you can convince them, it's a solid path to substantially more direct funding.

But those, I think, are the only possible solutions to your issue. They all have some sort of barrier to entry, but that's necessary because from the outside you're indistinguishable from any other gambler!

About a local variation of Rock-Paper-Scissors and how it self-negated its own problematic dynamic

You may be interested in alpha-rank. It's an Elo-esque ranking system for highly 'nontransitive' games - i.e., games where there are lots of rock-paper-scissors-esque cycles.

At a high level, it sets up a graph like the one you've drawn, places a token on a random node, and repeatedly follows the 'defeated-by' edges. The fraction of time the token spends on each node gives the strength of that strategy.
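
Here's a minimal sketch of that random-walk picture, using a toy payoff structure of my own; the actual alpha-rank paper defines the Markov chain over joint strategy profiles with a mutation/intensity parameter, so treat this as an illustration rather than the real algorithm:

```python
import numpy as np

# Toy 'defeated-by' structure: beats[i, j] = True means strategy i beats strategy j.
strategies = ['rock', 'paper', 'scissors']
beats = np.array([
    [False, False, True],   # rock beats scissors
    [True,  False, False],  # paper beats rock
    [False, True,  False],  # scissors beats paper
])

def walk_strengths(beats, steps=100_000, seed=0):
    """Random walk along 'defeated-by' edges; visit frequency approximates strategy strength."""
    rng = np.random.default_rng(seed)
    n = len(beats)
    visits = np.zeros(n)
    node = rng.integers(n)
    for _ in range(steps):
        visits[node] += 1
        defeaters = np.flatnonzero(beats[:, node])  # strategies that beat the current one
        if len(defeaters) > 0:
            node = rng.choice(defeaters)            # hop to a random defeater
        # if nothing beats the current strategy, the walk stays put
    return visits / steps

print(dict(zip(strategies, walk_strengths(beats))))
# Plain rock-paper-scissors is symmetric, so each strategy ends up with strength ~1/3.
```

For a pure cycle this just recovers the uniform distribution; the interesting behaviour shows up once some strategies beat several others, as in your variant.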

You might also be interested in rectified Policy Space Response Oracles (PSRO), which is one approach to finding new, better strategies in nontransitive games.

Draft report on AI timelines

This is superb, and I think it'll have a substantial impact on the debate going forward. Great work!

  • Short-term willingness to spend is something I've been thinking a lot about recently. My beliefs about expansion rates are strangely bimodal:
    • If AI services are easy to turn into monopolies - if they have strong moats - then the growth rate should be extraordinary as legacy labour is displaced and the revenues are re-invested into improving the AI. In this case, blowing through $1bn/run seems plausible.
    • If AI services are easy to commodify - weak or no moats - then the growth rate should stall pretty badly. We'll end up with many, many small AI systems with lots of replicated effort, rather than one big one. In this case, investment growth could stall out in the very near future. The first $100m run that fails to turn a profit could be the end of the road.
  • I used to be heavily biased towards the former scenario, but recent evidence from the nascent AI industry has started to sway me.
  • One outside view is that AI services are just one more mundane technology, and we should see a growth curve much like the tech industry's so far.
    • A slightly-more-inside view is that they're just one more mundane cloud technology, and we should see a growth curve that looks like AWS's.
  • A key piece of evidence will be how much profit OpenAI turns on GPT. If Google and Facebook come out with substitute products in short order and language modelling gets commodified down to zero profits, that'll sway me to the latter scenario. I'm not sure how to interpret the surprisingly high price of the OpenAI API in this context.
  • Another thing which has been bugging me - but I haven't put much thought into yet - is how to handle the inevitable transition from 'training models from scratch' to 'training as an ongoing effort'. I'm not sure how this changes the investment dynamics.

Does crime explain the exceptional US incarceration rate?

You can get a complementary analysis by comparing the US to its past self on incarceration rate and homicide rate. Between 1975 and 2000, the incarceration rate grew five-fold while the homicide rate fell by half.

The Era Of Unlimited Everything: Unlimited Materials & Unlimited Money

Bit of a tangent, but while we might plausibly run out of cheap oil in the near future, the supply of expensive, unconventional oil is vast. By vast I mean 'several trillion barrels of known reserves', against an annual consumption of 30bn.
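
As a rough sanity check on those figures (taking 'several trillion' to be ~3tn barrels, my assumption): 3,000bn barrels / 30bn barrels per year ≈ 100 years of supply at current consumption.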

The question is just how much of those reserves is accessible at each price point. This is really hard to answer well, so instead here's an anecdote that'll stick in your head: recent prices ($50-$100/bbl) are sufficient that the US is now the largest producer of oil in the world, and a net exporter to boot.

For what it's worth, this whole unconventional oil thing has appeared from nowhere in the last ten years, and it's been a shock to a lot of people.

Are we in an AI overhang?

Thanks for the feedback! I've cleaned up the constraints section a bit, though it's still less coherent than the first section.

Out of curiosity, what was it that convinced you this isn't an infohazard-like risk?

Are we in an AI overhang?

While you're here and chatting about D.5 (I assume you meant 5), another tiny thing that confuses me - Figure 21. Am I right in reading the bottom two lines as 'seeing 255 tokens and predicting the 256th is exactly as difficult as seeing 1023 tokens and predicting the 1024th'?

Edit: on another look, I realise Fig 20 shows things much more clearly - never mind, things continue to get easier with token index.

Are we in an AI overhang?

Though it's not mentioned in the paper, I feel like this could be because the scaling analysis was done on 1024-token sequences. Maybe longer sequences can go further.

It's indeed strange that no-one else has picked up on this, which makes me feel I'm misunderstanding something. The breakdown suggested in the scaling law does imply that this specific architecture doesn't have much further to go. Whether the limitation is something as fundamental as 'the information content of language itself', or something more easily bypassed like 'the information content of 1024-token strings', is unclear.

My instinct is for the latter, though again, since no-one else has mentioned it - not even the paper authors - I get the uncomfortable feeling I'm misunderstanding something. That said, having written that quote a few days ago and had no-one pull me up on it since has increased my confidence that it's a viable interpretation.

Are we in an AI overhang?

'Why the hell has our competitor got this transformative capability that we don't?' is not a hard thought to have, especially among tech executives. I would be very surprised if there wasn't a running battle over long-term perspectives on AI in the C-suite of both Google Brain and DeepMind.

If you do want to think along these lines though, the bigger question for me is why OpenAI released the API now, and gave concrete warning of the transformative capabilities they intend to deploy in six? twelve? months' time. 'Why the hell has our competitor got this transformative capability that we don't?' is not a hard thought now, but that's largely because the API was a piece of compelling evidence thrust in all of our faces.

Maybe they didn't expect it to latch into the dev-community consciousness like it has, or for it to be quite as compelling a piece of evidence as it's turned out to be. Maybe it just seemed like a cool thing to do and in line with their culture. Maybe it's an investor demo for how things will be monetised in future, which will enable the $10bn punt they need to keep abreast of Google.

Are we in an AI overhang?

hey man wanna watch this language model drive my car
