Andy Jones


AI Safety Needs Great Engineers

It's not impossible, but it appears unlikely for the foreseeable future. We do sponsor visas, but if that doesn't suit, I'd take a look at Cohere.ai: they're one org I know of with a safety team that is fully on board with remote work.

AI Safety Needs Great Engineers

> You're presenting it as a litmus test for engineers to apply to themselves, and that's fine as far as it goes

I can reassure you that it is in fact a litmus test for engineers to apply to themselves, and that's as far as it goes.

While part of me is keen to discuss our interview design further, I'm afraid you've done a great job of laying out some of the reasons not to!

Analysis of World Records in Speedrunning [LINKPOST]

I stuck this on Twitter already, but normalised, these shake out to a consistent-ish set of curves:
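To make that concrete, here's one plausible version of the normalisation as a sketch (the DataFrame layout and column names are my own assumptions, not the original analysis code):

```python
import pandas as pd

# Hypothetical input: one row per world record, with columns
# 'game', 'date', and 'record_seconds'.
records = pd.DataFrame({
    "game": ["A", "A", "A", "B", "B", "B"],
    "date": pd.to_datetime(["2010-01-01", "2012-06-01", "2018-03-01",
                            "2011-05-01", "2013-02-01", "2019-08-01"]),
    "record_seconds": [3600, 3000, 2700, 900, 800, 750],
})

def normalise(group: pd.DataFrame) -> pd.DataFrame:
    g = group.sort_values("date").copy()
    # Map each game's record history onto [0, 1] in time...
    g["t"] = (g["date"] - g["date"].min()) / (g["date"].max() - g["date"].min())
    # ...and map record times onto [0, 1] between the first and best record.
    g["r"] = ((g["record_seconds"] - g["record_seconds"].min())
              / (g["record_seconds"].max() - g["record_seconds"].min()))
    return g

normalised = records.groupby("game", group_keys=False).apply(normalise)
# Plotting 't' against 'r' per game overlays every game's progression
# on comparable axes, which is what makes the curves comparable.
```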
 

 

What made the UK COVID-19 case count drop?

I've been surprised by this too, and my best explanation so far is schools. Evidence in favour: Scottish school holidays start at the end of June, while English school holidays start in mid-July, and indeed there looks to be roughly a two-week gap between the peaks for the two nations.

A good test for this will be this week's ONS report. The report doesn't have the English turnaround in it yet, but if the cause is schools, there should be an extremely sharp drop in the school-age rates.

All that said, it's only my best hypothesis. A strong piece of evidence against it is that we haven't seen the same effect in the US, where school holidays started a while ago.

Covid 7/1: Don’t Panic

One piece of evidence against this: almost all of the uptick in the UK is in folks under 40. Under-40s have a much lower vaccination rate due to the age-dependent rollout, but because of the blood-clot scare, under-40s have preferentially gotten Pfizer. Over-40s, meanwhile, have a very high vaccination rate, but it's mostly AstraZeneca. Their case rate is flat.

Source
 

Are we in an AI overhang?

Nine months later I consider my post pretty 'shrill', for want of a better adjective. I regret not making more concrete predictions at the time, because yeah, reality has substantially undershot my fears. I think there's still a substantial chance of something 10x large being revealed within 18 months (which I think is the upper bound on 'timeline measured in months'), but it looks very unlikely that there'll be a 100x increase in that time frame. 

One factor I got wrong in writing the above was treating my massive update in response to GPT-3 as somewhere near the median response, rather than as a substantial outlier. As another example of this: I am the only person I know of who, after GPT-3, dropped everything they were doing to re-orient their career towards AI safety. And that's within circles of people you'd think would be primed to respond similarly!

I still think AI projects could be run at vastly larger budgets, so in that sense I still believe in there being an orders-of-magnitude overhang. Just convincing the people with those budgets to fund these projects is apparently much harder than I thought.

I am not unhappy about this.

How can I bet on short timelines?

I don't think the problem you're running into is a problem with making bets; it's a problem with leverage.

Heck, you've already figured out how to place a bet that pays off in the future but gives you money now: a loan. Combine that with either the implicit bet that the end of the world frees you from repayment, or an explicit side bet with a more-AI-skeptical colleague, and you have a way of betting on AI risk that pays out today.
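As a toy illustration of how the loan-plus-side-bet structure pays out (all numbers are hypothetical; this is a sketch of the mechanism, not financial advice):

```python
# Borrow cash now; a skeptical colleague agrees to cover the repayment
# if short timelines resolve in your favour, and you repay it otherwise.
loan = 50_000       # cash in hand today (hypothetical)
repayment = 55_000  # principal plus interest owed at maturity
p_short = 0.3       # your credence that the bet resolves your way

# You keep the loan either way; in expectation you only repay when the
# skeptic wins, which happens with probability (1 - p_short).
expected_net = loan - (1 - p_short) * repayment
print(f"Cash now: {loan:,}; expected net outcome: {expected_net:,.0f}")
```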

Where it falls short is that most loanmaking organisations will at most offer you slightly more than the collateral you can put up. Because, well, to most loanmaking organisations you're just a big watery bag of counterparty risk, and if they loan you substantially more than your net worth they're very unlikely to get it back - even if you lose your bet! 

But this is a problem people have run into before! Every day there are organisations that want far more cash than they can put up in collateral, in order to make risky investments that might not pay off. Those organisations sell shares. Shares entitle the buyer to a fraction of uncertain future revenues, and it's that upside risk - the potential for the funder to make a lot more money than was put in - that separates them from loans.

Now, as an individual you're cut off from issuing shares on the stock market. The closest approximation available is venture capital. That gives you almost everything you want, except that it requires you to come up with a way to monetise your beliefs.

The other path is to pay your funders in expected better-worlds, and that takes you to the door of charitable funding. Here I'm thinking both of places like the LTFF and SAF, and more generally of high-net-worth funders themselves. The former is pretty accessible but limited in its capacity; the latter is less accessible but has much greater capacity. In both cases they expect more than just a bet thesis; they require a plan to actually pay them back some better-worlds!

It's worth noting that if you actually have a plan - even a vague one! - for reducing the risk from short AI timelines, then you shouldn't have much trouble getting some expenses out of the LTFF/SAF/etc to explore it. They're pretty generous. If you can't convince them of your plan's value, then in all honesty your plan likely needs more work. If you can convince them, it's a solid path to substantially more direct funding.

But those, I think, are the only possible solutions to your issue. They all have some sort of barrier to entry, but that's necessary because from the outside you're indistinguishable from any other gambler!

About a local variation of Rock-Paper-Scissors and how it self-negated its own problematic dynamic

You may be interested in alpha-rank. It's an Elo-esque rating system for highly 'nontransitive' games - i.e., games where there are lots of rock-paper-scissors-style cycles.

At a high level, it sets up a graph like the one you've drawn, places a token on a random node, and then repeatedly follows the 'defeated-by' edges. The long-run fraction of time spent on a node gives the strength of that strategy.
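For intuition, here's a toy sketch of that random walk on plain rock-paper-scissors (this captures the flavour of alpha-rank, not its actual perturbed-Markov-chain construction; the graph and names here are mine):

```python
import numpy as np

# 'Defeated-by' graph: an edge u -> v means strategy u loses to v.
strategies = ["rock", "paper", "scissors"]
defeated_by = {"rock": ["paper"], "paper": ["scissors"], "scissors": ["rock"]}

n = len(strategies)
index = {s: i for i, s in enumerate(strategies)}

# Transition matrix: from each node, step uniformly along its
# 'defeated-by' edges (an unbeaten strategy would stay put).
T = np.zeros((n, n))
for s, winners in defeated_by.items():
    if winners:
        for w in winners:
            T[index[s], index[w]] = 1.0 / len(winners)
    else:
        T[index[s], index[s]] = 1.0

# Long-run fraction of time the token spends on each node, averaged
# over many steps so that periodic cycles settle down.
dist = np.ones(n) / n
occupancy = np.zeros(n)
steps = 10_000
for _ in range(steps):
    dist = dist @ T
    occupancy += dist
occupancy /= steps

for s in strategies:
    print(f"{s}: {occupancy[index[s]]:.3f}")  # ~0.333 each: a pure cycle
```

On a pure cycle the three strategies come out equally strong, which is the sensible answer a plain Elo rating can't give you.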

You might also be interested in rectified policy-space response oracles (rectified PSRO), one approach to finding new, better strategies in nontransitive games.

Draft report on AI timelines

This is superb, and I think it'll have a substantial impact on the debate going forward. Great work!

  • Short-term willingness to spend is something I've been thinking a lot about recently. My beliefs about expansion rates are strangely bimodal:
    • If AI services are easy to turn into monopolies - if they have strong moats - then the growth rate should be extraordinary as legacy labour is displaced and the revenues are re-invested into improving the AI. In this case, blowing through $1bn/run seems plausible.
    • If AI services are easy to commodify - weak or no moats - then the growth rate should stall pretty badly. We'll end up with many, many small AI systems with lots of replicated effort, rather than one big one. In this case, investment growth could stall out in the very near future. The first $100m run that fails to turn a profit could be the end of the road.
  • I used to be heavily biased towards the former scenario, but recent evidence from the nascent AI industry has started to sway me.
  • One outside view is that AI services are just one more mundane technology, and we should see a growth curve much like the tech industry's so far.
    • A slightly-more-inside-view is that they're just one more mundane cloud technology, and we should see a growth curve that looks like AWS's.
  • A key piece of evidence will be how much profit OpenAI turns on GPT. If Google and Facebook come out with substitute products in short order and language modelling gets commodified down to zero profits, that'll sway me to the latter scenario. I'm not sure how to interpret the surprisingly high price of the OpenAI API in this context.
  • Another thing which has been bugging me - but I haven't put much thought into yet - is how to handle the inevitable transition from 'training models from scratch' to 'training as an ongoing effort'. I'm not sure how this changes the investment dynamics.

Does crime explain the exceptional US incarceration rate?

You can get a complementary analysis by comparing the US to its past self: between 1975 and 2000, the incarceration rate grew five-fold while the homicide rate fell by half.
