Matthew Barnett

Just someone who wants to learn about the world. I think about AI risk sometimes, but I still have a lot to learn.

Comments

Forecasting Thread: AI Timelines
  • Your percentiles:
    • 5th: 2040-10-01
    • 25th: above 2100-01-01
    • 50th: above 2100-01-01
    • 75th: above 2100-01-01
    • 95th: above 2100-01-01

XD

Forecasting Thread: AI Timelines

If AGI is taken to mean the first year in which there is radical economic, technological, or scientific progress, then these are my AGI timelines.

My percentiles:

  • 5th: 2029-09-09
  • 25th: 2049-01-17
  • 50th: 2079-01-24
  • 75th: above 2100-01-01
  • 95th: above 2100-01-01

I assign a somewhat lower probability to near-term AGI than many people here do. I model my biggest disagreement as being about how much work is required to move from high-cost impressive demos to real economic performance. I also have an intuition that it is really hard to automate everything, and that progress will be bottlenecked by the tasks that are essential but very hard to automate.

Reflections on AI Timelines Forecasting Thread

Here, Metaculus predicts when transformative economic growth will occur. Current status:

25% chance before 2058.

50% chance before 2093.

75% chance before 2165.


My guide to lifelogging
Other pros of some body cams: goes underwater without a casing blocking the mic (I think)

I haven't tried it, but I don't think it can go underwater. It is built to be water resistant, but I'm not confident it can be completely submerged. Therefore, if you are a frequent snorkeler, I recommend getting an action camera instead.

Forecasting Thread: AI Timelines

It's unclear to me what "human-level AGI" is, and it's also unclear to me why the prediction is about the moment an AGI is turned on somewhere. From my perspective, the important thing about artificial intelligence is that it will accelerate technological, economic, and scientific progress. So, the more important thing to predict is something like, "When will real economic growth rates reach at least 30% worldwide?"

It's worth comparing the vagueness in this question with the specificity in this one on Metaculus. From the Twelve Virtues of Rationality:

The tenth virtue is precision. One comes and says: The quantity is between 1 and 100. Another says: the quantity is between 40 and 50. If the quantity is 42 they are both correct, but the second prediction was more useful and exposed itself to a stricter test. What is true of one apple may not be true of another apple; thus more can be said about a single apple than about all the apples in the world. The narrowest statements slice deepest, the cutting edge of the blade.

What specific dangers arise when asking GPT-N to write an Alignment Forum post?
To me the most obvious risk (which I don't ATM think of as very likely for the next few iterations, or possibly ever, since the training is myopic/SL) would be that GPT-N in fact is computing (e.g. among other things) a superintelligent mesa-optimization process that understands the situation it is in and is agent-y.

Do you have any idea of what the mesa objective might be? I agree that this is a worrisome risk, but I was more interested in the type of answer that specified, "Here's a plausible mesa objective given the incentives." Mesa optimization is a more general risk that isn't specific to the narrow training scheme used by GPT-N.

Six economics misconceptions of mine which I've resolved over the last few years
It’s embarrassing that I was confidently wrong about my understanding of so many things in the same domain. I’ve updated towards thinking that microeconomics is trickier than most other similarly simple-seeming subjects like physics, math, or computer science. I think that the above misconceptions are more serious than any misconceptions about other technical fields which I’ve discovered over the last few years.

For some of these, I'm confused about your conviction that you were "confidently wrong" before. It seems that the general pattern here is that you used the Econ 101 model to interpret a situation, and then later discovered that there was a more complex model that provided different implications. But isn't it kind of obvious that for something in the social sciences, there's always going to be some sort of more complex model that gives slightly different predictions?

When I say that a basic model is wrong, I mean that it gives fundamentally incorrect predictions, and that a model of similar complexity would have provided better ones. However, at least in the cases of (3) and (4), I'm not sure I'd really describe your previous models as "wrong" in this sense. And I think there's a meaningful distinction between saying you were wrong and saying you gained a more nuanced understanding of something.

Modelling Continuous Progress
Second, the major disagreement is between those who think progress will be discontinuous and sudden (such as Eliezer Yudkowsky, MIRI) and those who think progress will be very fast by normal historical standards but continuous (Paul Christiano, Robin Hanson).

I'm not actually convinced this is a fair summary of the disagreement. As I explained in my post about different AI takeoffs, I had the impression that the primary disagreement between the two groups was over locality rather than the amount of time takeoff lasts. Though of course, I may be misinterpreting people.

Possible takeaways from the coronavirus pandemic for slow AI takeoff

I tend to think that the pandemic shares more properties with fast takeoff than it does with slow takeoff. Under fast takeoff, a very powerful system will spring into existence after a long period of AI being otherwise irrelevant, in a similar way to how the virus was dormant until early this year. The defining feature of slow takeoff, by contrast, is a gradual increase in abilities from AI systems all across the world.

In particular, I object to this portion of your post,

The "moving goalposts" effect, where new advances in AI are dismissed as not real AI, could continue indefinitely as increasingly advanced AI systems are deployed. I would expect the "no fire alarm" hypothesis to hold in the slow takeoff scenario - there may not be a consensus on the importance of general AI until it arrives, so risks from advanced AI would continue to be seen as "overblown" until it is too late to address them.

I'm not convinced that these parallels to COVID-19 are very informative. Compared to this pandemic, I expect the direct effects of AI to be very obvious to observers, in a similar way to how the direct effects of cars are obvious to people who go outside. Under a slow takeoff, AI will already be performing a lot of important economic labor before the world "goes crazy" in the important senses. Compare this to the pandemic, in which:

  • It is not empirically obvious that the disease is worse than a seasonal flu (we only know that it is due to careful data analysis, after months of collection).
  • It is not clearly affecting everyone around you in the way that cars, computers, software, and other forms of engineering are.
  • It is considered natural, and it primarily affects old people, who are conventionally considered less worthy of concern (though people pay lip service to denying this).

Is AI safety research less parallelizable than AI research?

For an alternative view, you may find this response from an 80,000 Hours podcast interesting. Here, Paul Christiano appears to reject the claim that AI safety research is less parallelizable than AI research.

Robert Wiblin: I guess there’s this open question of whether we should be happy if AI progress across the board just goes faster. What if yes, we can just speed up the whole thing by 20%. Both all of the safety and capabilities. As far as I understand there’s kind of no consensus on this. People vary quite a bit on how pleased they’d be to see everything speed up in proportion.
Paul Christiano: Yes. I think that’s right. I think my take which is a reasonably common take, is it doesn’t matter that much from an alignment perspective. Mostly, it will just accelerate the time at which everything happens and there’s some second-order terms that are really hard to reason about like, “How good is it to have more computing hardware available?” Or ”How good is it for there to be more or less kinds of other political change happening in the world prior to the development of powerful AI systems?”
There’s these higher order questions where people are very uncertain of whether that’s good or bad but I guess my take would be the net effect there is kind of small and the main thing is I think accelerating AI matters much more on the like next 100 years perspective. If you care about welfare of people and animals over the next 100 years, then acceleration of AI looks reasonably good.
I think that’s like the main upside. The main upside of faster AI progress is that people are going to be happy over the short term. I think if we care about the long term, it is roughly awash and people could debate whether it’s slightly positive or slightly negative and mostly it’s just accelerating where we’re going.