Buck

Comments

Buck's Shortform

I used to think that slower takeoff implied shorter timelines, because slow takeoff means that pre-AGI AI is more economically valuable, which means that the economy advances faster, which means that we get AGI sooner. But there's a countervailing consideration, which is that in slow takeoff worlds, you can make arguments like ‘it’s unlikely that we’re close to AGI, because AI can’t do X yet’, where X might be ‘make a trillion dollars a year’ or ‘be as competent as a bee’. I now overall think that arguments for fast takeoff should update you towards shorter timelines.

So slow takeoffs cause shorter timelines, but are evidence for longer timelines.

This graph illustrates a version of the argument: if we notice that current capabilities are at the level of the green line, then if we think we're on the fast takeoff curve, we'll deduce that we're much closer to AGI than we'd think if we were on the slow takeoff curve.
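A toy numerical version of this (the curve shapes, the year-20 AGI date, and the 0.1 capability level are all made up purely for illustration):

```python
# Toy illustration: two hypothetical capability curves that both reach
# "AGI" (capability = 1.0) at year 20. The fast-takeoff curve stays low
# for longer before shooting up.

def years_until_agi(current_capability, curve_exponent, agi_year=20):
    """Invert capability(t) = (t / agi_year) ** curve_exponent to get time remaining."""
    t_now = agi_year * current_capability ** (1 / curve_exponent)
    return agi_year - t_now

current = 0.1  # hypothetical "green line" level of observed capabilities

print(years_until_agi(current, curve_exponent=1))  # slow takeoff: ~18 years left
print(years_until_agi(current, curve_exponent=4))  # fast takeoff: ~8.8 years left
```

Conditioning on the same observed capability level, the fast-takeoff curve puts us much closer to AGI, which is the sense in which fast takeoff is evidence for shorter timelines.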

For the "slow takeoffs mean shorter timelines" argument, see here: https://sideways-view.com/2018/02/24/takeoff-speeds/

This point feels really obvious now that I've written it down, and I suspect it's obvious to many AI safety people, including the people whose writings I'm referencing here. Thanks to Caroline Ellison for pointing this out to me, and various other people for helpful comments.

I think that this is why belief in slow takeoffs is correlated with belief in long timelines among the people I know who think a lot about AI safety.

How good is humanity at coordination?

I don't really know how to think about anthropics, sadly.

But I think it's pretty likely that a nuclear war would not have killed everyone. So I still lose Bayes points compared to the world where nukes were fired but not everyone died.

$1000 bounty for OpenAI to show whether GPT3 was "deliberately" pretending to be stupider than it is
It's tempting to anthropomorphize GPT-3 as trying its hardest to make John smart. That's what we want GPT-3 to do, right?

I don't feel at all tempted to do that anthropomorphization, and I think it's weird that EY is acting as if this is a reasonable thing to do. Like, obviously GPT-3 is doing sequence prediction--that's what it was trained to do. Even if it turns out that GPT-3 correctly answers questions about balanced parens in some contexts, I feel pretty weird about calling that "deliberately pretending to be stupider than it is".

Possible takeaways from the coronavirus pandemic for slow AI takeoff

If the linked SSC article is about the aestivation hypothesis, see the rebuttal here.

Six economics misconceptions of mine which I've resolved over the last few years

Remember that I’m not interested in evidence here; this post is just about what the theoretical analysis says :)

In an economy where the relative wealth of rich and poor people is constant, poor people and rich people both have consumption equal to their income.

Six economics misconceptions of mine which I've resolved over the last few years

I agree that there's some subtlety here, but I don't think that all that happened is that my model got more complex.

I think I'm trying to say something more like "I thought that I understood the first-order considerations, but actually I didn't." Or "I thought that I understood the solution to this particular problem, but actually that problem had a different solution than I thought it did". Eg in the situations of 1, 2, and 3, I had a picture in my head of some idealized market, and I had false beliefs about what happens in that idealized market, just as I could be wrong about the Nash equilibrium of a game.

I wouldn't have included something on this list if I had just added complexity to the model in order to capture higher-order effects.

Six economics misconceptions of mine which I've resolved over the last few years

I agree that the case where there are several equilibrium points that are almost as good for the employer is the case where the minimum wage looks best.

Re point 1, note that the minimum wage decreases total consumption, because it reduces efficiency.

What will be the big-picture implications of the coronavirus, assuming it eventually infects >10% of the world?

I've now made a Guesstimate here. I suspect that it is very bad and dumb; please make your own that is better than mine. I'm probably not going to fix problems with mine. Some people like Daniel Filan are confused by what my model means; I am like 50-50 on whether my model is really dumb or just confusing to read.

Also don't understand this part. "4x as many mild cases as severe cases" is compatible with what I assumed (10%-20% of all cases end up severe or critical) but where does 3% come from?

Yeah, my text was wrong here; I meant that I think you get 4x as many unnoticed infections as confirmed infections, and that 10-20% of confirmed cases end up severe or critical.

What will be the big-picture implications of the coronavirus, assuming it eventually infects >10% of the world?

Oh yeah I'm totally wrong there. I don't have time to correct this now. Some helpful onlooker should make a Guesstimate for all this.

What will be the big-picture implications of the coronavirus, assuming it eventually infects >10% of the world?

Epistemic status: I don't really know what I'm talking about. I am not at all an expert here (though I have been talking to some of my more expert friends about this).

EDIT: I now have a Guesstimate model here, but its results don't really make sense. I encourage others to make their own.

Here's my model: To get such a large death toll, there would need to be lots of people who need oxygen all at once and who can't get it. So we need to multiply the proportion of people who might be infected all at once by the fatality rate for such people. I'm going to use point estimates here and note that they look way lower than yours; this should probably be a Guesstimate model.

Fatality rate

This comment suggests maybe 85% fatality of confirmed cases if they don't have a ventilator, and 75% without oxygen. EDIT: This is totally wrong, see replies. I will fix it later. Idk what it does to the bottom line.

But there are plausibly way more mild cases than confirmed cases. In places with aggressive testing, like the Diamond Princess and South Korea, you see much lower fatality rates, which suggests that lots of cases are mild and therefore don't get confirmed. So plausibly there are 4x as many mild cases as confirmed cases. This gets us to something like a 3% fatality rate (again assuming no supplemental oxygen, which I don't think is clearly right; I expect someone else could make progress on forecasting this if they want).
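A rough way to get that ~3% from the point estimates above (treating severe/critical confirmed cases without oxygen as fatal, which is a simplifying assumption rather than an established figure):

```python
# Rough reconstruction of the ~3% figure from the point estimates above.
severe_frac_of_confirmed = 0.15  # midpoint of the 10-20% severe/critical estimate
fatality_if_no_oxygen = 1.0      # pessimistic: severe/critical cases without oxygen die
unnoticed_per_confirmed = 4      # "4x as many mild/unnoticed cases as confirmed"

fatality_among_all_infections = (
    severe_frac_of_confirmed * fatality_if_no_oxygen / (1 + unnoticed_per_confirmed)
)
print(fatality_among_all_infections)  # 0.03, i.e. ~3%
```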

How many people get it at once

(If we assume that like 1000 people in the US currently have it, and doubling time is 5 days, then peak time is like 3 months away.)
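A back-of-the-envelope version of that timing estimate (the 1000 current cases and 5-day doubling time are from the parenthetical above; the US population figure and the "half the population" cutoff are extra assumptions):

```python
import math

current_cases = 1_000        # assumed current US cases
doubling_time_days = 5       # assumed doubling time
us_population = 330_000_000  # rough US population

# Doublings needed for unchecked exponential growth to reach half the population.
doublings = math.log2(0.5 * us_population / current_cases)
days_to_peak = doublings * doubling_time_days
print(doublings, days_to_peak)  # ~17.3 doublings, ~87 days, i.e. roughly 3 months
```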

To get to overall 2.5% fatality, you need more than 80% of living humans to get it, in a big clump such that they don't have oxygen access (see the quick check after the list below). This probably won't happen (I'd say 20% chance it does), because of arguments like the following:

  • This doesn't seem to have happened in China, so it seems possible to prevent.
    • China is probably unusually good at handling this, but even if only China does this
  • Flu is spread out over a few months, and it's more transmissible than this, and not everyone gets it. (Maybe it's because of immunity to flu from previous flus?)
  • If the fatality rate looks like it's on the high end, people will try harder not to get it.
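The quick check mentioned above: the 80% threshold comes from dividing the target overall mortality by the estimated fatality rate among people who get infected without oxygen access.

```python
target_overall_mortality = 0.025  # the 2.5% figure from the question
fatality_without_oxygen = 0.03    # the ~3% point estimate derived above

required_infected_fraction = target_overall_mortality / fatality_without_oxygen
print(required_infected_fraction)  # ~0.83, i.e. more than 80% of living humans
```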

Other factors that discount it

  • The warm weather might make it a lot less bad. (10% hail mary?)
  • Effective countermeasures might be invented in the next few months. Eg we might notice that some existing antiviral is helpful. People are testing a bunch of these, and there are some that might be effective. (20% hail mary?)

Conclusion

This overall adds up to like 20% * (1-0.1-0.2) = 14% chance of 2.5% mortality, based on multiplications of point estimates which I'm sure are invalid.
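For what it's worth, the whole point-estimate chain fits in a few lines (this is a sketch of the numbers in this comment, not the Guesstimate model itself):

```python
# Sketch of the point-estimate chain in this comment (not the Guesstimate model).
p_huge_clump = 0.20          # P(>80% of people infected in a clump without oxygen access)
p_saved_by_weather = 0.10    # warm-weather hail mary
p_saved_by_treatment = 0.20  # existing antiviral or other countermeasure works

p_2_5_percent_mortality = p_huge_clump * (1 - p_saved_by_weather - p_saved_by_treatment)
print(p_2_5_percent_mortality)  # 0.14, i.e. ~14% chance of 2.5% mortality
```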
