
Buck's Comments

What will be the big-picture implications of the coronavirus, assuming it eventually infects >10% of the world?

I've now made a Guesstimate here. I suspect that it is very bad and dumb; please make your own that is better than mine. I'm probably not going to fix problems with mine. Some people like Daniel Filan are confused by what my model means; I am like 50-50 on whether my model is really dumb or just confusing to read.

Also don't understand this part. "4x as many mild cases as severe cases" is compatible with what I assumed (10%-20% of all cases end up severe or critical) but where does 3% come from?

Yeah, my text was wrong here; I meant that I think you get 4x as many unnoticed infections as confirmed infections, and then 10-20% of confirmed cases end up severe or critical.

What will be the big-picture implications of the coronavirus, assuming it eventually infects >10% of the world?

Oh yeah I'm totally wrong there. I don't have time to correct this now. Some helpful onlooker should make a Guesstimate for all this.

What will be the big-picture implications of the coronavirus, assuming it eventually infects >10% of the world?

Epistemic status: I don't really know what I'm talking about. I am not at all an expert here (though I have been talking to some of my more expert friends about this).

EDIT: I now have a Guesstimate model here, but its results don't really make sense. I encourage others to make their own.

Here's my model: To get such a large death toll, there would need to be lots of people who need oxygen all at once and who can't get it. So we need to multiply the proportion of people who might be infected all at once by the fatality rate for such people. I'm going to use point estimates here and note that they look way lower than yours; this should probably be a Guesstimate model.

Fatality rate

This comment suggests maybe 85% fatality of confirmed cases if they don't have a ventilator, and 75% without oxygen. EDIT: This is totally wrong, see replies. I will fix it later. Idk what it does to the bottom line.

But there are plausibly way more mild cases than confirmed cases. In places with aggressive testing, like the Diamond Princess and South Korea, you see much lower fatality rates, which suggests that lots of cases are mild and therefore don't get confirmed. So plausibly there are 4x as many mild cases as confirmed cases. This gets us to something like a 3% fatality rate (again assuming no supplemental oxygen; I don't think that assumption is clearly right, and I expect someone else could make progress on forecasting it if they want).
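For what it's worth, here is one way to reproduce the ~3% figure from the numbers above, as a quick point-estimate sketch. The 15% severe/critical share is the midpoint of the 10-20% range, and the ~85% fatality-without-oxygen figure for severe/critical cases is my reading of the (disputed) numbers above, not an established value.

```python
# Rough point-estimate sketch of where the ~3% figure can come from.
# Assumptions (mine, not established): 15% of confirmed cases severe/critical,
# ~85% fatality for those cases without supplemental oxygen, and 4 unnoticed
# infections per confirmed case.
severe_frac_of_confirmed = 0.15
fatality_if_no_oxygen = 0.85
unnoticed_per_confirmed = 4  # so confirmed cases are ~1/5 of all infections

fatality_all_infections = (
    severe_frac_of_confirmed * fatality_if_no_oxygen / (1 + unnoticed_per_confirmed)
)
print(f"{fatality_all_infections:.1%}")  # ~2.5%, i.e. roughly 3%
```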

How many people get it at once

(If we assume that like 1000 people in the US currently have it, and doubling time is 5 days, then peak time is like 3 months away.)
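As a sanity check of that timeline, here's the doubling-time arithmetic spelled out; the ~150M saturation point (roughly half the US population) is my own illustrative assumption, while the 1,000 current cases and 5-day doubling time come from the sentence above.

```python
import math

current_cases = 1_000            # from the text
doubling_time_days = 5           # from the text
saturation_cases = 150_000_000   # assumption: roughly where growth must level off

doublings = math.log2(saturation_cases / current_cases)  # ~17 doublings
days_to_peak = doublings * doubling_time_days            # ~86 days, i.e. about 3 months
print(f"{doublings:.1f} doublings, ~{days_to_peak:.0f} days to peak")
```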

To get to overall 2.5% fatality, you need more than 80% of living humans to get it, in a big clump such that they don't have oxygen access (a quick check of that figure is sketched after the list below). This probably won't happen (20%), because of arguments like the following:

  • This doesn't seem to have happened in China, so it seems possible to prevent.
    • China is probably unusually good at handling this, but even if only China does this
  • Flu is spread out over a few months, and it's more transmissible than this, and not everyone gets it. (Maybe that's because of immunity from previous flu strains?)
  • If the fatality rate looks like it's on the high end, people will try harder not to get it.
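A quick check of where the ">80%" threshold comes from, using the ~3% no-oxygen fatality estimate from the previous section:

```python
target_overall_mortality = 0.025   # the 2.5% mortality scenario in question
fatality_rate_no_oxygen = 0.03     # rough point estimate from the fatality-rate section

required_infected_fraction = target_overall_mortality / fatality_rate_no_oxygen
print(f"{required_infected_fraction:.0%}")  # ~83%, hence "more than 80%"
```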

Other factors that make this less likely

  • The warm weather might make things a lot less bad. (10% hail mary?)
  • Effective countermeasures might be invented in the next few months. E.g., we might notice that some existing antiviral is helpful; people are testing a bunch of these, and some might be effective. (20% hail mary?)

Conclusion

This overall adds up to like 20% * (1-0.1-0.2) = 14% chance of 2.5% mortality, based on multiplications of point estimates which I'm sure are invalid.
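Spelled out, with exactly the numbers given above:

```python
p_big_clump_no_oxygen = 0.20       # >80% of humanity infected in a clump without oxygen access
p_weather_hail_mary = 0.10         # warm weather makes things much less bad
p_countermeasure_hail_mary = 0.20  # e.g. an existing antiviral turns out to work

p_high_mortality = p_big_clump_no_oxygen * (1 - p_weather_hail_mary - p_countermeasure_hail_mary)
print(f"{p_high_mortality:.0%}")  # 14%
```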

What will be the big-picture implications of the coronavirus, assuming it eventually infects >10% of the world?

Just for the record, I think that this estimate is pretty high and I'd be pretty surprised if it were true; I've talked to a few biosecurity friends about this and they thought it was too high. I'm worried that this answer has been highly upvoted even though lots of people think it's wrong. I'd be excited for more commenters to give their bottom-line predictions about this, so that it's easier to see the spread.

Wei_Dai, are you open to betting about this? It seems really important for us to have well-calibrated beliefs about this.

AIRCS Workshop: How I failed to be recruited at MIRI.

(I'm unsure whether I should write this comment referring to the author of this post in second or third person; I think I'm going to go with third person, though it feels a bit awkward. Arthur reviewed this comment before I posted it.)

Here are a couple of clarifications about things in this post, which might be relevant for people who are using it to learn about the MIRI recruiting process. Note that I'm the MIRI recruiter Arthur describes working with.

General comments:

I think Arthur is a really smart, good programmer. Arthur doesn't have as much background in AI safety stuff as many of the people I consider as candidates for MIRI work, but it seemed worth spending effort on bringing him to AIRCS etc. because it would be really cool if it worked out.

In this post, Arthur reports a variety of people as saying things that I think he has somewhat misinterpreted, and I disagree with several of the things he describes them as saying.

I still don't understand that: what's the point of inviting me if the test fails? It would appear more cost-efficient to wait until after the test to decide whether they want me to come or not (I don't think I ever asked this out loud; I was already happy to have a free trip to California).

I thought it was very likely Arthur would do well on the two-day project (he did).

I do not wish to disclose how much I have been paid, but I'll state that two hours at that rate was more than a day at the French PhD rate. I didn't even ask to be paid; I hadn't even thought that being paid for a job interview was possible.

It's considered good practice to pay people for trial work; we paid Arthur a rate that is lower than you'd pay a Bay Area software engineer as a contractor, and I was getting Arthur to do somewhat unusually difficult (though unusually interesting) work.

I assume that if EA cares about animal suffering in itself, then using throwaways is less of a direct suffering factor.

Yep

So Anna Salamon gave us a rule: we don't speak of AI safety to people who have not expressed the desire to hear about it. When I asked for more information, she specified that it is okay to mention the words "AI Safety", but not to give any details until the other person is sure they want to hear about them. In practice, this means it is okay to share a book or post on AI safety, but we should warn the person to read it only once they feel ready. Which leads to a related problem: some people have never experienced an existential crisis or anxiety attack in their life, so it's all too possible that they can't really "be ready".

I think this is a substantial misunderstanding of what Anna said. I don't think she was trying to propose a rule that people should follow, and she definitely wasn't explaining a rule of the AIRCS workshop or anything like that; I think she was doing something a lot more like sharing her own thoughts about how people should relate to AI risk. I might come back and edit this comment later to say more.

That means that, during circles, I was asked to be as honest as possible about my feelings while also being considered for an internship. This is extremely awkward.

For the record, I think that "being asked to be as honest as possible" is a pretty bad description of what circling is, though I'm sad that it came across this way to Arthur (I've already talked to him about this).

But just because they do not think of AIRCS as a job interview does not mean AIRCS is not a job interview. Case in point: half a week after the workshop, the recruiter told me that "After discussing some more, we decided that we don't want to move forward with you right now". So the workshop really was what led them to decide not to hire me.

For the record, the workshop indeed made the difference about whether we wanted to make Arthur an offer right then. I think this is totally reasonable--Arthur is a smart guy, but not that involved with the AI safety community; my best guess before the AIRCS workshop was that he wouldn't be a good fit at MIRI immediately because of his insufficient background in AI safety, and then at the AIRCS workshop I felt like it turned out that this guess was right and the gamble hadn't paid off (though I told Arthur, truthfully, that I hoped he'd keep in contact).

During a trip to the beach, I finally had the courage to tell the recruiter that AIRCS is quite hard for me to navigate, since it's both a CFAR workshop and a job interview.

:( This is indeed awkward and I wish I knew how to do it better. My main strategy is to be as upfront and accurate with people as I can; AFAICT, my level of transparency with applicants is quite unusual. This often isn't sufficient to make everything okay.

First: they could mention to people coming to AIRCS as part of a job interview that some things will be awkward for them, but that they get the same workshop as everyone else, so they'll have to deal with it.

I think I do mention this (and am somewhat surprised that it was a surprise for Arthur).

Furthermore, I do understand why it's generally a bad idea to tell unknown people in your buildings that they won't get the job.

I wasn't worried about Arthur destroying the AIRCS venue; I needed to confer with my coworkers before making a decision.

I do not believe that my first piece of advice will be listened to. During a discussion on the last night, near the fire, the recruiter was talking with some other MIRI staff and participants, and at some point they mentioned MIRI's recruiting process. I think they were saying that they loved recruiting because it leads them to work with extremely interesting people, but that it's hard to find them. Given that my goal was explicitly to be recruited, and that I hadn't received an answer yet, it was extremely awkward for me. I can't state explicitly why; after all, I didn't have to add anything to their remark. But even if I can't explain why I think that, I still firmly believe that it's the kind of thing a recruiter should avoid saying near a potential hire.

I don't quite understand what Arthur's complaint is here, though I agree that it's awkward for people to be at events with people who are considering hiring them.

MIRI here is an exception. I can see so many reasons not to hire me that the outcome was unsurprising; what was surprising was the process, and that they considered me in the first place.

Arthur is really smart and it seemed worth getting him more involved in all this stuff.

We run the Center for Applied Rationality, AMA

For the record, parts of that ratanon post seem extremely inaccurate to me; for example, the claim that MIRI people are deferring to Dario Amodei on timelines is not even remotely reasonable. So I wouldn't take it that seriously.

Let's talk about "Convergent Rationality"

In OpenAI's Roboschool blog post:

This policy itself is still a multilayer perceptron, which has no internal state, so we believe that in some cases the agent uses its arms to store information.
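To make "no internal state" concrete: a feedforward policy like the one described computes its action from the current observation alone, so anything it "remembers" has to be written into the environment (e.g. where it parks its arms). The toy sketch below is purely illustrative and is not OpenAI's actual Roboschool policy or its dimensions.

```python
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(32, 8)), np.zeros(32)  # toy weights: 8-dim observation
W2, b2 = rng.normal(size=(4, 32)), np.zeros(4)   # 4-dim action

def mlp_policy(observation: np.ndarray) -> np.ndarray:
    """Stateless: the same observation always produces the same action."""
    hidden = np.tanh(W1 @ observation + b1)
    return np.tanh(W2 @ hidden + b2)
```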

Buck's Shortform

formatting problem, now fixed

Aligning a toy model of optimization

Given a policy π we can directly search for an input on which it behaves a certain way.

(I'm sure this point is obvious to Paul, but it wasn't to me)

We can search for inputs on which a policy behaves badly, which is really helpful for verifying the worst case of a particular policy. But we can't search for a policy which has a good worst case, because that would require using the black box inside the function passed to the black box, which we can't do. I think you can also say this as "the black box is an NP oracle, not a Σ_2 oracle".

This still means that we can build a system which in the worst case does nothing, rather than one which in the worst case is dangerous: we do whatever we like to get some policy, then we search for an input on which it behaves badly, and if one exists we don't run the policy.
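A minimal sketch of that "check the worst case, refuse to run if it fails" pattern, treating the black-box search as an abstract function; the names here are mine, not from Paul's post.

```python
def deploy_if_safe(search, policy, behaves_badly):
    """Return the policy only if the black box finds no input on which it behaves badly.

    `search(f)` stands in for the black box: it returns some input x with
    f(x) == True if one exists, and None otherwise.
    """
    bad_input = search(lambda x: behaves_badly(policy, x))
    if bad_input is not None:
        return None  # worst case: do nothing, rather than run a dangerous policy
    return policy
```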

Robustness to Scale

I think that the terms introduced by this post are great and I use them all the time.
