From Part 4 of the report:
Nonetheless, this cursory examination makes me believe that it’s fairly unlikely that my current estimates are off by several orders of magnitude. If the amount of computation required to train a transformative model were (say) ~10 OOM larger than my estimates, that would imply that current ML models should be nowhere near the abilities of even small insects such as fruit flies (whose brains are 100 times smaller than bee brains). On the other hand, if the amount of computation required to train a transformative model were ~10 OOM smaller than my estimate, our models should be as capable as primates or large birds (and transformative AI may well have been affordable for several years).
I'm not sure I totally follow why this should be true-- is this predicated on already assuming that the computation to train a neural network equivalent to a brain with N neurons scales in some particular way with respect to N?
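To make my question concrete, here's a toy sketch (my own construction, using an assumed power-law scaling that is not from the report) of how the answer depends on that assumption. If training compute scales as N^alpha in neuron count N, then a 10 OOM error in the compute estimate implies only a 10/alpha OOM shift in the "equivalent brain size":

```python
def neuron_oom_shift(compute_oom_shift: float, alpha: float) -> float:
    """Toy model (NOT from the report): assume training FLOP = c * N**alpha
    for a brain-equivalent network with N neurons. Then a shift of
    `compute_oom_shift` orders of magnitude in compute implies this many
    orders of magnitude of shift in the implied neuron count N."""
    return compute_oom_shift / alpha

# Under alpha = 1, a 10 OOM compute error maps to a full 10 OOM of brain size;
# under alpha = 2, it maps to only 5 OOM -- so the fruit-fly-vs-primate
# comparison in the quoted passage seems to bake in some particular alpha.
for alpha in (1.0, 2.0):
    print(f"alpha={alpha}: {neuron_oom_shift(10, alpha)} OOM shift in N")
```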
So exciting that this is finally out!!!
I haven't gotten a chance to play with the models yet, but thought it might be worth noting the ways I would change the inputs (though I haven't thought about it very carefully):
I'm a bit confused about this as a piece of evidence-- naively, it seems to me like not carrying the 1 would be a mistake that you would make if you had memorized the pattern for single-digit arithmetic and were just repeating it across the number. I'm not sure if this counts as "memorizing a table" or not.
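To spell out what I mean, here's a toy sketch (my own construction, not from the post) of the failure mode I have in mind: applying a memorized single-digit addition table position by position while discarding carries.

```python
def add_without_carry(a: int, b: int) -> int:
    """Toy model of the hypothesized error: apply the single-digit
    addition pattern column by column, dropping any carry."""
    result, place = 0, 1
    while a > 0 or b > 0:
        digit = (a % 10 + b % 10) % 10  # single-digit table, carry discarded
        result += digit * place
        a, b, place = a // 10, b // 10, place * 10
    return result

print(add_without_carry(57, 68))  # prints 15; the correct answer is 125
```

This reproduces correct answers whenever no column carries (e.g. 12 + 34 = 46), which is why it's unclear to me whether it should count as "memorizing a table" or as a partially learned algorithm.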
This recent post by OpenAI is trying to shed some light on this question: https://openai.com/blog/ai-and-efficiency/
I really like this post.
Self-driving cars are currently illegal, I assume largely because of these unresolved tail risks. But setting legality aside, I'm not sure their economic value is zero-- I could imagine cases where people would use self-driving cars if they wouldn't be caught doing it. Does this seem right to people?
Intuitively it doesn't seem like economic value tails and risk tails should necessarily go together, which makes me concerned about cases similar to self-driving cars that are harder to regulate legally.
What's the corresponding story here for trading bots? Are they designed in a sufficiently high-assurance way that new tail problems don't come up, or do they not operate in the tails?
I rewrote the question-- I think I meant 'counterfactual' in that this isn't a super promising idea if in fact we are just taking medical supplies from one group of people and transferring them to another.
I don't know anything about maintenance/cleaning-- I was thinking it would be particularly useful if we straight up run out of ICU space, i.e., going to an ICU is not an option. (Maybe this is a super unlikely class of scenarios?)
You're totally not obligated to do this, but I think it might be cool if you generated a 3D picture of hills representing your loss function-- I think it would make the intuition for what's going on clearer.
We're not going to do this because we weren't planning on making these public when we conducted the conversations, so we want to give people a chance to make edits to transcripts before we send them out (which we can't do with audio).
The claim is a personal impression that I have from conversations, largely with people concerned about AI risk in the Bay Area. (I also don't like information cascades, and may edit the post to reflect this qualification.) I'd be interested in data on this.