SoerenMind

Comments

Will OpenAI's work unintentionally increase existential risks related to AI?

No. Amodei led the GPT-3 project, so he's clearly not opposed to scaling things. I don't know why they're leaving, but since they're all starting a new thing together, I presume that's the reason.

New SARS-CoV-2 variant

Some expert commentary here: https://www.sciencemag.org/news/2020/12/mutant-coronavirus-united-kingdom-sets-alarms-its-importance-remains-unclear

Noteworthy:

  • We previously thought a strain from Spain was spreading faster than the rest, but it was just because of people returning from holiday in Spain.
  • Chance events can help a strain spread faster.
  • The UK (and Denmark) do more gene sequencing than other countries - that may explain why they picked up the new variant first.
  • The strain has acquired 17 mutations at once, which is unusually high. It's not clear what that means.

Continuing the takeoffs debate

For example, moving from a 90% chance to a 95% chance of copying a skill correctly doubles the expected length of any given transmission chain, allowing much faster cultural accumulation. This suggests that there’s a naturally abrupt increase in the usefulness of culture

This makes sense when there's only one type of thing to teach or imitate. But some things are easier to teach and imitate than others (e.g. catching a fish vs. building a house). And while there may be an abrupt jump in the ability to teach or imitate each particular skill, this argument doesn't show that there will be a jump in the number of skills that can be taught or imitated (which is what matters).
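
For concreteness, here is the arithmetic behind the quoted claim, as a sketch (the post's exact model may differ; this assumes each link in a transmission chain independently copies the skill correctly with probability p). If L is the chain length counting the original skill holder, then P(L ≥ k) = p^(k-1), so

E[L] = \sum_{k=0}^{\infty} p^k = \frac{1}{1-p}

At p = 0.9 this gives E[L] = 10; at p = 0.95 it gives E[L] = 20. Moving from 90% to 95% exactly doubles the expected chain length, which is where the quoted jump comes from.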

Covid Covid Covid Covid Covid 10/29: All We Ever Talk About

Right, to be clear, that's the sort of number I have in mind, and not one I would call far, far lower.

Covid Covid Covid Covid Covid 10/29: All We Ever Talk About

the infection fatality rate is far, far lower [now]

Just registering that, based on my reading of people who study the IFR over time, this is a highly contentious claim, especially in the US.

interpreting GPT: the logit lens

Are these known facts? If not, I think there's a paper in here.

Will OpenAI's work unintentionally increase existential risks related to AI?

But what if they reach AGI during their speed-up?

I agree, but I think it's unlikely OpenAI will be the first to build AGI.

(Except maybe if it turns out AGI isn't economically viable).

Will OpenAI's work unintentionally increase existential risks related to AI?

OpenAI's work speeds up progress, but in a way that likely makes later progress smoother. If you spend as much compute as possible now, you reduce potential surprises in the future.

Are we in an AI overhang?

Last year it took Google Brain only half a year to make a Transformer 8x larger than GPT-2 (the T5). And they concluded that model size is a key component of progress. So I wouldn't be surprised if they release something with a trillion parameters this year.

Delegate a Forecast

I'm not sure if a probability counts as continuous?

If so, what's the probability that this paper would get into Nature (main journal) if submitted? Or, even better, how much more likely is it to get into The Lancet Public Health vs. Nature? I can give context by PM. https://doi.org/10.1101/2020.05.28.20116129
