interpreting GPT: the logit lens

Are these known facts? If not, I think there's a paper in here.

Will OpenAI's work unintentionally increase existential risks related to AI?
But what if they reach AGI during their speed-up?

I agree, but I think it's unlikely OpenAI will be the first to build AGI.

(Except maybe if it turns out AGI isn't economically viable).

Will OpenAI's work unintentionally increase existential risks related to AI?

OpenAI's work speeds up progress, but in a way that likely makes later progress smoother. If you spend as much compute as possible now, you reduce potential surprises in the future.

Are we in an AI overhang?

Last year it took Google Brain only half a year to build a Transformer 8x larger than GPT-2 (the T5), and they concluded that model size is a key component of progress. So I won't be surprised if they release something with a trillion parameters this year.

Delegate a Forecast

I'm not sure if a probability counts as continuous?

If so, what's the probability that this paper would get into Nature (main journal) if submitted? Or even better, how much more likely is it to get into The Lancet Public Health vs Nature? I can give context by PM.

The Puzzling Linearity of COVID-19

"Why are most COVID-19 infection curves linear?

Many countries have passed their first COVID-19 epidemic peak. Traditional epidemiological models describe this as a result of non-pharmaceutical interventions that pushed the growth rate below the recovery rate. In this new phase of the pandemic many countries show an almost linear growth of confirmed cases for extended time-periods. This new containment regime is hard to explain by traditional models where infection numbers either grow explosively until herd immunity is reached, or the epidemic is completely suppressed (zero new cases). Here we offer an explanation of this puzzling observation based on the structure of contact networks. We show that for any given transmission rate there exists a critical number of social contacts, Dc, below which linear growth and low infection prevalence must occur. Above Dc traditional epidemiological dynamics takes place, as e.g. in SIR-type models. When calibrating our corresponding model to empirical estimates of the transmission rate and the number of days being contagious, we find Dc ~ 7.2. Assuming realistic contact networks with a degree of about 5, and assuming that lockdown measures would reduce that to household-size (about 2.5), we reproduce actual infection curves with a remarkable precision, without fitting or fine-tuning of parameters. In particular we compare the US and Austria, as examples for one country that initially did not impose measures and one that responded with a severe lockdown early on. Our findings question the applicability of standard compartmental models to describe the COVID-19 containment phase. The probability to observe linear growth in these is practically zero."

The ground of optimization

Seconded that the academic style really helped, particularly discussing the problem and prior work early on. One classic introduction paragraph I found missing was "what have prior works left unaddressed?".

FHI paper on COVID-19 government countermeasures

Think of it like a one-sided vs. a two-sided test. You can have a 95% CI that overlaps zero, such as [-2, 30], because 2.5% of the probability mass lies above 30 and 2.5% below -2; yet the probability of a positive (>0) effect can still exceed 95%. This can also happen with frequentist CIs.
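A minimal numeric sketch of this, assuming a Normal posterior with hypothetical mean 14 and standard deviation 8.16 (values chosen only so the central 95% interval comes out near [-2, 30]):

```python
# Illustration: a Normal(14, 8.16) posterior has a central 95% credible
# interval of roughly [-2, 30], overlapping zero, yet P(effect > 0) > 0.95.
# The mean/sd are hypothetical, picked to reproduce the [-2, 30] interval.
from math import erf, sqrt

def normal_cdf(x, mu, sigma):
    return 0.5 * (1 + erf((x - mu) / (sigma * sqrt(2))))

mu, sigma = 14.0, 8.16
lo = mu - 1.96 * sigma            # ~ -2.0
hi = mu + 1.96 * sigma            # ~ 30.0
p_positive = 1 - normal_cdf(0, mu, sigma)
print(lo, hi, p_positive)
```

The interval puts 2.5% of the mass below -2, so the mass below 0 is only slightly more than 2.5%, and the probability of a positive effect lands between 95% and 97.5%.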

A credible interval is the Bayesian analog to a confidence interval.

FHI paper on COVID-19 government countermeasures

We have no info on that, sorry. That's because we have a single feature that is switched on when most schools are closed. Universities were closed 75% of the time when that happened, IIRC.

How to do remote co-working

Yes, these are also great options. I used them in the past but somehow didn't keep them up.

Co-working with a friend is a good option for people like me who benefit from having someone who expects me to be there (and whom I'm socially comfortable with).
