Jacob Pfau

Comments

AGI Predictions

Great post! I am very curious about how people are interpreting Q10 and Q11, and what their models are. What are prototypical examples of 'insights on a similar level to deep learning'? 

Here's a breakdown of examples that come to mind:

Historical DL-level advances: 

  • the development of RL (the Q-learning algorithm, etc.)
  • the original formulation of a single neuron, i.e. affine transformation + non-linearity (see the numpy sketch below)

Future possible DL-level:

  • a successor to back-prop (e.g. whatever mechanism biological neurons use to learn)
  • a successor to the Q-learning family (e.g. neatly generalizing and extending 'intrinsic motivation' hacks)
  • full brain simulation
  • an alternative to the affine+activation recipe

Below DL-level major advances:

  • an elegant solution for learning from cross-modal inputs in a self-supervised fashion (babies somehow do it)
  • a breakthrough in active learning
  • a generalizable solution to learning disentangled and compositional representations
  • a solution to adversarial examples

Grey areas: 

  • breakthroughs in neural architecture search
  • a breakthrough in neural Turing machine-type research
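
As a concrete reference for the affine+activation recipe mentioned above, here is a minimal numpy sketch of a single neuron; the input size, weights, and the choice of ReLU are arbitrary illustrations, not anything from the post:

```python
import numpy as np

def neuron(x, w, b):
    """Single artificial neuron: affine transformation (w @ x + b)
    followed by a non-linearity (ReLU here)."""
    return np.maximum(0.0, w @ x + b)

x = np.array([1.0, -2.0, 0.5])   # example input
w = np.array([0.3, 0.1, -0.4])   # example weights
b = 0.2                          # example bias
print(neuron(x, w, b))           # prints 0.1
```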

I'd also like to know how people's thinking fits in with my taxonomy: Are people who leaned yes on Q11 basing their reasoning on the inadequacy of the 'below DL-level advances' list, or perhaps on the necessity of the 'DL-level advances' list? Or perhaps people interpreted those questions completely differently, and don't agree with my dividing lines?

Competition: Amplify Rohin’s Prediction on AGI researchers & Safety Concerns

The above estimate was off because I had mistakenly read 'I then compute the fraction #answers(Yes, Yes, Yes) / #answers(Yes, *, *)' as 'I then compute the fraction #answers(Yes, Yes, Yes) / #answers(Yes, Yes, *)'.
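
To make the difference concrete, here is a toy tally; the counts are hypothetical, purely for illustration:

```python
from collections import Counter

# Hypothetical counts of (Q1, Q2, Q3) answer triples -- illustration only.
answers = Counter({
    ("Yes", "Yes", "Yes"): 10,
    ("Yes", "Yes", "No"):   5,
    ("Yes", "No",  "No"):  15,
    ("No",  "No",  "No"):  20,
})

yyy    = answers[("Yes", "Yes", "Yes")]
y_any  = sum(n for (q1, _, _), n in answers.items() if q1 == "Yes")         # (Yes, *, *)
yy_any = sum(n for (q1, q2, _), n in answers.items() if q1 == q2 == "Yes")  # (Yes, Yes, *)

print(yyy / y_any)   # Rohin's actual fraction: 10/30 ~ 0.33
print(yyy / yy_any)  # the fraction I had misread it as: 10/15 ~ 0.67
```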

I agree with Ethan's recent comment that experience with RL matters a lot, so a lot comes down to how the 'Is X's work related to AGI?' criterion is cashed out. On one reading, many NLP researchers do not count; on another, they do. I'd say my previous prediction was a decent, if slightly high, estimate for the scenario in which 'related to AGI' is interpreted narrowly and many NLP researchers are ruled out.

A second major confounder is that prominent AI researchers are far more likely to have been asked for their opinion on AI safety, in which case they have some impetus to read up on the issue.


To cash some of these concerns out into probabilities:

75% that Rohin takes a broad interpretation of 'related to AGI', which includes e.g. the GPT team, NAS research, etc.

33% estimated for (Yes, Yes, Yes), derived by assuming prominent researchers are 2x as likely to have read up on AI safety.

25% after downweighting the 33% to take into account industry being less concerned.
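
Spelling out the arithmetic behind the 33% (this is my own reconstruction; the 7/9 sample rate and the 2x factor are the assumptions doing the work):

```python
# Reconstruction of the adjustment chain -- every input is a rough assumption.
prominent_rate  = 7 / 9   # estimated 'Yes' rate among top-cited researchers (~0.78)
fame_multiplier = 2.0     # assumed: prominent researchers are 2x as likely
                          # to have read up on AI safety
population_rate = prominent_rate / fame_multiplier
print(round(population_rate, 2))  # ~0.39, which I shade down to ~0.33

# The further drop to 25% is a judgment call about industry being
# less concerned, not a computed quantity.
```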

Assuming we're at ~33% now, 50% doesn't seem far out of reach, so my estimates for the following decades are based on the same concerns listed in my comment above, framed with the 33% in mind.

Updated personal distribution: elicited

Updated Rohin's posterior: elicited


(Post competition footnote: it seems to me that over short time horizons we should expect a more-or-less geometric distribution. Think of the roughly independent per-year chance that a NeurIPS keynote features AI safety, or that the YouTube recommender algorithm goes bonkers for a bit. It seems strange to me that some other people's distributions over the next 10-15 years -- if not longer -- do not look geometric.)
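
To make 'more-or-less geometric' concrete, here is a minimal sketch; the per-year hazard p = 0.1 is an arbitrary placeholder, not my actual estimate:

```python
# Geometric first-crossing sketch: a constant, independent per-year chance p
# that some triggering event pushes the fraction over the threshold.
p = 0.10  # placeholder per-year hazard

for year in range(1, 16):
    pmf = (1 - p) ** (year - 1) * p  # P(first trigger in exactly this year)
    cdf = 1 - (1 - p) ** year        # P(triggered by the end of this year)
    print(f"year {year:2d}: pmf={pmf:.3f}  cdf={cdf:.3f}")
```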

Competition: Amplify Rohin’s Prediction on AGI researchers & Safety Concerns

My old prediction for when the fraction will be >= 0.5: elicited

My old prediction for Rohin's posterior: elicited

I went through the top-20 list of most-cited AI researchers on Google Scholar (thanks to Amanda for linking), and estimated that roughly 9 of them may qualify under Rohin's criterion. Of those 9, my guess was that 7/9 would answer 'Yes' on Rohin's question 3.

My sampling process was certainly biased. For one, the most-cited AI researchers are likely to be more safety-conscious than typical industry experts. My estimate also involved considerable guesswork, so I down-weighted the estimated 7/9 to a 65% chance that the >= 0.5 threshold will be met within the first couple of years. Given the extreme difference between my distribution and the others posted, I guess there's a 1/3 chance that my estimate based on the top-20 sampling will carry significant weight in Rohin's posterior.
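
One way to see why shrinking 7/9 downward is reasonable: with only nine samples, even a uniform (Laplace) prior pulls the estimate down, and the sampling-bias worry justifies going lower still. The Beta(1, 1) prior here is my choice for illustration; it does not reproduce 65% exactly:

```python
# Laplace-smoothed estimate from the small top-20 sample.
yes, total = 7, 9
raw     = yes / total              # ~0.78, the raw sample fraction
laplace = (yes + 1) / (total + 2)  # ~0.73 under a uniform Beta(1, 1) prior

print(round(raw, 2), round(laplace, 2))
# The further drop to 65% reflects the sampling-bias concern
# (top-cited researchers likely skew safety-conscious), not the prior alone.
```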

The justification for the rest of my distribution is similar to what others have said here and elsewhere about AI safety. My AGI timeline is roughly in line with the Metaculus estimate here. Before the advent of AGI, a number of eventualities are possible: perhaps a warning shot occurs, perhaps theoretical consensus emerges, perhaps industry researchers remain oblivious to safety concerns because of the principal-agent nature of the problem, perhaps AGI is invented before safety is worked out, etc.

Edit: One could certainly do a better job of estimating where the relevant population of researchers currently stands by finding a less biased sample. Maybe people interviewed by Lex Fridman; that might be a decent proxy for AGI-research fame?