One adversarial prior would be "my prior for bets of expected value X is O(1/X)".
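A quick numeric sketch (my own illustration, not from the comment) of why this prior defuses ever-larger claimed payoffs: if the probability that a bet of claimed expected value X is genuine falls off as c/X, then the expected-value contribution X · P(X) is constant, so naming a bigger number buys the mugger nothing.

```python
def prior(x, c=1.0):
    """Hypothetical adversarial prior: P(claim of value x is genuine) = c / x."""
    return c / x

# The EV contribution x * prior(x) is the same (= c) no matter how large x gets.
for x in [10, 1_000, 1_000_000]:
    print(x, x * prior(x))  # prints a constant 1.0 for every x
```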
No public estimates, but the difficulty of self-driving cars definitely pushed my AGI timelines back. In 2018 I predicted full self-driving by 2023; now that’s looking unlikely. Yes, text and image understanding and generation have improved a lot since 2018, but instead of shortening my estimates that’s merely rotated which capabilities will come online earlier and which will wait until AGI.
However, I expect some crazy TAI in the next few years. I fully expect “solve all the Millennium Prize Problems” to be doable without AGI, as well as much of coding/design/engineering work. I also think it’s likely that text models will be able to do the work of a paralegal/research assistant/copywriter without AGI.
Additionally, if you have a problem which can be solved by either (a) crime or (b) doing something complicated to fix it, your ability to do (b) is higher the smarter you are.
It would be nice to have a couple examples comparing concrete distributions Q and P and examining their KL-divergence, why it's large or small, and why it's not symmetric.
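In that spirit, here is one concrete pair (my own worked example, assumptions labeled in the comments): P and Q are discrete distributions over two outcomes, and KL(P‖Q) = Σᵢ pᵢ log(pᵢ/qᵢ). The divergence is large when Q puts little mass somewhere P puts a lot, and the two directions generally disagree.

```python
import math

def kl(p, q):
    """KL(P || Q) = sum_i p_i * log(p_i / q_i), in nats (0 log 0 := 0)."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

P = [0.5, 0.5]   # a fair coin
Q = [0.9, 0.1]   # a heavily biased coin

print(kl(P, Q))  # ≈ 0.511 nats
print(kl(Q, P))  # ≈ 0.368 nats — not symmetric

# The divergence blows up as Q starves an outcome that P cares about:
Q2 = [0.99, 0.01]
print(kl(P, Q2))  # ≈ 1.614 nats, driven by the 0.5 * ln(0.5/0.01) term
```

Intuitively, KL(P‖Q) is the expected extra surprise of someone who believes Q when the data actually come from P, which is why penalizing Q's near-zero probabilities under P (and not vice versa) breaks the symmetry.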
If I were making one up, I might say "g distributes over composition of f".
It would be better if it were an organization that merely had contradictory goals (maybe a degrowth anarcho-socialist group? A hardcore anti-science Christian organization?) but wasn't organized around dislike of our group specifically.
More likely, the AI just finds a website whose GET requests have side effects (non-compliant with HTTP semantics), or a GET endpoint with a SQL injection vulnerability.
I agree that before that point, an AI will be transformative, but not to the point of “AGI is the world superpower”.
That’s what people used to say about chess and go. Yes, mathematics requires intuition, but so does chess; the game tree’s too big to be explored fully.
Mathematics requires greater intuition and has a much broader and deeper “game” tree, but once we figure out the analogue to self-play, I think it will quickly surpass human mathematicians.
GPT-4 (Edited because I realized I put way more than 5% weight on the original phrasing): SOTA on language translation for every language (not just English/French and whatever else GPT-3 has), without fine-tuning.
Not GPT-4 specifically, assuming they keep the focus on next-token prediction of all human text, but "around the time of GPT-4": superhuman theorem proving. I expect one of the Millennium Prize Problems to be solved by an AI sometime in the next 5 years.