That's outside the scope of this post, and it's also an open research question. I agree that, mathematically, there's considerable uncertainty. I used the word "evidence" simply to mean that we haven't found general polynomial-time algorithms yet.
Maybe I didn't articulate my point very well. These problems contain a mix of NP-hard compute requirements and subjective judgements.
Packing is sometimes a matter of listing everything in a spreadsheet and then executing a simple algorithm on it, but sometimes the spreadsheet is difficult to fully specify.
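To illustrate the easy half of that, here's a minimal sketch of the "spreadsheet plus simple algorithm" version of packing; the item names, weights, and values are made up, and the greedy value-per-weight heuristic is just one simple choice, not an exact knapsack solver:

```python
# Toy packing: once the "spreadsheet" (item, weight, value) is fully
# specified, a simple greedy pass by value density already does a decent job.

items = [  # hypothetical entries
    ("tent", 2.5, 9),
    ("stove", 1.0, 6),
    ("book", 0.5, 2),
    ("camera", 0.8, 5),
]

def greedy_pack(items, capacity):
    """Pick items in order of value per unit weight until the bag is full."""
    chosen = []
    remaining = capacity
    for name, weight, value in sorted(items, key=lambda x: x[2] / x[1], reverse=True):
        if weight <= remaining:
            chosen.append(name)
            remaining -= weight
    return chosen

print(greedy_pack(items, capacity=4.0))
```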
Playing Pokemon well does not strike me as an NP-hard problem. It contains pathfinding, for which there are efficient solutions, and beyond that most of it is handled well by a greedy approach.
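For the pathfinding piece specifically, a plain breadth-first search already finds shortest paths in time linear in the number of tiles; the grid below is a made-up toy map, not anything from the game:

```python
from collections import deque

def bfs_path(grid, start, goal):
    """Shortest path on a 4-connected grid ('#' = wall).
    Runs in time linear in the number of tiles."""
    rows, cols = len(grid), len(grid[0])
    came_from = {start: None}
    queue = deque([start])
    while queue:
        r, c = queue.popleft()
        if (r, c) == goal:
            path, node = [], goal
            while node is not None:
                path.append(node)
                node = came_from[node]
            return path[::-1]
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] != "#" and (nr, nc) not in came_from:
                came_from[(nr, nc)] = (r, c)
                queue.append((nr, nc))
    return None  # goal unreachable

# Hypothetical 4x5 map: start at top-left, goal at bottom-right.
grid = ["....#",
        ".##..",
        "...#.",
        "#...."]
print(bfs_path(grid, (0, 0), (3, 4)))
```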
I mostly agree with Gwern; they're right that there are ways around complexity classes, either through cleverness and solving a different problem, or by scaling hardware and writing more efficient programs.
They conclude their essay by saying:
at most, they demonstrate that neither humans nor AIs are omnipotent
and I think that's basically what I was trying to get at. Complexity provides a soft limit in some circumstances, and it's helpful to understand which points in the world impose that limit and which don't.
I love this idea, but I have a logistical concern: it may be difficult or even impossible to reserve the island for the time we want.
Small edit: $100,000,000 per human genome to $1,000 per genome is 5 orders of magnitude, not 6 (10^8 / 10^3 = 10^5).
I'm looking forward to Part 2! (and Part 3?)
Yes, this is basically what people are doing.
For those who don't want to break out a calculator, Wikipedia has it here:
https://en.wikipedia.org/wiki/Equal_temperament#Comparison_with_just_intonation
You can see the perfect fourth and perfect fifth are very close to 4/3 and 3/2 respectively. This is basically a coincidence, and we use 12 notes per octave because these almost-nice fractions exist. A major scale uses the 2212221 pattern because that hits all the best matches with low denominators, skipping 16/15 but hitting 9/8, for example.
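If you'd rather compute the comparison than read the table, here's a small sketch; the set of just-intonation ratios is my own shortlist rather than the full Wikipedia listing:

```python
import math

# Compare 12-tone equal temperament (2**(k/12)) against a few
# just-intonation ratios, measuring the mismatch in cents.
just_ratios = {
    2: 9/8,    # major second
    4: 5/4,    # major third
    5: 4/3,    # perfect fourth
    7: 3/2,    # perfect fifth
    9: 5/3,    # major sixth
    12: 2/1,   # octave (exact by construction)
}

for k, ratio in sorted(just_ratios.items()):
    tempered = 2 ** (k / 12)
    cents_off = 1200 * math.log2(tempered / ratio)
    print(f"{k:2d} semitones: {tempered:.4f} vs {ratio:.4f} ({cents_off:+.1f} cents)")
```

The fourth and fifth come out roughly 2 cents off, which is why they sound so clean in equal temperament.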
Imagine you had an oracle which could assess the situation an agent is in and produce a description of an ML architecture that would correctly "solve" that situation.
For some strong versions of this oracle, I think we could build the ML component from the architecture description with modern methods, and that the combination could effectively act as AGI over a wide range of situations. It would likely be insufficient for linguistic tasks.
I think that's what this article is getting at. The author is someone from Uber. Does anyone know of other articles written about this line of thinking?
I have a question about this chart. It looks like you fit a double-exponential function to it (the dashed line), given that the y-axis is already exponential. Why didn't you fit a regular exponential instead, like the red line? It seems like if you did, the projections for the whole article would be years slower.
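To make the comparison concrete, here's the kind of fit I mean, using made-up numbers in place of the chart's actual data points; on a log y-axis a single exponential is a straight line while a double exponential keeps curving upward, so the two extrapolations can diverge by years:

```python
import numpy as np
from scipy.optimize import curve_fit

# Made-up (year, value) points standing in for the chart's data.
years = np.array([2012, 2014, 2016, 2018, 2020, 2022, 2024], dtype=float)
values = np.array([1e2, 4e2, 2e3, 1e4, 8e4, 7e5, 8e6])

t = np.log10(values)   # work on the log axis, as the chart does
x = years - years[0]

# Single exponential: a straight line on the log axis.
slope, intercept = np.polyfit(x, t, 1)

# Double exponential: log(value) is itself exponential in time.
def log_double_exp(x, a, b, c):
    return a + b * np.exp(c * x)

(a, b, c), _ = curve_fit(log_double_exp, x, t, p0=(2.0, 1.0, 0.1), maxfev=10000)

# Extrapolate both fits a few years out and compare.
for future in (2026.0, 2028.0, 2030.0):
    fx = future - years[0]
    single = 10 ** (intercept + slope * fx)
    double = 10 ** log_double_exp(fx, a, b, c)
    print(f"{future:.0f}: single-exp ~{single:.2g}, double-exp ~{double:.2g}")
```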
Overall, great article. It moved me toward finding this timeline more plausible.