If your goal is to get to your house, there is only one thing that will satisfy the goal: being at your house. There is a limited set of optimal solutions that will get you there. If your goal is to move as far away from your house as possible, there are infinite ways to satisfy the goal and many more solutions at your disposal.
Natural selection is a "move away" strategy: it only seeks to avoid death, not to move towards anything in particular, which makes the class of problems it can solve much more open-ended. Gradient descent is a "move towards" strategy: if a solution would help it reach the goal but does not lie in the direction of the gradient, it mostly won't reach that solution without help or modification. This limitation of GD is well known, and the random-search, brute-force nature of evolution is inherently more versatile; it is why the ML industry uses evolutionary algorithms to attack global optimisation problems that GD cannot solve.
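A minimal sketch of the difference, using a toy objective of my own choosing (a double well, not anything from the post): gradient descent started in the wrong basin rolls to the nearest minimum and stays there, while a simple (1+1)-style mutate-and-keep-the-best search can jump between basins.

```python
import random

# Toy objective: a double well with a shallow local minimum near x = +0.96
# and the global minimum near x = -1.04.
def f(x):
    return (x ** 2 - 1) ** 2 + 0.3 * x

def grad(x):
    return 4 * x * (x ** 2 - 1) + 0.3

# Gradient descent: a "move towards" strategy. Started in the wrong
# basin, it converges to the nearest minimum and never leaves it.
x = 1.0
for _ in range(2000):
    x -= 0.01 * grad(x)

# Evolution-style search: keep one candidate, mutate it, accept any
# improvement. Large random mutations can jump between basins.
random.seed(0)
best = 1.0
for _ in range(2000):
    cand = best + random.gauss(0, 1.0)
    if f(cand) < f(best):
        best = cand

print(f"GD:        x = {x:.3f}, f = {f(x):.3f}")        # stuck in the local basin
print(f"Evolution: x = {best:.3f}, f = {f(best):.3f}")  # reaches the global basin
```

This is of course a caricature (real evolutionary algorithms use populations, crossover, and adaptive mutation), but it shows the structural point: GD only ever follows the local slope, while mutation-based search has nonzero probability of reaching any region.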
> Gradient descent by default would just like do, not quite the same thing, it's going to do a weirder thing, because natural selection has a much narrower information bottleneck. In one sense, you could say that natural selection was at an advantage, because it finds simpler solutions.
This is silly because it's actually the exact opposite. Gradient descent is incredibly narrow. Natural selection is the polar opposite of that kind of optimisation: an organism (or even a computer) can come up with a complex solution to almost any problem, given enough time to evolve. Evolution routinely overcomes global optimisation problems that are mathematically intractable for gradient descent without serious modifications, and possibly even with them. It is the 'alkahest' of ML, even if it is slow and not as popular.
If AI behaves identically to me but our internals are different, does that mean I can learn everything about myself from studying it? If so, the input->output pipeline is the only thing that matters, and we can disregard internal mechanisms. Black boxes are all you need to learn everything about the universe, and observing how the output changes for every input is enough to replicate the functions and behaviours of any object in the world. Does this sound correct? If not, then clearly it is important to point out that the algorithm is doing Y and not X.
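To make the black-box point concrete, here is a toy illustration (the functions and names are mine, not the post's): two implementations with identical input→output behaviour whose internals are completely different. Observing outputs alone cannot distinguish them.

```python
# Two "black boxes" with identical input -> output behaviour
# but very different internal mechanisms.

def double_compute(n: int) -> int:
    # Internally: arithmetic.
    return n * 2

TABLE = {n: n + n for n in range(100)}

def double_lookup(n: int) -> int:
    # Internally: a precomputed lookup table, no multiplication at all.
    return TABLE[n]

# Behaviourally indistinguishable on the whole observed domain.
assert all(double_compute(n) == double_lookup(n) for n in range(100))
```

If behavioural equivalence were all that mattered, there would be nothing to learn by opening either box; the fact that we obviously can learn something (one multiplies, one remembers) is the point.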
> AIs that are superhuman at just about any task we can (or simply bother to) define a benchmark for
This is just a false claim. Seriously, where is the evidence? We have AIs that are superhuman at any task we can define a benchmark for? That's not even true in the digital world, let alone in the world of mechatronic AIs. Once again I will be saving this post and coming back to it in 5 years to point out that we are not all dead. This is getting ridiculous.
If the Author believes what they've written then they clearly think that it would be more dangerous to ignore this than to be wrong about it, so I can't really argue that they shouldn't be person number 1. It's a comfortable moral position you can force yourself into though. "If I'm wrong, at least we avoided total annihilation, so in a way I still feel good about myself".
I see this particular kind of prediction as a kind of ethical posturing, and I can't in good conscience let people make them without some kind of accountability. People have been paid millions to work on predictions similar to these. If they are wrong, they should be held accountable in proportion to whatever cost they have incurred on society, big or small, financial or behavioural.
If they are wrong, I don't want anyone brushing these predictions off as silly mistakes or simple errors in models, or rationalising them away: "That's not actually what they meant by AGI", or "It was better to be wrong than say nothing, please keep taking me seriously". Sometimes mistakes are made because of huge fundamental errors in understanding across an entire subject, and we need a record of that for reasons more important than fun and games. So by all means be the first kind of person, but, you know, people are watching is all.
I have saved this post on the internet archive[1].
If in 5-15 years the prediction does not come true, I would like it to be saved as evidence of one of the many serious claims that world-ending AI will arrive on very short timelines. I think the author has given more than enough detail on what they mean by AGI and what it might look like, so it should be obvious whether or not the prediction comes true. In other words: no rationalising past this or taking it back. If this is what the author truly believes, they should have a permanent record of their ability to make predictions.
I encourage everyone to save posts similar to this one in the internet archive. The AI community, if there is one, is quite divided on issues like these, and even among groups that are in broad agreement there are disagreements on details. It will be very useful to have a public archive of who made what claims so we know who to avoid and who to take seriously.
[1] https://web.archive.org/web/20221020151610/https://www.lesswrong.com/posts/K4urTDkBbtNuLivJx/why-i-think-strong-general-ai-is-coming-soon