I agree with everything in this post, including your updated EOY 2029 timeline, but only under your definition: that 95% of remote labor could by then be done by AI. I don't think there is enough crossover between AI being able to do the duties of remote SWE roles and AI automating R&D. Every LLM right now struggles with complex low-level codebases, and from my understanding, roles dealing with low-level code are usually in-office rather than remote, while a lot of R&D involves exactly that kind of code. I've used the best LLMs in an IDE, with all the recent context-window improvements and features, and they still make mistakes regularly, even on my own side projects: physics simulations, code with more complex parent-child relationships and structures, and anything multi-threaded. When building new projects from the ground up that involve low-level code, they clearly struggle, and researchers such as Ilya Sutskever and Richard Sutton theorise that reaching this form of AGI may require a different architecture entirely. For the most part, recent improvements in AI have come from scaling, while optimisations to the models themselves have had much less influence, which points in the same direction.
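To make concrete the kind of "parent-child relationships plus threading" I mean, here's a minimal sketch I wrote for illustration (it's not from the post, and the names are mine): a small tree with owning forward links and non-owning back-pointers, grown from two threads. The subtleties here, who owns the back-pointer, which mutex guards which children, whether a `Node*` survives a vector reallocation, are exactly where I see generated code slip.

```cpp
#include <cstddef>
#include <iostream>
#include <memory>
#include <mutex>
#include <string>
#include <thread>
#include <vector>

struct Node {
    explicit Node(std::string n) : name(std::move(n)) {}

    std::string name;
    Node* parent = nullptr;                      // non-owning back-pointer
    std::vector<std::unique_ptr<Node>> children; // owning forward links
    std::mutex mtx;                              // guards this node's children

    // Lock only the parent being mutated, so separate subtrees don't contend.
    Node* addChild(std::string childName) {
        std::lock_guard<std::mutex> lock(mtx);
        children.push_back(std::make_unique<Node>(std::move(childName)));
        children.back()->parent = this;
        // Safe to hand out: the Node lives on the heap via unique_ptr,
        // so later vector reallocations don't invalidate this pointer.
        return children.back().get();
    }

    std::size_t countSubtree() {
        std::lock_guard<std::mutex> lock(mtx);
        std::size_t n = 1;
        for (auto& c : children) n += c->countSubtree(); // locks child, not us
        return n;
    }
};

int main() {
    Node root("root");

    // Two threads grow different subtrees concurrently.
    std::thread a([&] {
        Node* left = root.addChild("left");
        for (int i = 0; i < 100; ++i) left->addChild("l" + std::to_string(i));
    });
    std::thread b([&] {
        Node* right = root.addChild("right");
        for (int i = 0; i < 100; ++i) right->addChild("r" + std::to_string(i));
    });
    a.join();
    b.join();

    std::cout << "nodes: " << root.countSubtree() << "\n"; // expect 203
    return 0;
}
```

This toy version is correct, but small variations (raw owning pointers, a single global lock, recursing while holding the parent's mutex with a recursive traversal that revisits it) are the kinds of mistakes I keep seeing in generated low-level code.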
But if by R&D you mean higher-level prototyping, experimentation, and applied ML, I'd agree completely. I just don't see such a tight correlation between AI being able to do remote roles, that automating high-level R&D, and that in turn satisfying every definition of AGI. For it to be AGI, it would have to be able to do it all; only then would all the definitions be met.