I suspect we are close to a generality threshold necessary for accelerating recursive self-improvement. I just hope that if some AI lab crosses this boundary, they have the wisdom to slow down and work on alignment, not hammer down the accelerator in hopes of fame and money.
I hope we will see multi-modal models. To me it will be very important for the future of AI/AGI development to see whether we get significant transfer/synergy learning (from one modality to another).
If these synergy effects occur (and my prediction is that they will not, to a significant degree), this would push my AI predictions toward shorter time horizons for useful AI and AGI.
If we don't see transfer learning, it would significantly lengthen my AGI time horizon, because it would indicate to me that we will have to come up with additional new techniques to push our AI models toward generality.
hope you brought your spaghetti sauce
I don't get the joke tbh
my timelines are "we're gonna make some amount of PASTA in 2023, pretty much no matter what". I'm still not sure if the big one gets trained, but there are more and more medium-large models that will be incredibly impactful. "chatgpt but for biochem" type of stuff.
Any comments now that it’s out?