I suspect we are close to a generality threshold necessary for accelerating recursive self-improvement. I just hope that if some AI lab crosses this boundary that they have the wisdom to slow down and work on alignment, not hammer down the accelerator in hopes of fame and money.

GPT-4 will blow our minds in terms of what the limitations of AI are.

Any comments now that it’s out?

Michael Cheers · 10d
On the right path, but not as rad as I thought it would be. I heard GPT-4 only has around 80 billion parameters, so a much bigger scale-up is necessary to really blow our minds the way I had hoped it would with 10 trillion parameters. Anyone else have any thoughts?
the gears to ascension · 10d
Where did you hear 80 billion params?

I hope we will see multi-modal models. To me, it will be very important for the future of AI/AGI development to see whether we get significant transfer/synergy learning from one modality to another.
If these synergy effects occur (my prediction is that they will not occur to a significant degree), this would push my AI predictions towards shorter time horizons for useful AI and AGI.
If we don't see transfer learning, it would significantly lengthen my AGI time horizon, because it would indicate to me that we will have to come up with additional, new techniques to push our AI models towards generality.

hope you brought your spaghetti sauce

I don't get the joke tbh

My timelines are "we're gonna make some amount of PASTA in 2023, pretty much no matter what". I am still not sure if the big one gets trained, but there are more and more medium-large models that will be incredibly impactful — "ChatGPT but for biochem" type of stuff.