John Carmack, confirmed GOAT video game developer, is going to take a crack at AGI.
https://www.facebook.com/permalink.php?story_fbid=2547632585471243&id=100006735798590
For those of us who have Facebook blocked or don't have an account, can you copy-paste it or summarise it? I'm curious who he'll be working with (DeepMind, OpenAI, independent, etc.).
Here it is:
Thanks. And very cool. Someone should send him the AI Alignment Forum sequences, in case he wants some interesting subproblems to think about.
My mind skipped over this the first time, but hey, look! He's using Eliezer's term. Interesting. Kinda sad, given that the term describes something you should never do. Not that you shouldn't work on AI, but you should work on AI because it is very likely to be a big deal, and good researchers have a large impact on how a field and an engineering effort play out. (I agree this domain is quite hard, but it's not as impossibly hard as brute-forcing a random password with a hundred ASCII characters.)
I'd imagine he was reaching for a term for "generalised Pascal-like situation". Calling it a Pascal's wager wouldn't work, because Pascal's wager proper wasn't a valid argument.
Hm, I guess it is a bit sad that there isn't a term for this.