Which is more likely to result in a catastrophic outcome: a world war to reach AGI or AGI itself?
I believe a catastrophic loss of human life is much more likely to result from a world war in which factions race to ensure their side reaches AGI first than from AGI itself. I think it's safe to say the worst-case AGI outcome would be much worse than this potential WWIII (no humans left vs. some/few), but it seems there is very little discussion about preventing the war-to-reach-AGI scenario.
I think that is the right call. Anecdotal bad outputs would probably go viral and create a media firestorm, with the stochastic parrots Twitter crowd beating them over the head along the way. Not sure you can ever get it perfect, but they should probably get close before releasing it publicly.
Very nice post. One comment I'd add is that I have always assumed that by the time AGI is here, humans will have already created many of the things you say it would need time to develop. I'm pretty sure we will have fully automated factories, autonomous military robots capable of operating in close quarters, near-perfect physics simulations, etc. by the time AGI is achieved. Take these robots, for example: https://say-can.github.io/ I think an AGI could potentially start making rapid advancements with the robots shown there. 15-20 years from now, do you really think an AGI would need to do much alteration to the top Chinese or American AI technologies?
Thanks for the response. Definitely going to dive deeper into this.
Thank you for these videos.
Total noob here, so I'm very thankful for this post. Anyway, why is there such certainty among some that a superintelligence would kill its creators even when they pose zero threat to it? Any resources on that would be appreciated. As someone who loosely follows this stuff, it seems people assume AGI will be this brutal, instinctual killer, which is the opposite of what I've guessed.