LESSWRONG

scott loop
14160 karma

Posts

scott loop's Shortform · 3y

Comments

scott loop's Shortform
scott loop · 3y · 10

Which is more likely to result in a catastrophic outcome: a world war to reach AGI or AGI itself?

I believe a catastrophic loss of human life is much more likely to come from a world war in which factions race to ensure their side reaches AGI first than from AGI itself causing a mass loss of human life. It's safe to say the worst-case AGI outcome would be far worse than this potential WWIII (no humans left vs. some/few), but there seems to be very little discussion about preventing the race-to-AGI war scenario compared to AGI safety. LW probably wouldn't be the place for those discussions (maybe foreign policy think tanks, I'm not sure), but I was curious what other users here felt had higher odds of posing an existential threat.

I've had this notion for a while, but what spurred me to post was the US government blocking Nvidia from providing AI chips that could be used by the Chinese government, and what that signals about US willingness to militarily defend Taiwan.

[Linkpost] Solving Quantitative Reasoning Problems with Language Models
scott loop · 3y · 30

I think that is the right call. Anecdotal bad outputs would probably go viral and create a media firestorm, with the stochastic parrots Twitter crowd beating them over the head along the way. Not sure you can ever get it perfect, but they should probably get close before releasing it publicly.

Contra EY: Can AGI destroy us without trial & error?
scott loop · 3y · 70

Very nice post. One comment I'd add: I have always assumed that by the time AGI is here, humans will have already achieved many of the things you say it would need time to create. I'm pretty sure we will have fully automated factories, autonomous military robots that can operate in close quarters, near-perfect physics simulations, etc. by the time AGI is achieved.

Take robots, for example. I think an AGI could potentially start making rapid advancements with the ones shown here: https://say-can.github.io/

15-20 years from now, do you really think an AGI would need to do much alteration to the top Chinese or American AI technologies?

AGI Safety FAQ / all-dumb-questions-allowed thread
scott loop · 3y · 10

Thanks for the response. Definitely going to dive deeper into this.

AGI Safety FAQ / all-dumb-questions-allowed thread
scott loop · 3y · 10

Thank you for these videos.

AGI Safety FAQ / all-dumb-questions-allowed thread
scott loop · 3y · 80

Total noob here, so I'm very thankful for this post. Anyway, why is there such certainty among some that a superintelligence would kill its creators when they are zero threat to it? Any resources on that would be appreciated. As someone who loosely follows this stuff, it seems people assume AGI will be this brutal, instinctual killer, which is the opposite of what I would have guessed.
