[$20K in Prizes] AI Safety Arguments Competition

Thanks! I'd love to know which points you were uncomfortable with...

[$20K in Prizes] AI Safety Arguments Competition

Here's my submission, it might work better as bullet points on a page.

AI will transform human societies over the next 10-20 years. Its impact will be comparable to that of electricity or nuclear weapons. Like electricity, AI could improve the world dramatically; like nuclear weapons, it could end it forever. Like inequality, climate change, nuclear weapons, or engineered pandemics, AI existential risk is a wicked problem. It calls upon every policymaker to become a statesperson: to rise above the short-term, narrow interests of party, class, or nation and make a contribution to humankind as a whole. Why? Here are 10 reasons.

(1) Current AI problems, like racial and gender bias, are canaries in a coal mine: they portend even worse future failures.

(2) Scientists do not understand how current AI actually works. By contrast, engineers know why bridges collapse and why Chernobyl failed; there is no similar understanding of why AI models misbehave.

(3) Future AI will be dramatically more powerful than today's. In the last decade, the pace of development has exploded: current AI performs at a superhuman level in games like chess and Go, massive language models (like GPT-3) can write convincing college essays, and deepfakes of politicians already circulate.

(4) These very powerful AIs might develop their own goals, which is a serious problem if they are connected to electrical grids, hospitals, social media networks, or nuclear weapons systems.

(5) The competitive dynamics are dangerous: the US-China strategic rivalry means neither side has an incentive to go slowly or be careful. Domestically, tech companies are in an intense race to develop and deploy AI across all aspects of the economy.

(6) The current US lead in AI might be unsustainable. As an analogy, think of nuclear weapons: in the 1940s, the US hoped to keep its atomic monopoly. Today there are 9 nuclear powers with 12,705 weapons.

(7) Accidents happen: in the nuclear case alone, there have been over 100 accidents and proliferation incidents involving nuclear power and weapons.

(8) AI could proliferate virally across globally connected networks, making it more dangerous than nuclear weapons, which are visible, trackable, and less useful than powerful AI.

(9) Even today's moderately capable AIs, if used effectively, can entrench totalitarianism, manipulate democratic societies, or enable repressive security states.

(10) There will be a point of no return, after which we may not be able to recover as a species.

So what is to be done? Negotiate a global, temporary moratorium on certain types of AI research. Enforce this moratorium through intrusive domestic regulation and international surveillance. Lastly, avoid the historical policy errors made on climate change and on the terrorist threat post-9/11: politicians must ensure that the military-industrial complex does not 'weaponise' AI.