Cross-posted from the EA Forum
I'm searching for examples of self-governance efforts to reduce technology risk. Do people have cases to suggest?
The more similar to AI development the better. That is, efforts by companies or academic communities to address risks that affect third parties, with minimal involvement from governments beyond basic law and order.
Examples from academia:
- Leo Szilard and other physicists coordinating to withhold publication of nuclear fission research so it would not reach Germany, 1939-1940
- Various efforts in biotechnology:
  - Asilomar conference on recombinant DNA, 1975
  - Mutations Database Initiative, 1999-2001
  - Synthetic biology conferences SB1.0 and SB2.0, 2004 and 2006
  - Biology journals discussing publication restrictions, 2001-2011
Examples from the commercial sector:
- DNA synthesis companies screening orders and buyers, 2004-2012
- Efforts by nanotechnology companies in the US, UK, and Europe, 2004-2007
Artificial intelligence could likewise be turned to dangerous ends, yet so far there is no self-organised group of companies working to keep the technology out of the wrong hands, unfortunately.