In this article, regarding the looming global AI arms race, Zoltan Istvan writes:
As the 2016 US Presidential candidate for the Transhumanist Party, I don't mind going out on a limb and saying the obvious: I also want AI to belong exclusively to America. Of course, I would hope to share the nonmilitary benefits and wisdom of a superintelligence with the world, as America has done for much of the last century with its groundbreaking innovation and technology. But can you imagine for a moment if AI was developed and launched in, let's say, North Korea, or Iran, or increasingly authoritarian Russia? What if another national power told that superintelligence to break all the secret codes and classified material that America's CIA and NSA use for national security? What if this superintelligence was told to hack into the mainframe computers tied to nuclear warheads, drones, and other dangerous weaponry? What if that superintelligence was told to override all traffic lights, power grids, and water treatment plants in Europe? Or Asia? Or everywhere in the world except for its own country? The possible danger is overwhelming.
Now, I expect that many Americans, on reflection, would at least partly agree with the above statement - and that should be concerning.
Consider the issue from the perspective of Russian, Chinese (or really any foreign) readers with similar levels of national pride.
An equivalent, positionally reflected statement from a foreign perspective might read like this:
I also want AI to belong exclusively to China. Of course, I would hope to share the nonmilitary benefits and wisdom of a superintelligence with the world, as China has done for much of this century with its groundbreaking innovation and technology. But can you imagine for a moment if AI was developed and launched by, let's say, the US NSA, or Israel, or India? ...
On a related note, there was an interesting panel recently with Robin Li (CEO of Baidu), Bill Gates, and Elon Musk. They spent a little time discussing AI superintelligence. Robin Li mentioned that his new head of research - Andrew Ng - doesn't believe superintelligence is an immediate threat. In particular, Ng said: "Worrying about AI risk now is like worrying about overpopulation on Mars." Li also mentioned that he has been advocating for a large Chinese government investment in AI.