This is not a call for espionage, but an analysis of another strategy
Von Neumann's strategy for solving the problem of global nuclear proliferation is widely known: strike tomorrow. That is, conquer the entire world during the brief window in which only one side possesses nuclear weapons. The idea is popular among American readers partly because US interests correlate with it: it would be good for the world and good for us. (I will not discuss here whether von Neumann actually advocated this or developed it in detail; there are doubts. Nor will I discuss how feasible it was, given that the USSR would have responded with a conventional attack in Europe, so the bulk of the nuclear strikes would have fallen on Western Europe against the advancing armies; that the US lacked reliable information about whether the USSR already had an atomic bomb, since the USSR had claimed to possess one since 1947 while many in the West believed it would not have one until 1960, which seemed to leave time for a von Neumann attack; or, finally, that before 1949 the US stockpile of atomic bombs may have been too small to reliably halt the Soviet nuclear project.)
My point is that an alternative project for solving the nuclear weapons problem was running in parallel: the effort of the Rosenbergs and several others to pass nuclear secrets to the USSR as quickly as possible, so that the two sides would be equal and a balance would hold between them. We know the result: this strategy has worked for nearly 80 years without nuclear war. (There were other motives too, such as sympathy for communism, but we are simplifying.)
Both of these strategies are applicable to the AI race.
The von Neumann strategy (N-strategy) means building American AI as quickly as possible to outpace China (and likewise building Grok to outpace OpenAI, etc.).
The Rosenberg strategy (R-strategy) assumes that defectors will share AI secrets among AI companies, eroding any single company's advantage. As a result, everyone reaches AGI roughly simultaneously, and the world ends up with multiple AIs rather than a single paperclip maximizer.
First, since multiple AIs would have more diverse goals, there is a greater chance that at least one of them would be relatively aligned with humanity's goals. Second, if there are multiple AIs, they will compete more for human attention and approval and will have a stronger need to demonstrate trustworthiness to one another, and so they will care more about human values. If one of them starts killing people on its territory, the others will see that it has defected against its own creators.
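The first point can be made concrete with a toy calculation. Suppose each AI is aligned with probability p, independently of the others (both the value of p and the independence assumption are purely illustrative, not claims of this analysis); then with n AIs the chance that at least one is aligned is 1 - (1 - p)^n, which grows quickly with n. A minimal sketch:

```python
# Toy model: chance that at least one of n AIs is aligned,
# assuming each is independently aligned with probability p.
# Both p and the independence assumption are purely illustrative.
def p_at_least_one_aligned(p: float, n: int) -> float:
    return 1 - (1 - p) ** n

for n in (1, 3, 5, 10):
    print(f"n={n}: {p_at_least_one_aligned(0.1, n):.3f}")
# n=1: 0.100, n=3: 0.271, n=5: 0.410, n=10: 0.651
```

Even under pessimistic per-AI odds, multiplicity buys a non-trivial chance that some AI is aligned, which is the core of the argument above.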
Whereas the N-strategy ends with one AI's victory and the inevitable death of everyone, the R-strategy is more unpredictable and offers a chance of survival, though we cannot say exactly how that survival would come about.
The R-strategy is also much simpler and cheaper: data exchange and employee movement between companies happen constantly, Twitter buzzes with ideas, and GitHub is full of secrets longing to be heard. The moat is constantly eroding. Again, I am not calling for industrial espionage here; I want to draw attention to the forces that already level the playing field between companies.
The R-strategy makes sense only if we are confident that the first AI would certainly destroy us; then we trade certain death for a vague chance of surviving in chaos. Conversely, if we believed it quite likely that a friendly AI could be created first and remain the only one, the R-strategy would be a major mistake.
Finally, the R-strategy is local: it relies on the local actions of individuals (and is therefore subject to the unilateralist's curse). The N-strategy also starts locally, at the company level rather than the individual level, but ultimately becomes global, since it implies world domination.