Imagine a research team makes a breakthrough while experimenting with neural networks. They train a model with only 1 billion parameters, and it displays general intelligence exceeding GPT-3's. They conclude that a 100 billion parameter model, which could be trained for a few million dollars, would very likely surpass human intelligence.
The research team would likely decide they can’t publish their breakthrough, since that would trigger an arms race between nations and organizations to create an AGI. Further, the government of the country they reside in would likely force them to create an AGI as quickly as possible, disregarding their safety concerns.
So, what should they do in this situation?
From my search,...