I don't think passively threatening an AI with "perform or die" will yield a model that is less motivated to harm humanity.