Malicious non-state actors and AI safety
Here, I discuss the possibility of malicious non-state actors causing catastrophic suffering or existential risk. This may be a significant but neglected issue. Consider the sort of person who becomes a mass shooter. They're malicious, and they're willing to incur large personal costs to cause large amounts of suffering. However, mass shootings...
Could you explain how you came to this conclusion? What do you think your fundamental roadblock would be: getting the code for AGI, or beating everyone else to superintelligence?
It's important to remember that there may be quite a few people who would act...