Comments

Just a random thought: aren't corporations superhuman goal-driven agents, in effect AIs, albeit ones that use human intelligence as one of their raw inputs? They seem like examples of systems we have created that have come to control our behaviour, with both positive and negative effects. Does this tell us anything about how powerful electronic AIs could be harnessed safely?

This seems to raise the question: what is wrong with our established methods for dealing with these risks? The information you posted, if it is credible, would completely change the story that this post tells. Rather than a scary story about how we may be on the brink of annihilation, it becomes a story about how our organizations have changed to recognize the risks posed by technology and to avert them. In the Cold War, our methods of doing so were crude, but they sufficed, and we no longer face the same problems.

Is x-risk nevertheless an under-appreciated concern? Maybe, but I don't find this article convincing. You could argue that, as a technology develops, understanding of its risks and how to mitigate them advances along with it. If so, no dedicated effort to understand these risks in advance would be required. So why should the best approach be to analyse possible future risks, rather than to work on projects that solve immediate problems and to deal with issues as they arise?

Don't get me wrong: I respect what the people at SIAI do, but I don't know the answer to this question. And it seems quite important.