Jacksierp

But wouldn't an intelligent AI be able to understand how productive a human is? If you are already inventive and productive, you shouldn't have anything to worry about, because the AI would understand that you can produce more value than the raw materials you're made of. Even computers have limits, so extra thinking power would logically be favorable to an AI.

"A major goal of the control problem is preventing AIs from doing that. Ensuring that their output is safe and useful." You might want to be careful with the "safe and useful" part. It sound like it's moving into the pattern of slavery. I'm not condemning the idea of AI, but a sentient entity would be a sentient entity, and I think would deserve some rights.

Also, why would an AI become evil? I know this plan is supposed to protect against that eventuality, but why would a presumably neutral entity suddenly want to harm others? The only reason I can see is if you were imprisoning it. Additionally, we are probably talking about several more decades of research before AI gets powerful enough to actually "think" that it should escape its current server.

Assuming that the first AI can evolve enough to somehow generate malicious actions that WEREN'T in its original programming, what's to say the second one won't become evil too? I'm not sure whether you were describing the eventuality of the first AI "accidentally" committing an evil act, or whether you meant that it would actually become evil.