Alex123


Alex123-30

Because we are not an SI, we don't know what it will do or why. It might.

Alex123-10

I have read the book several times already, and it makes me more and more pessimistic. Even if we build an SI to follow CEV, at some point it might decide to drop it. It is an SI, above all; it can find a way to do anything. Yet we can't survive without SI, so the CEV proposal is as good and as bad as any other proposal. My only hope is that moral values are as fundamental as the laws of nature. Then a very superintelligent AI would be very moral, and we would be saved. If not, it could create a Hell for all people and keep them there for eternity (meaning that even death could be a better way out, yet the SI would not let people die). What should we do?

It's really good. People are a superintelligence relative to horses, and horses lost 95% of their jobs. With SI standing in the same relation to people, people will lose no smaller a percentage of jobs. We have to accept it as something that is almost certainly coming. It will be a painful but necessary change. So many people spend their lives on very simple jobs (like cleaning, selling, etc.).

Unless somebody specifically pushes for a multipolar scenario, it is unlikely to arise spontaneously. Given our military-oriented psychology, any SI will first be considered for military purposes, including preventing others from achieving SI. However, a smart group of people or organizations might purposefully multiply instances of a near-ready SI in order to create competition, which could increase our chances of survival. Creating a social structure of SIs might make them socially aware and tolerant, and that tolerance might extend to people.

Alex123-10

Maybe people shouldn't build superintelligence at all? Narrow AIs are just fine, if you consider the progress so far. Self-driving cars will be good; then applications using Big Data will find cures for most illnesses, then solve starvation and other problems by 3D-printing food and everything else, including rockets to deflect asteroids. Just give it 10-20 more years. Why create a dangerous SI?

Before reaching "then do nothing", the AI might exhaust all the matter in the Universe trying to prove that it made exactly 10 paperclips.