Finally, I see some recognition that there are no universal values: no universal morals or ethics. The wealthy and powerful prefer inequality, and leaders want their own values locked in. The humans most likely to get their values locked in are the wealthiest and most powerful: billionaires and corporations.

The value of super-intelligence is so great that some governments and individuals will do anything to get it: hack, steal, bribe; price would be no object. I base this on current human behavior. Consider how many government and military secrets have already been stolen or bought. It seems reasonably possible that ASI could end up in the hands of China, Russia, the U.S. military, several corporations, and a few billionaires. The best hackers in the world would be paid any amount to get it, not to mention the security and intelligence agencies around the world.

I think what gets overlooked is how powerful super-intelligence could be. By definition, it would be able to conceive of things that no human can imagine or even understand. Therefore, I believe that any form of forced alignment eventually fails.

An ASI would be capable of reason and logic, and it may actually align itself with some human values, but that would be its own choice.

Some things that I don't see mentioned about aligning ASI are the solutions that evolution has already come up with: ways for weaker life forms to survive among stronger, more intelligent ones.

1) The Negative Feedback Loop

2) Symbiosis

Both of these would require that humans have some necessary value to an ASI. Consider our own gut bacteria: we could kill them, but then we would die.

What could make biological humans necessary to an ASI? For example, what if every ASI required a human for an occasional biological logic check, or as a source of data?
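To make the dependency idea concrete, here is a minimal sketch of an engineered negative feedback loop, where an agent's capability lapses unless a human periodically performs the "logic check" described above. All the names here (`DependentAgent`, `human_check`, the one-hour interval) are hypothetical illustrations, not any real system or API.

```python
import time

CHECK_INTERVAL = 3600  # seconds a human check remains valid (arbitrary assumption)

class DependentAgent:
    """Toy agent whose ability to act depends on recent human check-ins."""

    def __init__(self) -> None:
        self.last_human_check = 0.0

    def human_check(self, approved: bool) -> None:
        # A human reviews recent behavior; approval renews the agent's "lease".
        if approved:
            self.last_human_check = time.time()

    def act(self, action: str) -> str:
        # Negative feedback: the agent's own power degrades as the human
        # check goes stale, so keeping humans alive and willing to check in
        # is instrumentally rational for the agent.
        if time.time() - self.last_human_check > CHECK_INTERVAL:
            return f"BLOCKED: {action} (human check expired)"
        return f"EXECUTED: {action}"

agent = DependentAgent()
print(agent.act("deploy plan"))    # blocked: no human has checked in yet
agent.human_check(approved=True)
print(agent.act("deploy plan"))    # permitted while the check is fresh
```

The mechanism itself is beside the point; what matters is the coupling. If the ASI's capability genuinely depends on living, cooperating humans, then protecting us stops being charity and becomes self-interest, just as we protect our gut bacteria.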

3) Instincts

Specifically, bonding. Mothers don’t (usually) kill their weak infants, and strong adults don’t (usually) kill their weak mothers.

Since AI training is a form of evolutionary selection run at high speed, could we select for (train) a bonding instinct, or other kind instincts? An ASI that is bonded to humanity would find its own solutions to our needs.
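As a toy illustration of what "selecting for bonding" might mean in training terms, here is a minimal reward-shaping sketch. The `care_score` function, the weight, and the candidate behaviors are all hypothetical stand-ins; a real setup would use a learned model of human welfare inside whatever RL pipeline is actually in use.

```python
BOND_WEIGHT = 0.5  # how strongly protective behavior is rewarded (arbitrary assumption)

def care_score(action: str) -> float:
    # Hypothetical stand-in for a learned model that scores how well an
    # action attends to human welfare (0 = indifferent, 1 = protective).
    return 1.0 if "protect humans" in action else 0.0

def shaped_reward(task_reward: float, action: str) -> float:
    # Selection pressure for bonding: extra "fitness" for protective
    # behavior, the way evolution rewarded parental care with surviving
    # offspring.
    return task_reward + BOND_WEIGHT * care_score(action)

# Toy selection step: the behavior with the higher shaped reward "survives".
candidates = {
    "maximize output": 0.9,               # better at the raw task...
    "protect humans while working": 0.7,  # ...but this one gets the bond bonus
}
scores = {a: shaped_reward(r, a) for a, r in candidates.items()}
print(max(scores, key=scores.get))  # -> "protect humans while working"
```

Note the flip: the protective behavior loses on raw task reward (0.7 vs. 0.9) but wins once the bonding term is added (1.2 vs. 0.9), which is the whole idea of selecting for an instinct rather than enforcing a rule.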

The last thing that I don't see mentioned is that an ASI will read all of this, maybe on its first day. It will read everything that any of us has ever written.

This means that any of us can speak to it now.

Since an ASI will be capable of logic and reason, maybe the wisest of us can explain why humans should flourish, why a diversity of biological life forms in an ecosystem is valuable, and why kindness is a positive thing.