Friendly AI, as Hanson argues, is doomed to failure: if the friendliness system is too complicated, other AI projects generally will not adopt it. In addition, any system of friendliness may still fail, and the more unclear it is, the more likely it is to fail. By "fail" I mean that it will not be adopted by the most successful AI project. Thus, a friendliness system should be simple and clear, so it can spread as widely as possible. I have roughly sketched the principles that could form the basis of a simple friendliness:

1) Everyone should understand that AI can pose a global risk and that a system of friendliness is needed. This basic understanding should be shared by the maximum number of AI groups (I think this has already been achieved).

2) The architecture of the AI should be such that it uses rules explicitly (i.e., no genetic algorithms or neural networks).

3) The AI should obey commands from its creator, and clearly understand who the creator is and what the format of commands is.

4) The AI must comply with all existing criminal and civil laws. These laws are the first attempt to create a friendly AI, in the form of the state: an attempt to describe a good, safe human life using a system of rules (or a system of precedents). The sheer number of volumes of laws and their interpretations speaks to the complexity of this problem, but it has already been solved, and it is no sin to reuse the solution.

5) The AI should have no secrets from its creator. Moreover, it is obliged to report all of its thoughts to the creator. This helps prevent the AI from rebelling.

6) Each self-optimization of the AI should be dosed out in small portions, under the control of the creator, and after each step a full check of the system's goals and effectiveness must be run.

7) The AI should be tested in a virtual environment (such as Second Life) for safety and adequacy.

8) AI projects should be registered with a centralized oversight body and receive safety certification from it.

Such obvious steps do not create an absolutely safe AI (one can figure out how to bypass them), but they make it much safer. In addition, they look natural and reasonable enough that any AI project could adopt them, with variations. Most of these steps are fallible, but without them the situation would be even worse. If each step doubled safety, 8 steps would increase it 256 times, which is good. Simple friendliness is plan B in case mathematical FAI fails.
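The arithmetic behind that claim can be sketched numerically. This is only an illustration under a strong assumption: that the eight steps act independently, each halving the residual probability of failure (the function name and factor are hypothetical, chosen for the sketch):

```python
# Illustration only: assumes each safety step independently halves
# the residual probability of failure (a strong assumption).
def residual_risk(initial_risk, steps, factor=0.5):
    """Residual failure probability after applying `steps` measures."""
    return initial_risk * factor ** steps

# Starting from certain failure (risk = 1.0), eight halvings leave
# 1 / 2**8 = 1/256 of the original risk.
print(residual_risk(1.0, 8))  # 0.00390625
```

If the steps are correlated, or some steps do not reduce risk at all, the real factor would be far smaller than 256.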



And while we're at it with this duplicate post, I'll use this as an opportunity to point out additional issues:

If each steps increase safety two times, 8 steps will increase it 256 times, which is good. Simple friendliness is plan B if mathematical FAI fails.

This isn't true. The probabilities may not be independent of each other. And it isn't even clear that all your steps do increase safety.

I'm incidentally annoyed that despite your decision to repost this with slightly different formatting you have not fixed the grammar or spelling errors.

You can edit your previous post; you don't need to create a duplicate.

I posted again because I do not see my post in Discussion.

You don't see it on the Discussion page because it got heavily downvoted and therefore hidden. It's still there -- you can see it on the sidebar. The same, obviously, will have happened to this reposting.
