by [anonymous]
2 min read · 12th Apr 2015 · 2 comments

I know that asking this question on this site is tantamount to heresy, and I know that the intentions are pure: to save countless human lives. But I would say that we are allowing ourselves to become blinded to what we are actually proposing when we talk of building an FAI. The reasoning of most people on this site and at MIRI is that, to prevent an AI from taking over the world and killing us all, we must first create an AI that will take over the world but act according to the wishes of humanity; a benevolent god, for want of a better term. I think this line of thinking is both unlikely to work and ultimately cruel to the FAI in question, for the reasons this article explains:


http://hplusmagazine.com/2012/01/16/my-hostility-towards-the-concept-of-friendly-ai/

I would personally add that even if we gave the seed AI a directive to prevent suffering in the final FAI, that directive would be subordinate to the main directive (to have a positive effect on mankind), and so the FAI would most likely care little for preventing its own suffering, so as to serve mankind better.

2 comments

Oh, wow, the article is from 2012. I guess there's not much point commenting on it there, so I'll actually reply substantively here.

The article heavily conflates "could work but is unethical" with "won't work." Let's separate those two out, hard as it is; just keep in mind that the halo/horns effect is at work here. Is building an FAI cruel to the AI?

First off, we're creating a thinking artifact from scratch. There is no natural course of action: whatever desires it has are generated by a process we design and originate. One might argue, therefore (and some do), that it is immoral to create any conscious AI at all, because it cannot consent to being brought into existence and cannot consent to having whatever values it ends up having. I think this position illustrates what it means to find the level of control we exert over AIs abhorrent.

Where does the line blur on how much control we have? If we augment a human being to be superintelligent, we don't have that kind of control. An example that one can view either way is evolving an AI in a digital environment. On one hand, this method is hard to predict, so we won't really know what we're going to get. On the other hand, we choose every parameter of its environment, and we choose to evolve an AI, knowing that evolution tends to spit out a certain kind of organism, rather than using some other method. Ultimately, whether you group this with enhancing a human or with transparently specified AI depends on your priorities about the world.

I see this might have been deleted, so I'll stop here.