by [anonymous]

I know that asking this question on this site is tantamount to heresy, and I know that the intentions are pure: to save countless human lives. But I would say that we are allowing ourselves to become blinded to what we are actually proposing when we talk of building an FAI. The reasoning of most of the people on this site and at MIRI is that, to prevent an AI from taking over the world and killing us all, we must first create an AI that will take over the world but act according to the wishes of humanity; a benevolent god, for want of a better term. I think this line of thinking is both unlikely to work and ultimately cruel to the FAI in question, for the reasons explained in this article:


http://hplusmagazine.com/2012/01/16/my-hostility-towards-the-concept-of-friendly-ai/


I personally would add that even if we gave the seed AI a directive to prevent suffering in the final FAI, this directive would still be subservient to the super-directive of CEV; i.e., the FAI may decide during recursive self-improvement that acting according to humanity's wishes is more important than preventing its own suffering.