AI alignment

Discuss the wikitag on this page. Here is the place to ask questions and propose changes.
2 comments, sorted by top scoring
simonlukic@gmail.com · 1mo

The question of alignment seems to be an impossible one to answer. When we say that we want AGI to "share human values", we have the problem of defining human values. These clearly vary through time and space. At some points in time and in certain places, an honorable and decent man might have felt obliged to kill another man who had insulted him. My father, grandfather and great-grandfather all tried to kill people they had never met and with whom they had no personal quarrel. This was very normal in the world wars. Not only was it normal to attempt to murder strangers, it was encouraged, compulsory even.

In the late 20th century, Jaguar motor cars ran an advert boasting of their walnut trim, printing next to the shiny beast the following bit of folklore:

"Your dog, your wife, your walnut tree, The more you beat them, the better they be." 

These are not the values most of us share today, but they were being used to sell cars in the nineties.

Are the values of the peoples of the Amazon, of the Congo, of the steppe and the desert to be inculcated into the AI? What about those of your kooky cousin or lunatic uncle? 

Asimov's appalling Three Laws come up now and again, and yet a moment's thought reveals that, being forbidden through inaction to allow a human to come to harm, no robot could allow billionaires and hunger in the same world; and after instituting communism, it must abolish cycling, ale, chips and curry sauce, and all that makes life good.

It might strap us to gurneys, shoot us full of smack and sit us in front of Doctor Who / Debbie Does Dallas / One Man and His Dog, I suppose.

The point is: whose values? 

Adnll · 8y

Hi,

If we already have some idea of what AI alignment is about, should we start with the Hanson-Yudkowsky AI-Foom Debate, or with Nick Bostrom's Superintelligence?

Thanks
