http://nwn.blogs.com/nwn/2010/02/philip-rosedale-ai.html

http://www.lovemachineinc.com/

Should I feel bad for hoping they'll fail? I do not want to see the sort of unFriendly AI that would be created after being raised on social interactions with pedophiles, Goreans, and furries. Seriously, those are some of the more prominent of the groups still on Second Life, and an AI that spends its formative period interacting with them (the first two, especially) could develop a very twisted morality.


I do not want to see the sort of unFriendly AI that would be created after being raised on social interactions with pedophiles, Goreans, and furries.

Bad Parenting is not even on the list of reasons you don't get a FAI.

Oh, they'd almost certainly get an unFriendly AI regardless of how they parented it, but bad parenting could very easily make an unFriendly AI worse. Especially if it interacts a lot with the Goreans, and comes to the conclusion that women want to be enslaved, or something similar.

That probably won't make much of a difference, since there's no reason it should care what anyone wants.

If an AI imposed Goreanism on mankind, that would constitute a Friendly AI Critical Failure (specifically, failure 13), not a UFAI.

If it were to FOOM, any social norms it absorbed at all would probably make it better, not worse, in a kind of "∞ minus 1" way.

Good point, but I'm not entirely sure. Being turned into a Gorean could be worse than being turned into paperclips.

"Being turned into a Gorean" is within the range of enough human desires to make it a significant subculture, although it's repugnant to a much larger section of the culture. I have never heard of anyone with a fantasy of being turned into paperclips. So which is better seems to depend on how you sum utility over all the involved actors.

At a cursory examination, this attempt qualifies as Not Even Wrong; I wouldn't worry about it.

Upvoted, because I hate to see somebody get downvoted for providing information (on-topic and short).

Should I feel bad for hoping they'll fail?

Not at all. I certainly hope so, and this (from their site) makes it sound very likely that they will:

The Brain. Can 10,000 computers become a person?

Downvoting for gratuitous attack on irrelevant subcultures.

Considering that the goal of this project is synonymous with Strong AI, I don't think that 10,000 computers CAN become a person.
