The real problem with non-consensual sex is that the cute guys/girls would not be able to cope with all the attention...
"In America, a woman gets raped every five minutes. And she is not enjoying it!"
On the subject of computer games (an underrated area for the study of psychology, economics and even AI, IMO):
During the last 3 years, I have spent just over 1000 hours playing World of Warcraft. Why did I choose to spend (some, incl my wife, might say waste) my time on this? I am fairly wealthy and quite fit - just about any fun activity is open to me. So why do I like WoW? And why do 10 million people around the world do the same?
Some important reasons why the game is so pleasurable seem to be:
a) the ultimate goals are pretty clear (so unlike real life...)
b) the "measures of progress" are likewise clear - and there is only one way to go, namely "up"! (again, so unlike real life, apart from possibly "youth" - is that what makes "youth" so good?)
c) the rewards are clear - and are earned so progressively that playing the game seems akin to wireheading (the trickle of XPs, "gold" and new items continuously stimulates some pleasure centre or other).
(Good play involves fairly sophisticated analysis, strategy and tactics, which maintains interest... but is beside the point I want to make. The game is attractive to good and poor players alike.)
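Point (c) can be made concrete with a toy model. This is purely my own sketch, not WoW's actual formula: the XP required per level grows geometrically, so a fixed per-kill reward means early levels come fast and the "trickle" of reward events never stops, it just stretches out:

```python
# Toy progressive-reward schedule (illustrative only, not the game's real numbers).

def xp_for_level(level, base=100, growth=1.15):
    """XP needed to advance from `level` to `level + 1`; grows geometrically."""
    return round(base * growth ** (level - 1))

def kills_to_level(level, xp_per_kill=25):
    """Reward events (e.g. kills at a fixed 25 XP each) needed to gain one level."""
    return -(-xp_for_level(level) // xp_per_kill)  # ceiling division

for lvl in (1, 10, 30, 60):
    print(f"level {lvl}: {xp_for_level(lvl)} XP, {kills_to_level(lvl)} kills")
```

The point of the sketch: the reward *events* stay frequent and uniform even as each level takes longer, which is exactly the wireheading-like drip I mean.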
With this on the table, I would like to offer the following for comments:
1) Can the 3 items above (clarity of goals, clear measures of achievement, progressive rewards) be mapped onto real life in a way that makes life more, um, fun? There are some mechanisms like that already - money most clearly, yet pursuit of money as a goal is not considered that worthwhile.
2) This is a bit out of left field, but: is a setting like World of Warcraft a good medium for the development of AI? Clear goals, clear measures of progress, sufficient complexity to provide an indication of when important insights are achieved, and a safe environment (in the sense that the path to a paperclip-maximising AI seems unlikely)...?
(For Robin Hanson: have you heard about the economic studies carried out in the WoW setting?)
OK, duty calls... I have more to say on this when time permits.
1. I really like this blog, and have been lurking here for a few months.
2. Having said that, Eliezer's carry-on in respect of the AI-boxing issue does him no credit. His views on the feasibility of AI-boxing are only an opinion; he has managed to give it weight in some circles with his 2 heavily promoted "victories" (the 3 "losses" are mentioned far less frequently). By not publishing the transcripts, no lessons of value are taught ("Wow, that Eliezer is smart" is not worth repeating, we already know that). I think the real reason the transcripts are still secret is simply that they are plain boring and contain no insights of value.
My opinion, for what it is worth, is that AI-boxing should not be discarded. The AI-boxing approach does not need to be perfect to be useful; all it needs to be is better than the alternative approaches. AI-boxing has one big advantage over the "FAI" approach: it is conceptually simple. As such, it seems possible to analyse its failure modes more or less rigorously and take precautions. Can the same be said of FAI?
3. For a learning experience, I would like to be the AI in the suggested experiment, $10 even stakes, transcript to be published. My only available time is 9-11 pm Singapore time... e-mail milanoman at yahoo dot com to set up.
Off topic but... did no one notice overcomingbias.com being named one of the 100 best blogs by Times Online? Congratulations... though the write-up is a bit odd:
overcomingbias.com A strange, very much out-there, two-man science blog. “We want to avoid, or at least minimise, the startling systematic mistakes that science is discovering.” In part bonkers, it is nevertheless a window on another world.