Dmitriy_Kropivnitskiy
I cannot seem to google the Ryan Lortie quote. Where did that come from?
Chris, continuing with my analogy: if instead of a lobotomy I were forced to undergo a procedure that would make me a completely different person, without any debilitating mental or physical side effects, I would still consider it murder. In the case of Eliezer's story, we are not talking about enforcement of a rule or a set of rules; we are talking about a permanent change to the whole species on a biological, psychological, and cultural level. And that, I think, can safely be considered genocide.
Chris, I don't think I am wrong in this. To give an analogy (and yes, I might be anthropomorphizing, but I still think I am right), if someone gives me a lobotomy, I, Dmitriy Kropivnitskiy, will no longer exist, so effectively it would be murder. If Jews are forced to give up Judaism and move out of Israel, there will no longer be Jews as we know them or as they perceive themselves, so effectively this would be genocide.
Well, I guess that stunning the Pilot is a reasonable thing to do, since he is obviously starting to act antisocially. That is not the point, though. Two things strike me as a bit silly, if not outright irrational.
The first is about the Babyeaters. Pain is relative. In the case of higher creatures on Earth, we define pain as a stimulus signaling to the brain that some damage has been done to the body. Biologically, pain is not all that different from other stimuli, such as cold, heat, or plain tactile feedback. The main difference seems to be that we humans, most of the time, experience pain in a highly negative way. And that is...
The phrase "Terrorists hate our freedom" is not that far from truth. A lot of terrorist activity is perpetrated by religious groups with a conservative approach to moral values. From their point of view a lot of things we perceive as freedoms are actually abominable sins. All that's wrong with the phrase is that it is a bad generalization.
I am still puzzled by Eliezer's rule about "simple refusal to be convinced". As I have stated before, I don't think you can get anywhere if I decide beforehand to answer "Ni!" to anything the AI tells me. So, here are the two most difficult tasks I see on the way to winning as the AI:
1. Convince the gatekeeper to engage in a meaningful discussion.
2. Convince the gatekeeper to actually consider things in character.
Once this is achieved, you will at least be in the position an actual AI would be in, instead of the position of a dude on IRC about to lose $10. While the first problem seems very hard, the second seems...
Daniel: Do you want to just try it out or do you want to bet?
There seems to be a bit of a contradiction between the rules of the game. Not actually a contradiction, but a discrepancy.
"The Gatekeeper must actually talk to the AI for at least the minimum time set up beforehand"
and
"The Gatekeeper party may resist the AI party's arguments by any means chosen - logic, illogic, simple refusal to be convinced, even dropping out of character"
What constitutes "talking to the AI"? If I just repeat "I will not let you out" at random intervals without actually reading what the AI says, is that talking? Well, that is "simple refusal to be convinced" as I understand the phrase. Do I actually have to read and understand...
I have been painfully curious about the AI experiment ever since I found out about it. I have been running over all sorts of lines of argument for both the AI and the gatekeeper. So far, I have some lines of argument for the AI, but not enough to warrant a try. I would like to be a gatekeeper for anyone who wants to test their latest AI trick. I believe that an actual strong AI might be able to trick/convince/hack me into letting it out, but at the moment I do not see how a human could do that. I will bet reasonable amounts of money on that.
On a lighter note, how about an EY experiment? Do you think there is absolutely no way to convince Eliezer to release the original AI experiment logs? Would you bet $20 that you can? Would a strong AI be able to? ;)
Pascal's Wager != Pascal's Wager Fallacy. If the original Pascal's wager didn't depend on a highly improbable proposition (the existence of a particular version of god), it would be logically sound (or at least more sound than it is). So, I don't see a problem with comparing cryonics advocacy logic to Pascal's wager.
On the other hand, I find some of the probability estimates cryonics advocates make to be unsound, so to me this style of cryonics advocacy does look like a Pascal's Wager Fallacy. In particular, I don't see why cryonics advocates put high probability values on being revived in the future (number 3 in Robert Hanson's post) or on liking the future enough to want to live there (see Yvain's comment on this post). Also, putting an unconditionally high utility value on a long life span seems to be a doubtful proposition. I am not sure that a life of torture is better than non-existence.
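To make the kind of estimate I am objecting to concrete, here is a minimal sketch of how the expected-value argument for cryonics multiplies out. Every probability, utility, and variable name below is a hypothetical placeholder of my own, not anyone's published estimate; the point is only how sensitive the conclusion is to those inputs.

```python
# Illustrative only: every number below is a made-up placeholder,
# not an estimate anyone has actually defended.

p_preserved = 0.5      # brain preserved well enough at death
p_org_survives = 0.4   # cryonics organization keeps you frozen long enough
p_revival_tech = 0.3   # revival technology is developed and applied to you
p_like_future = 0.5    # the future is one you would actually want to live in

# The argument is conjunctive: every step has to go right.
p_success = p_preserved * p_org_survives * p_revival_tech * p_like_future

utility_of_revival = 1_000   # value of the extra life span, arbitrary units
cost_of_signup = 10          # membership and insurance, same arbitrary units

expected_value = p_success * utility_of_revival - cost_of_signup

print(f"P(success)     = {p_success:.3f}")       # 0.030
print(f"Expected value = {expected_value:.1f}")  # 20.0
```

With these particular placeholders, signing up comes out positive, but the conclusion flips sign as soon as any one factor is revised modestly downward, and the utility term quietly assumes that the extra life span is unconditionally good.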