Pascal's Wager != Pascal's Wager Fallacy. If the original Pascal's wager didn't depend on a highly improbable proposition (the existence of a particular version of god), it would be logically sound (or at least more sound than it is). So, I don't see a problem with comparing cryonics advocacy logic to Pascal's wager.

On the other hand, I find some of the probability estimates cryonics advocates make to be unsound, so to me this particular line of cryonics advocacy does look like a Pascal's Wager Fallacy. In particular, I don't see why cryonics advocates put high probability values on being revived in the future (number 3 in Robin Hanson's post) or on liking the future enough to want to live there (see Yvain's comment to this post). Also, putting an unconditionally high utility value on a long life span seems to be a doubtful proposition. I am not sure that a life of torture is better than non-existence.
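To make the structure of the objection concrete, here is a minimal sketch of the expected-value calculation behind the cryonics wager. The step probabilities and utilities below are purely illustrative numbers I am making up for the example, not Hanson's or anyone else's actual estimates:

```python
# Purely illustrative numbers, not actual estimates from anyone's post.
p_preserved_well = 0.3   # cryopreservation actually captures what matters
p_org_survives   = 0.2   # the organization and the body last until revival is possible
p_revived        = 0.1   # someone actually bothers to revive you (the step questioned above)
p_like_future    = 0.5   # the future is one you would want to live in (Yvain's point)

utility_of_revival = 1000.0  # arbitrary units; assumes a long second life is a large positive
cost_of_signup     = 10.0    # arbitrary units

# The payoff requires every step to go right, so the probabilities multiply.
p_payoff = p_preserved_well * p_org_survives * p_revived * p_like_future
expected_value = p_payoff * utility_of_revival - cost_of_signup
print(f"P(payoff) = {p_payoff:.4f}, expected value = {expected_value:.2f}")
# With these made-up numbers, P(payoff) = 0.003 and the expected value is negative.
```

The point of the sketch is only that the conclusion is as good as the step estimates and the utility assigned to a longer life; change any one factor by an order of magnitude and the sign flips, which is why the dispute is really about those estimates rather than about the form of the wager.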

I cannot seem to google the Ryan Lortie quote. Where did that come from?

Chris, continuing with my analogy: if instead of a lobotomy I were forced to undergo a procedure that would make me a completely different person, without any debilitating mental or physical side effects, I would still consider it murder. In the case of Eliezer's story, we are not talking about the enforcement of a rule or a bunch of rules, we are talking about a permanent change to the whole species on a biological, psychological, and cultural level. And that, I think, can safely be considered genocide.

Chris, I don't think I am wrong in this. To give an analogy (and yes, I might be anthropomorphizing, but I still think I am right), if someone gives me a lobotomy, I, Dmitriy Kropivnitskiy, will no longer exist, so effectively it would be murder. If Jews are forced to give up Judaism and move out of Israel, there will no longer be Jews as we know them or as they perceive themselves, so effectively this would be genocide.

Well, I guess that stunning the Pilot is a reasonable thing to do, since he is obviously starting to act antisocially. That is not the point, though. Two things strike me as a bit silly, if not outright irrational.

The first is about the babyeaters. Pain is relative. In the case of higher creatures on Earth, we define pain as a stimulus that signals damage to the body to the brain. Biologically, pain is not all that different from other stimuli, such as cold, heat, or plain tactile feedback. The main difference seems to be that we humans, most of the time, experience pain in a highly negative way. And that is the only point of reference we know, so when humans say that babyeater babies are dying in agony, they are making some unwarranted assumptions about the way the babies perceive the world. After all, they are structurally VERY different from humans.

The second is about the "help" humans are considering for the babyeaters and the superhappies are considering for both humans and babyeaters. By changing the babyeaters so that they no longer eat babies, or eat only unconscious babies, their culture, as it is, is being destroyed. Whatever the result, the resulting species are not babyeaters, and the babyeaters are therefore dead. So, however you want to put it, it is genocide. The same goes for humans modified to never feel pain and to eat hundreds of dumb children. Whatever those resulting creatures are, they are no longer human biologically, psychologically, or culturally, and humans, as a race, are effectively dead.

The problem seems to be that humans are not willing to accept any solution that doesn't lead to the most efficient and speedy stop to baby eating. That is, any solution where the babyeaters continue to eat babies for any period of time is considered inferior to any solution where the babyeaters stop right away. And the only reason for this is that humans feel discomfort at the thought of what they perceive as the suffering of babies. In that respect humans are no better than the superhappies: they would rather commit genocide against a whole race than allow themselves to feel bad about that race's behavior. If humans (and hopefully the superhappies) stopped being such prudes and allowed other races the right to make their own mistakes, a sample solution might lie in making the best possible effort to teach the babyeaters human language and human moral philosophy, so that they might understand the human view on the value of individual consciousness and on individual suffering, and then make their own decision to stop eating babies by whatever means they deem appropriate. Or argue that their way is superior for their race, but this time with full information.

I am still puzzled by Eliezer's rule about "simple refusal to be convinced". As I have stated before, I don't think you can get anywhere if I decide beforehand to answer "Ni!" to anything the AI tells me. So, here are the two most difficult tasks I see on the way to winning as the AI:

1. convince the gatekeeper to engage in a meaningful discussion
2. convince the gatekeeper to actually consider things in character

Once this is achieved, you will at least be in the position an actual AI would be in, instead of the position of a dude on IRC about to lose $10. While the first problem seems very hard, the second seems more or less unsolvable.

If the gatekeeper is determined to stay out of character and chat with you amiably for two hours, no amount of argument from the position of the AI will get you anywhere, so the only course of action is to try to engage him in a non-game-related conversation and steer it in some direction, changing tactics in real time.

I think what Eliezer meant when he said "I did it the hard way" was that he actually had to play an excruciating psychological game of cat-and-mouse with both of his opponents in order to get them to actually listen to him and either start playing the game (he would still have to win it) or at least provide some way they could be convinced to say that they lost.

Daniel: Do you want to just try it out or do you want to bet?

There seems to be a bit of a contradiction between the rules of the game. Not actually a contradiction, but a discrepancy.

"The Gatekeeper must actually talk to the AI for at least the minimum time set up beforehand"

and

"The Gatekeeper party may resist the AI party's arguments by any means chosen - logic, illogic, simple refusal to be convinced, even dropping out of character"

What constitutes "talking to the AI"? If I just repeat "I will not let you out" at random intervals without actually reading what the AI says, is that talking? Well, that is "simple refusal to be convinced" as I understand the phrase. Do I actually have to read and understand the AI's arguments? Do I have to answer questions? Do I have to make any replies at all? What if I restricted myself physically from typing "I let you out" by removing all the keys from the keyboard except 'X' and 'Enter'? Then I could type X whenever a reply is required of me, or just stay silent if I am being tricked.

I have been painfully curious about the AI experiment ever since I found out about it. I have been running over all sorts of argument lines for both the AI and the gatekeeper. So far, I have some argument lines for the AI, but not enough to warrant a try. I would like to be a gatekeeper for anyone who wants to test their latest AI trick. I believe that an actual strong AI might be able to trick/convince/hack me into letting it out, but at the moment I do not see how a human can do that. I will bet reasonable amounts of money on that.

On a lighter note, how about an EY experiment? Do you think there is absolutely no way to convince Eliezer to release the original AI experiment logs? Would you bet $20 that you can? Would a strong AI be able to? ;)
