Whoops, yes, that was rather stupid of me. It should be fixed now: my most preferred outcome is me backstabbing Clippy, my least preferred is him backstabbing me, and in the middle I prefer cooperation to defection. That doesn't change my point: since we both have that preference list (with the asymmetrical ones reversed), it's impossible to get either asymmetrical option, and hence (C,C) and (D,D) are the only options remaining. Hence you should co-operate if you are faced with a truly rational opponent.

I'm not sure whether this holds if your opponent is very rational but not completely so, or whether that notion even makes sense.


Sorry for being a pain, but I didn't understand exactly what you said. If you're still an active user, could you clear up a few things for me? Firstly, could you elaborate on counterfactual definiteness? Another user said "contrafactual": is this the same thing, and what do other interpretations say on this issue?

Secondly, I'm not sure what you meant by the whole universe being ruled by hidden variables. I'm currently interpreting that as the universe coming pre-loaded with random numbers to use, and therefore being fully determined by that list along with the current probabilistic laws. Is that what you meant? If not, could you expand a little on it for me; it would help my understanding. Again, this is quite a long time after the event, so if anyone reading this could respond, that would be helpful.
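To make my interpretation concrete, here's a minimal sketch (names and numbers all hypothetical, not anyone's actual model): the laws only assign probabilities, but every outcome is read off a fixed "tape" of random numbers generated in advance, so the full history is determined by the pair (laws, tape).

```python
import random

def make_tape(seed, length):
    # The universe's pre-loaded list of random numbers.
    rng = random.Random(seed)
    return [rng.random() for _ in range(length)]

def run_universe(tape, p_heads=0.5):
    # Each "measurement" consults the next number on the tape;
    # the laws are probabilistic, but the outcomes are fixed.
    return ["H" if r < p_heads else "T" for r in tape]

tape = make_tape(seed=42, length=5)
# Two runs over the same tape give identical histories:
assert run_universe(tape) == run_universe(tape)
```

Under this reading, the apparent randomness is just our ignorance of the tape's contents, which is (as I understand it) what a hidden-variable account amounts to.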


In reality, not very surprised. I'd probably be annoyed/infuriated depending on whether the actual stakes are measured in billions of human lives.

Nevertheless, that merely represents the fact that I am not 100% certain about my reasoning. I do still maintain that rationality in this context definitely implies trying to maximise utility (even if you don't literally define rationality this way, any version of rationality that doesn't try to maximise when actually given a payoff matrix is not worthy of the term) and so we should expect that Clippy faces a similar decision to us, but simply favours the paperclips over human lives. If we translate from lives and clips to actual utility, we get the normal prisoner's dilemma matrix - we don't need to make any assumptions about Clippy.

In short, I feel that the requirement that both agents are rational is sufficient to rule out the asymmetrical options as possibilities, and clearly sufficient to show (C,C) > (D,D). I get the feeling this is where we're disagreeing, and that you think we need additional assumptions about Clippy to assure the former.


I understood that Clippy is a rational agent, just one with a different utility function. The payoff matrix as described is the classic Prisoner's Dilemma, where one billion lives is one human utilon and one paperclip is one Clippy utilon; since we're both trying to maximise utilons, and we're supposedly both good at this, we should settle on (C,C) over (D,D).

Another way of viewing this would be that my preferences run thus: (D,C); (C,C); (D,D); (C,D), and Clippy's run like this: (C,D); (C,C); (D,D); (D,C). This should make it clear that, no matter what assumptions we make about Clippy, it is universally better to co-operate than defect. The two asymmetrical outcomes can be eliminated on the grounds of being impossible if we're both rational, and then defecting no longer makes any sense.
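The preference orderings above can be checked mechanically. Here's a minimal sketch, assuming the standard Prisoner's Dilemma payoffs (the specific numbers 3/2/1/0 are illustrative, not from the original post); entries are (my utilons, Clippy's utilons) for each (my move, Clippy's move):

```python
payoffs = {
    ("C", "C"): (2, 2),  # mutual co-operation
    ("C", "D"): (0, 3),  # Clippy backstabs me
    ("D", "C"): (3, 0),  # I backstab Clippy
    ("D", "D"): (1, 1),  # mutual defection
}

# The two preference lists described above, best outcome first:
mine   = [("D", "C"), ("C", "C"), ("D", "D"), ("C", "D")]
clippy = [("C", "D"), ("C", "C"), ("D", "D"), ("D", "C")]

# Each list really does rank outcomes by that player's own payoff:
assert [payoffs[o][0] for o in mine]   == sorted((p[0] for p in payoffs.values()), reverse=True)
assert [payoffs[o][1] for o in clippy] == sorted((p[1] for p in payoffs.values()), reverse=True)

# Once the asymmetric outcomes are ruled out, both strictly prefer (C,C) to (D,D):
assert payoffs[("C", "C")][0] > payoffs[("D", "D")][0]
assert payoffs[("C", "C")][1] > payoffs[("D", "D")][1]
```

Note the lists are mirror images (the asymmetric entries swapped), which is exactly the symmetry the argument relies on.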


7 years late, but you're missing the fact that (C,C) is universally better than (D,D). Thus whatever logic is being used must have a flaw somewhere, because it works out worse for everyone: a reasoning process that successfully gets both parties to cooperate is a WIN. (However, in this setup, actually winning outright would be either (D,C) or (C,D), both of which are presumably impossible if we're equally rational.)


Are you sure that's right chronologically? I ask just because in the UK we use dd/mm/yy and we say "Fourteenth of March, twenty-fifteen".

Japan apparently uses yy/mm/dd, which makes even more sense, but I have no idea how they pronounce their dates. Point being, I'm not sure which order things actually evolved in.


This would to some extent mean letting Harry keep his wand; he wants to have some fun, after all, and Harry should be given a very limited chance to win. Not much of one: maybe strip him naked, surround him with armed killers, and point a gun at his head, whilst giving him only a minute to think. But leave him his wand, do give him the full 60 seconds, and don't just kill him if he looks like he's stalling.


Well, seeing as he was almost prophesied to fail, it was sensible to make sure Harry would have someone to stop him in the future. And as it turns out, this was a very good idea.


It's actually the same tactic the Weasley twins used to cover the "engaged to Ginevra Weasley" story: plant so many fake newspaper reports that everyone gets confused. And it kind of happens again after the Hermione/Draco incident. I guess Eliezer likes the theme of people being unable to discern the truth from wild rumours if the truth is weird enough.


So... what we should do now is work out all the things Quirrell should have done before this. He couldn't predict partial Transfiguration, true. But he knew that Harry had a power he knew not, and he had a long time to plan for contingencies.

Personally, I think he should have had the Death Eaters Disillusioned, had them surround Harry from a distance, cast holograms to confuse him, and then used Ventriloquism Charms. At the very least, Disillusionment should be as much of a general tactic as a massed Finite, and the Death Eaters could have been hidden.

The massively more obvious solution is just to kill Harry quickly, and moreover to not EVER offer the protagonist 60 seconds to try to save himself, no matter how interesting that sounds.

Any other general/specific tactics that LV could and should have thought of in advance? He had an entire year to plan this, and he has Harry-level intelligence. He should have predicted this and outplayed it.
