A lot of people don't know who I am. If you are Sam-I-Am, then the first thing that you should know is that I have tried green eggs and ham.

If you are Uncle-Sam-I-Am, then the first thing that you should know is that I think that our government is more than a scam. I genuinely do believe not only in civic duty but also in civic virtue.

If you are Samus-Aran-I-Am, then the first thing that you should know is that Super Metroid (SNES) was my jam. I bought it on the day that it was released, back in those days when we still eagerly awaited the arrival of the next issue of Nintendo Power magazine via snail mail. And if you are Nintendo Power magazine, let me point out that there was a serious flaw in your competition for speed-running Super Metroid: telling people to collect 100% of the items and then submit photos of the TV screen at the end of the game allowed people to submit deceptive photos, because the game displays your completion time on a different screen than the one that displays the percentage of items that you collected. The person whose supposed speed run was THREE TIMES as fast as mine did NOT collect 100% of the items! I still believe that I was the best there was at that game in 1994, and the rest of my life has been a quest to understand how we can create systems that can't so easily be gamed with a bit of deception.

My Ph.D. is in computer science. My interests do not exclude constructive logic, information security, epistemology, the philosophy of language, and sometimes even artificial intelligence.

"Vg'f abg zl fglyr ng nyy, ohg gung jnf jung V jnf nvzvat sbe: Vs gurl guvax lbh'er pehqr, tb grpuavpny, vs gurl guvax lbh'er grpuavpny, tb pehqr. V'z n irel grpuavpny obl. Fb V qrpvqrq gb tb nf pehqr nf cbffvoyr. ... Gurfr qnlf, gubhtu, lbh unir gb or cerggl grpuavpny orsber lbh pna rira nfcver gb pehqrarff." — Wbuaal Zarzbavp

Nothing is fundamentally a black box.

That claim is unjustified and unjustifiable. Everything is fundamentally a black box until proven otherwise, and we will never find any conclusive proof. (I want to tell you to look up Hume's problem of induction and Karl Popper's solution, although I feel that making such a remark would be insulting your intelligence.) Our ability to imagine systems behaving in ways that are 100% predictable, and our ability to test systems so as to ensure that they behave predictably, do not change the fact that everything is always fundamentally a black box.

Thanks for offering that solution. It seems appropriate to me. I think that the issue at stake is related to the difference, in programming-language semantics, between probabilistic and nondeterministic semantics. Once you have decided on a nondeterministic semantics, you can't simply start adding in probabilities and expect it to make sense. So, your solution suggests that we should have grounded the entire problem in a probability distribution, whereas I was saying that, because we hadn't done that, we couldn't legitimately add probabilities into the picture at a later step. I wasn't ruling out the possibility of a solution like yours, and it would indeed be interesting to know whether yours can be generalized in any way. In a prior draft of this post, I actually suggested that we could introduce a random variable before the envelope was chosen (although I hadn't even attempted to work out the details). It was only for the sake of brevity that I omitted that suggestion.
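To make that concrete, here is a minimal Monte Carlo sketch of what it looks like to ground the two envelopes problem in an explicit probability distribution before the envelope is chosen. The uniform prior over the smaller amount is purely an assumption for illustration, not part of the original problem:

```python
import random

def simulate(trials=100_000, seed=0):
    """Two-envelope setup grounded in an explicit prior.

    Assumption for illustration: the smaller amount is drawn
    uniformly from 1..100. With the distribution fixed up front,
    the expected payoffs of "always keep" and "always switch"
    are well-defined, and they come out equal.
    """
    rng = random.Random(seed)
    keep_total = 0.0
    switch_total = 0.0
    for _ in range(trials):
        x = rng.randint(1, 100)        # smaller amount, drawn from the prior
        envelopes = [x, 2 * x]
        rng.shuffle(envelopes)         # player selects an envelope at random
        selected, unselected = envelopes
        keep_total += selected         # payoff of always keeping
        switch_total += unselected     # payoff of always switching
    return keep_total / trials, switch_total / trials
```

Both averages hover around the same value (1.5 times the mean of the prior), which is what you would expect once the whole experiment, and not just one envelope, is given a probabilistic semantics.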

My interest is more in the philosophy of language and how language can be deceptive — which is clearly happening in some way in the statement of this problem — and what we can do to guard ourselves against that. What bothers me is that, even when I claimed to have spotted where and how the false step occurred, nobody wanted to believe that I spotted it, or at least they didn't believe that it mattered. That's rather disturbing to me because this problem involves a relatively simple use of language. And I think that humans are in a bit of trouble if we can't even get on the same page about something this simple... because we've got very serious problems right now in regard to A.I. that are much more complicated and tricky to deal with than this one.

But I do like your solution, and I'm glad that it's documented here if nowhere else.

And for anyone who reads this, I apologize if the tone of my post was off-putting. I deliberately chose a slightly provocative title simply to draw attention to this post. I don't mind being corrected if I'm mistaken or have misspoken.

Thank you for responding. This is indeed a very tricky issue, and I was looking for a sounding board... anyone who could challenge me in order to help me to clarify my explanation. I didn't expect so many haters in this forum, but the show must go on with or without them.

My undergraduate degree is in math, and mathematicians sometimes use the phrase "without loss of generality" (WLOG). Every once in a while they will make a semi-apologetic remark about the phrase because they all know that, if it were ever to be used in an inappropriate way, then everything could fall apart. Appealing to WLOG is not a cop-out but rather an attempt to tell those who are evaluating the proof, "Tell me if I'm wrong."

In your example of a coin flip, I can find no loss of generality. However, in the two envelopes problem, I can. If step (1) of the argument had said "unselected envelope" rather than "selected envelope", then the argument would have led the player to keep the selected envelope rather than switch it. Why should the argument using the words "selected envelope" be more persuasive than the argument using the words "unselected envelope"? Do you see what I mean? There is an implicit "WLOG" but, in this case, with an actual loss of generality.
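For what it's worth, the loss of generality shows up numerically as soon as the amounts are drawn from any concrete prior. The sketch below (the toy prior, uniform over {1, 2, 4}, is purely an assumption for illustration) tallies, for each amount seen in the selected envelope, the average amount in the other envelope. The naive step of the argument predicts 1.25 times the selected amount in every case:

```python
import random
from collections import defaultdict

def conditional_other(trials=200_000, seed=1):
    """Average amount in the unselected envelope, conditioned on the
    amount observed in the selected envelope.

    Assumption for illustration: the smaller amount is drawn
    uniformly from the toy set {1, 2, 4}.
    """
    rng = random.Random(seed)
    sums = defaultdict(float)
    counts = defaultdict(int)
    for _ in range(trials):
        x = rng.choice([1, 2, 4])      # smaller amount, drawn from the prior
        envelopes = [x, 2 * x]
        rng.shuffle(envelopes)         # player selects an envelope at random
        selected, other = envelopes
        sums[selected] += other
        counts[selected] += 1
    return {a: sums[a] / counts[a] for a in sorted(sums)}
```

With this prior, seeing 2 or 4 does give an average of 1.25 times the observed amount, but seeing 1 gives exactly double (the other envelope must hold 2) and seeing 8 gives exactly half (it must hold 4). The implicit "WLOG" silently assumes the 1.25 case holds everywhere, which fails at the boundary of this (and any bounded) prior.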

This problem still leaves me feeling very troubled because, even to the extent that I understand the fallacy, it seems very difficult to know whether I have explained it in a way that leaves absolutely no room for confusion (which is rarely possible even when I see an actual error in somebody's reasoning). And apparently, I was not able to explain the fallacy in a way that others could understand. As far as I'm concerned, that's the sign of a very dangerous fallacy. And I've encountered some very deep and dangerous fallacies. So, this one is still quite disturbing to me.