AVoropaev

Wiki Contributions

Comments

The case for hypocrisy

But that's a fix to a global problem that you won't fix anyway. What you can do is allocate some resources to fixing a lesser problem "this guy had nothing to eat today".

It seems to me that your argument proves too much -- when faced with a problem that you can fix you can always say "it is a part of a bigger problem that I can't fix" and do nothing.

The case for hypocrisy

What do you mean by 'real fix' here? What if I said that the real-real fix requires changing human nature and materializing food and other goods out of nowhere? That might be a more effective fix, but it is unlikely to happen in the near future, and it is unclear how you could make it happen. Donating money now might be less effective, but it is something you can actually do.

In Defence of Spock

Detailed categorizations of mental phenomena sound useful. Is there a way for me to learn that without reading religious texts?

Julia Galef and Matt Yglesias on bioethics and "ethics expertise"

How can you check a proof of any interesting statement about the real world using only math? The best you can do is check for mathematical mistakes.

Extracting Money from Causal Decision Theorists

I assume you mean that I assume P(money in Bi | buyer chooses Bi) = 0.25? Yes, I assume this, although really I assume that the seller's prediction is accurate with probability 0.75 and that she fills the boxes according to the specified procedure. From this, it then follows that P(money in Bi | buyer chooses Bi) = 0.25.
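
The step from the 0.75-accurate prediction to P(money in Bi | buyer chooses Bi) = 0.25 can be sketched numerically. This is a minimal sketch, assuming (as the "specified procedure" above suggests) that the seller fills box Bi exactly when she predicts the buyer will not choose Bi:

```python
# Seller predicts the buyer's actual choice with probability 0.75.
accuracy = 0.75

# Condition on the buyer choosing box Bi:
p_pred_is_bi = accuracy        # correct prediction -> Bi is left empty
p_pred_not_bi = 1 - accuracy   # wrong prediction -> Bi is filled

# Money ends up in Bi only when the prediction was not Bi.
p_money_in_bi = p_pred_not_bi
print(p_money_in_bi)  # 0.25

# Expected value of buying Bi for $1 when a filled box holds $3:
ev = -1 + p_money_in_bi * 3
print(ev)  # -0.25
```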

Yes, you are right. Sorry.

Why would it be a logical contradiction? Do you think Newcomb's problem also requires a logical contradiction?

Okay, it probably isn't a contradiction, because the situation "Buyer writes down his decision, and it is common knowledge that an hour later Seller sneaks a peek at this decision (with probability 0.75) or at a random false decision (with probability 0.25). After that, Seller places money according to the decision she saw." seems similar enough and can probably be formalized into a model of this situation.

You might wonder why I am spouting a bunch of wrong things in an unsuccessful attempt to attack your paper. I do that because the paper looks really suspicious to me, for the following reasons:

  1. You don't use the language developed by logicians to avoid mistakes and paradoxes in similar situations.
  2. Even for something written in more or less plain English, your paper doesn't seem rigorous enough for the kinds of problems it tries to tackle. For example, you don't specify what exactly is considered common knowledge, and that can probably be really important.
  3. Your result looks similar to something you would try to prove as a stepping stone to proving that this whole situation with boxes is impossible: "It follows that in this situation two perfectly rational agents with the same information would make different deterministic decisions. Thus we arrive at a contradiction, and this situation is impossible." In your paper the agents are rational in different ways (I think), but it still looks similar enough for me to be suspicious.

So, while my previous attempts at finding an error in your paper failed pathetically, I'm still suspicious, so I'll give it another shot.

When you argue that Buyer should buy one of the boxes, you assume that Buyer knows the probabilities that Seller assigned to Buyer's actions. Are those probabilities also part of the common knowledge? How is that possible? If you try to do the same in Newcomb's problem, you get something like "Omniscient predictor predicts that the player will pick box A (with probability 1); the player knows this; the player is free to pick between A and both boxes", which seems to be a paradox.

Extracting Money from Causal Decision Theorists

I've skimmed the beginning of your paper, and I think there might be several problems with it.

  1. I don't see it explicitly stated, but I think the information "seller's prediction is accurate with probability 0.75" is supposed to be common knowledge. Is it even possible for a non-trivial probabilistic prediction to be common knowledge? Like, not as in some real-life situation, but as in this condition not being a logical contradiction? I am not a specialist on this subject, but it looks like a logical contradiction. And you can prove absolutely anything if your premise contains a contradiction.
  2. A minor nitpick compared to the previous one, but you don't specify what you mean by "prediction is accurate with probability 0.75". What kinds of mistakes does the seller make? For example, if the buyer is going to buy B1, then with probability 0.75 the prediction will be "B1". What about the remaining 0.25? Will it be 0.125 for "none" and 0.125 for "B2"? Will it be 0.25 for "none" and 0 for "B2"? (And does the buyer know about that? What about the seller knowing about the buyer knowing...?)

    When you write "$1 − P(money in Bi | buyer chooses Bi) · $3 = $1 − 0.25 · $3 = $0.25", you assume that P(money in Bi | buyer chooses Bi) = 0.75. That is, if the buyer chooses the first box, the seller can't possibly think that the buyer will choose none of the boxes. And the same for the case of the buyer choosing the second box. You can easily fix it by writing "$1 − P(money in Bi | buyer chooses Bi) · $3 >= $1 − 0.25 · $3 = $0.25" instead. It is possible that you make some other implicit assumptions about the mistakes the seller can make, so you might want to check that.
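
The dependence on the error model can be made concrete with a small sketch. This is hypothetical: it assumes, as the worry above does, that money is placed in B1 only when the seller predicts "B2", so a "none" prediction leaves B1 empty; the paper may intend a different fill rule.

```python
# Condition on the buyer choosing B1. With accuracy 0.75, the remaining
# 0.25 of prediction mass can split between "B2" and "none" in different
# ways. Under the assumed fill rule, money lands in B1 only on a "B2"
# prediction.
def p_money_in_b1(p_pred_b2: float) -> float:
    """P(money in B1 | buyer chooses B1) when the 0.25 of wrong-prediction
    mass puts p_pred_b2 on "B2" and the rest on "none"."""
    assert 0.0 <= p_pred_b2 <= 0.25
    return p_pred_b2

# All mass on "B2" / even split / all mass on "none":
for split in (0.25, 0.125, 0.0):
    p = p_money_in_b1(split)
    loss = 1 - p * 3  # the paper's $1 − P · $3 expression
    print(split, p, loss)
```

In every case 1 − p·3 >= 1 − 0.25·3 = 0.25, which is why ">=" is the safe form of the inequality.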

What's the big deal about Bayes' Theorem?

I've skimmed A Technical Explanation of Technical Explanation (by the way, you can make links and do other stuff by selecting the text you want to edit, as if you wanted to copy it; if your browser is compatible, a toolbar should appear). I think that's the first time in my life that I've found out I need to know more math to understand a non-mathematical text. The text is not about Bayes' Theorem, but it is about the application of probability theory to reasoning, which is relevant to my question. As far as I understand, Yudkowsky writes about the same algorithm that Vladimir_Nesov describes in his answer to my question. Some nice properties of the algorithm are proved, but not very rigorously. I don't know how to fix that, which is not very surprising, since I know very little about statistics. In fact, I am now half-convinced to take a course or something like that. Thank you for that.

As for the other part of your answer, it actually makes me even more confused. You are saying "using Bayes in life is more about understanding just how much priors matter than about actually crunching the numbers". To me that sounds similar to "using steel in life is more about understanding just how much the whole can be greater than the sum of its parts than about actually making things from some metal". I mean, there is nothing inherently wrong with using a concept as a metaphor and/or inspiration. But it can sometimes cause miscommunication. And I am under the impression that some people here (not only me) talk about Bayes' Theorem in a very literal sense.

What's the big deal about Bayes' Theorem?

That's interesting. I've heard about probabilistic modal logics, but I didn't know that not only are logicians working towards statistics, but also vice versa. Is there some book or video course accessible to a mathematics undergraduate?

What's the big deal about Bayes' Theorem?

This formula is not Bayes' Theorem, but it is a similar simple formula from probability theory, so I'm still interested in how you can use it in daily life.

Writing P(x|D) implies that x and D are the same kind of object (data about some physical process?), and there are probably a lot of subtle problems in defining a hypothesis as a "set of things that happen if it is true" (especially if you want to have hypotheses that involve probabilities).

Using this formula allows you to update the probabilities you assign to hypotheses, but it is not obvious that the update will make them better. I mean, you obviously don't know the real P(x)/P(y), so you'll input an incorrect value and get an incorrect answer. But it will sometimes be less incorrect. If this algorithm has some nice property like "the sequence of P(x)/P(y) you get by repeating your experiment converges to the real P(x)/P(y), provided x and y are falsifiable by your experiment (or something like that)", then by using this algorithm you'll, with high probability, eventually arrive at better estimates. It would be nice to understand for what kinds of x, y, and D you should be at least 90% sure that your P(x)/P(y) will be more correct after a million experiments.

I'm not implying that this algorithm doesn't work. It's more that proving it works seems beyond me, mostly because statistics is one of the more glaring holes in my mathematical education. I hope somebody has proved that it works, at least in the cases you are likely to encounter in daily life. Maybe it is even a well-known result.

Speaking of daily life, can you tell me how people (and you specifically) actually apply this algorithm? How do you decide in which situations it is worth using? How do you choose the initial values of P(x)? (E.g., it is hard for me to translate "x is probably true" into "I am 73% sure that x is true".) Are there some other important questions I should be asking about it?