[12:49:29 AM] Conversational Partner: actually, even if the praise is honest, it makes me uncomfortable if it seems excessive. that is, repeated too often, or made a big deal about.

[12:49:58 AM] Adelene Dawner: 'Seems excessive' can actually be a cue for 'is insincere'.

[12:50:05 AM] Conversational Partner: oh

[12:50:25 AM] Adelene Dawner: That kind of praise tends to parse to me as someone trying to push my buttons.

[12:51:53 AM | Edited 12:52:09 AM] Conversational Partner: is it at least theoretically possible that the praise is honest, and the other person just happens to think that the thing is more praiseworthy than I do? or if the other person has a different opinion than I do about how much praise is appropriate in general?

[12:52:59 AM] Adelene Dawner: Of course.

[12:53:13 AM] Adelene Dawner: This is a situation where looking at Bayes' theorem is useful.

[12:54:03 AM] Conversational Partner: ooh, that might be the first situation I've ever encountered where applying Bayes' theorem directly is useful... or did you not mean applying it directly?

[12:55:18 AM] Adelene Dawner: You might not be able to apply it directly in terms of plugging in numbers and getting a result, but you can use it in terms of 'these things make the probability that this signal is strong evidence higher, and these things make it lower', which then makes the whole situation make more sense.

[12:55:50 AM] Conversational Partner: oh, thanks for explaining that

[12:55:56 AM] Conversational Partner: also, what were you saying before about "costly signal"?

[12:56:16 AM] Adelene Dawner: That actually ties into the Bayes' theorem thing. I'll expand.

[12:56:52 AM] Adelene Dawner: The relevant thing here is the probability that you did something right given that you were praised for it by a particular person or in a particular way, right?

[12:57:38 AM] Conversational Partner: yes

[12:57:44 AM] Conversational Partner: or at least, I think so

[12:59:02 AM] Adelene Dawner: Okay. So A is 'I did the thing right' and B is 'I was praised for A in a particular way'. Bayes' theorem says that P(A given B) is P(B given A) times P(A) over P(B). P(B given A) and P(A) make it go up, P(B) makes it go down.

[1:02:29 AM] Adelene Dawner: So, if I praise you in a particular way every time you do a particular thing right, without fail, P(B given A) is 1 and the chance that you did the thing right given that I praised you goes up. If you always do the thing right, P(A) is 1 and the chance that you did it right given that I praised you goes up. If I always praise you no matter whether you've done the thing right or not, P(B) is 1 and the chance that you did it right given that I praised you goes down.

[1:04:19 AM] Adelene Dawner: (If I always praise you whether you've done the thing right or not, then P(B given A) and P(B) are both 1, and P(A given B) is the same as P(A) - the praise tells you nothing new.)
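The update rule being described is simple enough to sketch in a few lines of Python. This is just an illustration of the formula, not anything from the original conversation; the function name `posterior` is my own:

```python
def posterior(p_b_given_a, p_a, p_b):
    """Bayes' theorem: P(A given B) = P(B given A) * P(A) / P(B)."""
    return p_b_given_a * p_a / p_b

# Unconditional praise: P(B given A) and P(B) are both 1,
# so the posterior collapses back to the prior P(A).
print(posterior(1.0, 0.2, 1.0))  # → 0.2, same as P(A)
```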

[1:04:42 AM] Adelene Dawner: You follow so far?

[1:04:57 AM] Conversational Partner: I think so, yes

[1:14:18 AM] Adelene Dawner: Say I have a project where every week on Monday I give you a piece of spec, and on Friday you give me code and I test it to see if it meets that spec. Also on Friday, I roll a 10-sided die. If the code seems to meet the spec, I give you $100 on Saturday. Also, if the die comes up 10, I give you $100 on Saturday. (If the code meets spec and the die comes up 10, I give you $200.) I don't tell you whether the code met spec, though. Also say that you've been testing it yourself, and it seems to you that your code meets spec 20% of the time, and I'm 90% accurate at judging whether code meets spec.

A is 'code meets spec', B is 'you get $100'.

[1:19:05 AM] Adelene Dawner: So, P(A given B) is P(B given A) times P(A) over P(B) - in this case, .9 times .2 over (.1 (die came up 10) plus .2 (code was right)), or .18 over .3, or .6 [Ed note: I know it wouldn't be exactly .1 + .2]

[1:23:19 AM] Adelene Dawner: Now, say that I'm having financial trouble, and can't afford to give you $100 for no reason every time a d10 comes up 10. So I switch it for a d20, and give you $100 if the d20 comes up 20. This is the 'costly signaling' scenario, since giving you $100 for the correct code costs me more, too. In this case, the math is .9 times .2 over (.05 plus .2), or .18 over .25, or .72. In this case, if you get $100 on Saturday, there's a much higher chance that it's actually because you got the code right.
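Using the same approximations the conversation uses (P(B given A) ≈ .9, and P(B) ≈ die probability + .2), the two scenarios can be checked directly. A minimal sketch; `posterior` is just an illustrative helper name:

```python
def posterior(p_b_given_a, p_a, p_b):
    """Bayes' theorem: P(A given B) = P(B given A) * P(A) / P(B)."""
    return p_b_given_a * p_a / p_b

p_a = 0.2          # your code meets spec 20% of the time
p_b_given_a = 0.9  # I'm 90% accurate at judging whether it does

# d10 scenario: $100 is also paid out on a 1-in-10 die roll
print(round(posterior(p_b_given_a, p_a, 0.10 + p_a), 2))  # → 0.6

# d20 scenario (costly signal): the free payout drops to 1-in-20
print(round(posterior(p_b_given_a, p_a, 0.05 + p_a), 2))  # → 0.72
```

Shrinking the "praise for no reason" term in P(B) is exactly what moves the posterior from .6 to .72.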

[1:24:48 AM] Conversational Partner: yay for Bayes' theorem :) yay for being smart enough to know how to use it :)

[1:24:57 AM] Adelene Dawner: ^^

[1:25:19 AM] Adelene Dawner: The trick is, you can do that without plugging numbers in at all.

[1:25:35 AM] Conversational Partner: yes, you explained how.

[1:25:37 AM] Conversational Partner: thanks

[1:25:38 AM] Adelene Dawner: ^^

[1:25:42 AM] Conversational Partner: :)

[1:26:46 AM] Adelene Dawner: So the problem with someone giving you a lot of praise is that it raises P(B), which lowers P(A given B) - it means that any instance of praise is less meaningful.

[1:28:39 AM] Conversational Partner: sorry, but I still don't see where the "cost" comes in for just verbal praise

[1:29:53 AM] Adelene Dawner: Being seen to praise someone can have social costs. Also paying enough attention to give praise at all is costly, though that affects P(B given A) more than P(B).

[1:31:14 AM] Conversational Partner: oh. so you weren't implying that praising selectively is more costly than praising unconditionally?

[1:33:14 AM] Adelene Dawner: It can be, but again that all shows up in P(B given A) and P(B). False negatives lower P(B given A) and false positives raise P(B).

[1:39:15 AM] Conversational Partner: checking if there are any beliefs I need to update as a result of all this... no, I don't think there are any false beliefs that need to be corrected, but this information is likely to be helpful next time I'm in a situation where I'm trying to decide the appropriate response to praise. it should be more trustworthy, and more satisfying, when P(A given B) is high. thanks :)

[1:39:35 AM] Adelene Dawner: Yep! ^^

This conversation piqued my interest because it subverted one of my expectations. Because of habits ingrained by applying Bayes' theorem in more formal statistical inference, I almost never consider P(B) in these informal applications; I find it much easier to think about the likelihood ratio, P(B given A) / P(B given not-A).

Given P(A) and P(B given A), the likelihood ratio determines P(B) and vice versa, so there's no difference in practice. But I find it interesting to observe a different habit of mind in action.
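That equivalence is easy to check numerically. Here's a sketch using the rough d10-scenario numbers from the conversation, comparing the direct form against the odds/likelihood-ratio form (variable names are mine):

```python
p_a, p_b_given_a, p_b = 0.2, 0.9, 0.3  # rough d10-scenario numbers

# Direct form: P(A given B) = P(B given A) * P(A) / P(B)
direct = p_b_given_a * p_a / p_b

# Likelihood-ratio form: posterior odds = prior odds * P(B|A) / P(B|not-A).
# Back out P(B|not-A) from the law of total probability:
p_b_given_not_a = (p_b - p_a * p_b_given_a) / (1 - p_a)
odds = (p_a / (1 - p_a)) * (p_b_given_a / p_b_given_not_a)
lr_form = odds / (1 + odds)  # convert odds back to a probability

print(round(direct, 3), round(lr_form, 3))  # both → 0.6
```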

It is interesting. I (informally) use P(A)*P(B|A) / [ P(B|A)+P(B|notA) ].