
After reading this post, I had some questions, so I asked the author directly to discuss them.

Here are the questions and the replies I got from Bucky! Hope they'll be useful to some of you.

***

## Q1.

In the "Working through the odds" section, how does the contestant's prior judgement (0% for C and D) affect the posterior (28% / 48% / 24% / 0% from the audience)?

And how did it change from 6:1 (66% / 11% / 11% / 11%, the 2/3 given by the Kelly criterion) to only a 10% edge?

## A1.

6:1 is the amount of evidence I required in order to be justified in guessing. This is calculated from my prior for each answer (p=0.25) needing to move to p=0.67 for an individual answer.

The p=0.67 is calculated from Kelly - it creates enough of an edge to make the bet worthwhile.
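The odds arithmetic here can be made explicit. A minimal sketch (my reconstruction of the calculation described, not the author's code):

```python
# Sketch of the 6:1 requirement: a prior of p = 0.25 per answer must move
# to p = 0.67 to clear the Kelly bar, and the Bayes factor needed for
# that move is the amount of evidence required.
prior_p = 0.25
target_p = 0.67

prior_odds = prior_p / (1 - prior_p)      # 1:3
target_odds = target_p / (1 - target_p)   # roughly 2:1

bayes_factor = target_odds / prior_odds
print(round(bayes_factor, 1))             # → 6.1, i.e. about 6:1
```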

The 10% doesn't refer to an edge in the Kelly criterion sense. Because the contestant had said that she was confident that both C and D were incorrect, it seems likely that any audience member who didn't know the answer would say either A or B. If C or D got a high proportion of the vote share, then that is strong evidence that those people are really confident in their answer. Of course, some people who don't know still might say C or D; 10% was my limit on what fraction of the audience I thought might do this.

As the actual fraction was 24%, this gave me some evidence in favor of C. I don't need to calculate exactly how much evidence I think this gives me, only whether it is better evidence than 6:1. This is entirely subjective and depends on how well my amateur psychology works but I felt it was better than 6:1.

It's important to say that the audience's answer percentages aren't directly involved in the Bayesian update. The evidence in favor of C is

how likely the voting result would be given that C is true
versus
how likely the voting result would be given that C is false

This depends on the assumptions that you make about the audience's voting patterns.
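As a toy illustration of that ratio (my own made-up numbers, not from the post): suppose some fraction of the audience knows the answer and votes it, while guessers who heard the contestant rule out C and D stray onto C only rarely. The likelihood ratio for the observed 24% vote on C then looks like:

```python
# Toy likelihood-ratio calculation (illustrative numbers, not Bucky's).
# Model: "knowers" vote the true answer; guessers, having heard the
# contestant rule out C and D, stray onto C with only 5% probability.
from math import comb

n = 100          # assumed audience size
votes_c = 24     # 24% of the audience voted C
knowers = 0.20   # assumed fraction who actually know the answer
stray = 0.05     # assumed chance a guesser votes C anyway

p_if_c_true = knowers + (1 - knowers) * stray   # knowers + strays
p_if_c_false = (1 - knowers) * stray            # strays only

def binom_pmf(k, n, p):
    """Binomial probability of k votes out of n at per-voter rate p."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

likelihood_ratio = (binom_pmf(votes_c, n, p_if_c_true)
                    / binom_pmf(votes_c, n, p_if_c_false))
```

Under this deliberately simple model a 24% vote share on a ruled-out answer is evidence far above the required 6:1, which matches the answer's point: the exact number matters less than whether it clears the bar.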

## Q2.

Considering purely theoretical assumptions, i.e. excluding real-world variables such as:

- whether the audience/contestant knows the answers or not

- the difficulty of the question

- whether you should wager or quit immediately

Under such assumptions, then:

A: 50-50, then Ask the Audience

B: Ask the Audience, then 50-50

Which order, A or B, is the better strategy?

Or is there no single "best" strategy based solely on Bayesian inference? (By always using the lifelines in order A or B, you could be cutting off more "branches" of the total possible outcomes than with the other order.)

## A2.

My analysis depends on the assumptions that I make:

a) There's a fairly binary split between people who know and people who are just guessing

b) A small fraction of the audience actually know the answer for sure

c) The effect of salience is comparable to the fraction of people who know the correct answer (the contestant doesn't know which one is bigger)

d) You are confident that you are definitely going to use both lifelines on that question

If those things are all true then I'd be confident that B is the better option. I think "b" and "c" are fairly likely for end-game questions. If "a" isn't true (e.g. some people know that one answer is definitely wrong but aren't sure about the others) then I suspect that for most cases B is still the better option.

"d" is more complicated. If it isn't true, then the choice between order A and B would partly depend on how likely you are to give an answer after just one lifeline and how valuable you expect each lifeline to be later.

So I would consider there to be a best strategy for a given scenario, and generally order B should be favored more than it is intuitively. In Bayesian inference you always have to start with a prior, and if that prior expects that all 4 assumptions are true then B is best. If assumption "d" is not true then this requires recalculation. If you wanted to make this more general, I guess you could change assumption "a" to different distributions of knowledge and see how it works out.
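These assumptions can be put into a quick Monte Carlo. Everything below is my own toy model (the knower fraction, salience strength, audience size, and the "vote with the plurality" decision rule are all made-up parameters), so it only illustrates the shape of the argument:

```python
# Toy Monte Carlo: order A (50:50 then Ask the Audience) vs order B
# (Ask the Audience then 50:50). Assumed model: a small fraction of the
# audience knows the answer; guessers are drawn to one salient wrong
# answer; the contestant picks the plurality among surviving answers.
import random

N_AUDIENCE = 100
KNOWER_FRAC = 0.15     # assumption b: only a few know for sure
SALIENT_WEIGHT = 2.0   # assumption c: one wrong answer draws extra guesses

def audience_votes(options, correct, salient):
    """Knowers vote the correct answer; guessers vote by salience weight."""
    weights = [SALIENT_WEIGHT if o == salient else 1.0 for o in options]
    votes = {o: 0 for o in options}
    for _ in range(N_AUDIENCE):
        if random.random() < KNOWER_FRAC:
            votes[correct] += 1
        else:
            votes[random.choices(options, weights=weights)[0]] += 1
    return votes

def fifty_fifty(options, correct):
    """Keep the correct answer plus one randomly chosen wrong answer."""
    wrong = [o for o in options if o != correct]
    kept = [correct, random.choice(wrong)]
    random.shuffle(kept)   # avoid tie-breaks silently favoring `correct`
    return kept

def trial(audience_first):
    options = ["A", "B", "C", "D"]
    correct = random.choice(options)
    salient = random.choice([o for o in options if o != correct])
    if audience_first:                            # order B
        votes = audience_votes(options, correct, salient)
        remaining = fifty_fifty(options, correct)
    else:                                         # order A
        remaining = fifty_fifty(options, correct)
        votes = audience_votes(remaining, correct, salient)
    return max(remaining, key=lambda o: votes[o]) == correct

n = 4000
rate_a = sum(trial(False) for _ in range(n)) / n
rate_b = sum(trial(True) for _ in range(n)) / n
```

In this toy model order B wins noticeably more often; the gap comes mostly from the runs where the salient wrong answer survives the 50:50, since a full four-way vote dilutes its pull while a two-way vote concentrates it.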

## Q3.

Can our model be seen as a variant of the Monty Hall Problem?

## A3.

I can't see a way to make a Monty Hall analogy work.

In Monty Hall the point is that the host knows the correct answer, and by giving constrained information about one answer which is incorrect he gives some extra evidence about which is correct.

If before the 50:50 there was one randomly selected answer which the host declared would stay (whether it was right or wrong), then we'd be closer to a Monty Hall situation.
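For reference, the constrained-reveal dynamic described above is easy to check with the standard Monty Hall simulation (a generic textbook example, not specific to this post):

```python
# Standard Monty Hall simulation: the host knowingly opens a wrong door,
# and that constrained reveal is what makes switching win about 2/3.
import random

def monty_trial(switch):
    doors = [0, 1, 2]
    prize = random.choice(doors)
    pick = random.choice(doors)
    # Host opens a door that is neither the pick nor the prize.
    opened = random.choice([d for d in doors if d not in (pick, prize)])
    if switch:
        pick = next(d for d in doors if d not in (pick, opened))
    return pick == prize

n = 10_000
stay = sum(monty_trial(False) for _ in range(n)) / n
swap = sum(monty_trial(True) for _ in range(n)) / n
```

If the host instead committed to a door at random before seeing anything, the reveal would carry no information and both rates would be equal, which is why an unconstrained 50:50 doesn't create a Monty Hall situation.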