A likelihood ratio expresses how much more likely an observation is under one hypothesis than under another. For example, if we're investigating the murder of Mr. Boddy and the suspects are Colonel Mustard and Miss Scarlet, and Mr. Boddy was poisoned, we might think that Miss Scarlet is twice as likely to use poison as Colonel Mustard - a likelihood ratio of 2:1. This could be true if their respective probabilities of using poison were 20% versus 10%, or if the probabilities were 4% versus 2%. What matters is the ratio, not the absolute magnitudes. This ratio summarizes the strength of the evidence represented by the observation that Mr. Boddy was poisoned - under Bayes's Rule, the evidence points to Miss Scarlet to the same degree whether the absolute probabilities are 20% vs. 10% or 4% vs. 2%.
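If you'd like to see the arithmetic spelled out, here is a minimal Python sketch of Bayes's Rule in odds form (the 1:1 prior odds and the function name `posterior_odds` are illustrative assumptions, not part of the example above):

```python
def posterior_odds(prior_odds, p_evidence_given_a, p_evidence_given_b):
    """Bayes's Rule in odds form: posterior odds = prior odds * likelihood ratio."""
    likelihood_ratio = p_evidence_given_a / p_evidence_given_b
    return prior_odds * likelihood_ratio

# Assumed 1:1 prior odds between Miss Scarlet and Colonel Mustard (illustrative).
prior = 1.0

print(posterior_odds(prior, 0.20, 0.10))  # 2.0 - Scarlet twice as likely
print(posterior_odds(prior, 0.04, 0.02))  # 2.0 - same strength of evidence
```

Either pair of absolute probabilities produces exactly the same 2:1 update.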
%%knows-requisite(math2):
The likelihood function from evidence $e$ to various hypotheses $H_i$ is denoted $\mathcal L_e(H_i) = \mathbb P(e \mid H_i).$ It's important to note that this function, ranging over the $H_i$, is not a probability function and its terms need not sum to 1. If the observation $e$ is that Mr. Boddy was poisoned, it might be the case that Colonel Mustard, Miss Scarlet, and Professor Plum have the respective probabilities 20%, 10%, and 1% of using poison any time they commit a murder. These probabilities don't sum to 1, and there's no reason for them to do so.
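As a concrete sketch (the uniform prior over suspects is an assumption added here for illustration; the example above doesn't specify priors), we can tabulate these likelihoods, confirm they don't sum to 1, and see that it's only the posterior - after multiplying by priors and normalizing - that must sum to 1:

```python
# Likelihood of the evidence "Mr. Boddy was poisoned" under each hypothesis.
likelihood = {"Mustard": 0.20, "Scarlet": 0.10, "Plum": 0.01}
print(sum(likelihood.values()))  # 0.31 - the likelihoods needn't sum to 1

# Assumed uniform prior over the three suspects (an illustrative choice).
prior = {name: 1 / 3 for name in likelihood}

# Bayes's Rule: the posterior is proportional to prior * likelihood.
unnormalized = {name: prior[name] * likelihood[name] for name in likelihood}
total = sum(unnormalized.values())
posterior = {name: round(p / total, 3) for name, p in unnormalized.items()}
print(posterior)  # {'Mustard': 0.645, 'Scarlet': 0.323, 'Plum': 0.032}
```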
Or for an example using a continuous function, consider a possibly biased coin whose propensity to come up heads might be anywhere between $0$ and $1$. Suppose we observe the coin to come up heads, and then tails, the sequence HT. Ranging over possible propensities $f$, we will have the likelihood function:

$$\mathcal L_{HT}(f) = \mathbb P(HT \mid f) = f \cdot (1 - f)$$
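Here is a small sketch of that function (the name `likelihood_ht` and the brute-force grid integration are illustrative choices, not from the text): it peaks at $f = 0.5$, and its integral over $[0, 1]$ is $1/6$, not 1.

```python
# Likelihood of the observed sequence HT as a function of the coin's
# propensity f to come up heads.
def likelihood_ht(f):
    return f * (1 - f)  # P(H) * P(T) for two independent flips

print(likelihood_ht(0.5))  # 0.25 - the maximum
print(likelihood_ht(0.9))  # 0.09

# Crude midpoint-rule integral over [0, 1]: comes out near 1/6, not 1.
n = 100_000
print(sum(likelihood_ht((i + 0.5) / n) for i in range(n)) / n)
```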
Again, there is no reason to normalize this function so that it integrates to 1 - it's not a probability distribution over the random variable $f$, but a function expressing each possible $f$'s propensity to yield the observed evidence HT. %%