Relative likelihoods express how much more likely an observation is under one hypothesis than under another. For example, if we're investigating the murder of Mr. Boddy and the suspects are Miss Scarlet and Colonel Mustard, and Mr. Boddy was poisoned, we might think that Miss Scarlet is twice as likely to use poison as Colonel Mustard: relative likelihoods of (2 : 1). This could be true if their respective probabilities of using poison were 20% versus 10%, or if the probabilities were 4% versus 2%. What matters is the relative likelihoods, not the absolute magnitudes.
Relative likelihoods summarize the strength of the evidence represented by the observation that Mr. Boddy was poisoned — under Bayes' rule, the evidence points to Miss Scarlet to the same degree whether the absolute probabilities are 20% vs. 10%, or 4% vs. 2%.
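As a quick sketch of why only the ratio matters, consider Bayes' rule in odds form, which multiplies prior odds by the likelihood ratio. (The 1 : 1 prior odds below are a hypothetical choice for illustration, not something stated above.)

```python
def posterior_odds(prior_odds: float, likelihood_a: float, likelihood_b: float) -> float:
    """Bayes' rule in odds form: posterior odds = prior odds * likelihood ratio."""
    return prior_odds * (likelihood_a / likelihood_b)

# Scarlet vs. Mustard, assuming (hypothetically) 1 : 1 prior odds.
print(posterior_odds(1.0, 0.20, 0.10))  # 2.0
print(posterior_odds(1.0, 0.04, 0.02))  # 2.0 -- same posterior odds either way
```

Either pair of absolute probabilities yields the same 2 : 1 shift toward the "Scarlet" hypothesis.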
Relative likelihoods may be given between many different hypotheses at once. If the evidence is that Mr. Boddy was poisoned, it might be the case that Miss Scarlet, Colonel Mustard, and Professor Plum have the respective probabilities 20%, 10%, and 1% of using poison any time they commit a murder. In this case, we have three hypotheses — "Scarlet did it", "Mustard did it", and "Plum did it" — with relative likelihoods of (20 : 10 : 1) between them.
Any two terms in a list of relative likelihoods can be used to generate a likelihood ratio between two hypotheses. For example, the likelihood ratio of "Scarlet did it" to "Mustard did it" is 2/1, and the likelihood ratio of "Scarlet did it" to "Plum did it" is 20/1. This means that the evidence supports the "Scarlet" hypothesis 2x more than it supports the "Mustard" hypothesis, and 20x more than it supports the "Plum" hypothesis.
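As a minimal sketch (the dictionary and helper function are illustrative names, not notation from the text), any pairwise likelihood ratio can be read directly off the list of relative likelihoods:

```python
# Probability of using poison under each hypothesis, from the example above.
likelihoods = {"Scarlet": 0.20, "Mustard": 0.10, "Plum": 0.01}

def likelihood_ratio(h1: str, h2: str) -> float:
    """Ratio of any two terms in the list of relative likelihoods."""
    return likelihoods[h1] / likelihoods[h2]

print(likelihood_ratio("Scarlet", "Mustard"))  # 2.0
print(likelihood_ratio("Scarlet", "Plum"))     # 20.0
```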
When reasoning about different hypotheses using a probability distribution, the likelihood of evidence e given hypothesis H is often written using the conditional probability P(e | H).
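To make the notation concrete, here is a short sketch (assuming, purely for illustration, a uniform prior over the three suspects, which the text does not specify) showing that the posterior under Bayes' rule depends only on the relative likelihoods (20 : 10 : 1):

```python
# P(e | H) for e = "Mr. Boddy was poisoned" and each suspect hypothesis H.
P_e_given = {"Scarlet": 0.20, "Mustard": 0.10, "Plum": 0.01}

# With a uniform prior (an assumption for this sketch), posteriors are
# proportional to the likelihoods, so normalizing (20 : 10 : 1) is enough.
total = sum(P_e_given.values())
posterior = {h: p / total for h, p in P_e_given.items()}
print(posterior)  # {'Scarlet': 0.6451..., 'Mustard': 0.3225..., 'Plum': 0.0322...}
```

Scaling all three likelihoods by the same constant (say, to 4%, 2%, and 0.2%) leaves the normalized posterior unchanged.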