Bayes’ Theorem is typically stated as:

\[ P(A \mid E) = \frac{P(E \mid A) \, P(A)}{P(E \mid A) \, P(A) + P(E \mid \neg A) \, P(\neg A)} \]

This form explicitly shows all the pieces of Bayesian reasoning:

  • The posterior probability of the hypothesis \(A\), given evidence \(E\): \(P(A \mid E)\).

  • The probability prior to the evidence: \(P(A)\).

  • The likelihood of observing the evidence if the hypothesis is true: \(P(E \mid A)\).

  • The likelihood of observing the evidence if the hypothesis is not true, or, equivalently, the likelihood of observing the evidence if the alternative hypothesis is true: \(P(E \mid \neg A)\).
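As a concrete illustration of the explicit form above, here is a minimal Python sketch; the prior and likelihood values are hypothetical, chosen only for the example:

```python
def posterior(p_a, p_e_given_a, p_e_given_not_a):
    """Bayes' Theorem in its explicit form:
    P(A|E) = P(E|A)P(A) / (P(E|A)P(A) + P(E|~A)P(~A))."""
    numerator = p_e_given_a * p_a
    denominator = numerator + p_e_given_not_a * (1 - p_a)
    return numerator / denominator

# Hypothetical numbers: prior P(A) = 0.01,
# P(E|A) = 0.9 (evidence likely if A holds),
# P(E|~A) = 0.05 (evidence unlikely otherwise).
print(posterior(0.01, 0.9, 0.05))  # ≈ 0.1538
```

Note how a weak prior keeps the posterior modest even when the evidence strongly favors the hypothesis.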

However, the above form is not terribly convenient for calculations over a series of pieces of evidence:

\[ P(A \mid E_1, \dots, E_n) = \frac{P(E_1, \dots, E_n \mid A)\,P(A)}{P(E_1, \dots, E_n \mid A) \, P(A) + P(E_1, \dots, E_n \mid \neg A) \, P(\neg A)} \]

While we can calculate \(P(E_1, \dots, E_n \mid A)\) and \(P(E_1, \dots, E_n \mid \neg A)\) via successive substitutions, there is a more convenient approach. This involves transforming Bayes’ Theorem as follows:

We start with the posterior probability of the hypothesis (shown above) and the corresponding posterior probability of its complement, the alternative hypothesis:

\[ P( \neg A \mid E_1, \dots, E_n) = \frac{P(E_1, \dots, E_n \mid \neg A)\,P( \neg A)}{P(E_1, \dots, E_n \mid \neg A) \, P(\neg A) + P(E_1, \dots, E_n \mid A) \, P(A)} \]

Assuming the pieces of evidence are conditionally independent given \(A\) (and similarly given \(\neg A\)), we can factorize the likelihood terms:

\[ P(E_1, \dots, E_n \mid A) = \prod_{i=1}^{n} P(E_i \mid A) \]

and

\[ P(E_1, \dots, E_n \mid \neg A) = \prod_{i=1}^{n} P(E_i \mid \neg A). \]

Dividing the posterior probability of \(A\) by that of \(\neg A\), the two expressions share the same denominator, which cancels. Substituting the factored likelihoods then gives the posterior odds:

\[ \frac{P(A \mid E_1, \dots, E_n)}{P(\neg A \mid E_1, \dots, E_n)} = \frac{P(A)}{P(\neg A)} \prod_{i=1}^{n} \frac{P(E_i \mid A)}{P(E_i \mid \neg A)}. \]

Define the prior odds as:

\[ O(A) = \frac{P(A)}{P(\neg A)} \]

and the likelihood ratio for each piece of evidence \(E_i\) as:

\[ \text{LR}_i = \frac{P(E_i \mid A)}{P(E_i \mid \neg A)}. \]
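As an example of a likelihood ratio, consider a hypothetical diagnostic-test setting where \(A\) is “the condition is present” and \(E_i\) is “test \(i\) is positive”; the ratio is then the test’s sensitivity divided by its false-positive rate (all numbers here are illustrative):

```python
def likelihood_ratio(p_e_given_a, p_e_given_not_a):
    """LR_i = P(E_i | A) / P(E_i | ~A)."""
    return p_e_given_a / p_e_given_not_a

# Hypothetical test: sensitivity 0.9, false-positive rate 0.05.
print(likelihood_ratio(0.9, 0.05))  # 18.0: the evidence favors A at 18:1
```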

Then we can write the posterior odds, and from them the posterior probability (our desired result), compactly as:

\[ \begin{align} O(A \mid E_1, \dots, E_n) &= O(A) \prod_{i=1}^{n} \text{LR}_i, \\ P(A \mid E_1, \dots, E_n) &= \frac{O(A \mid E_1, \dots, E_n)}{1 + O(A \mid E_1, \dots, E_n)}. \end{align} \]

Thus, we have a two-step process to compute the posterior probability:

  1. Calculate the posterior odds as the prior odds times the product of the likelihood ratios.

  2. Convert the odds to probabilities.
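The two-step process above can be sketched in Python as follows; the prior and likelihood pairs are hypothetical, and the evidence is assumed conditionally independent given \(A\) and given \(\neg A\):

```python
def posterior_probability(prior, likelihood_pairs):
    """Compute P(A | E_1, ..., E_n) via the odds form.

    likelihood_pairs: sequence of (P(E_i|A), P(E_i|~A)) tuples,
    assumed conditionally independent given A and given ~A.
    """
    # Step 1: posterior odds = prior odds times the product of the LRs.
    odds = prior / (1 - prior)
    for p_e_a, p_e_not_a in likelihood_pairs:
        odds *= p_e_a / p_e_not_a
    # Step 2: convert the odds back to a probability.
    return odds / (1 + odds)

# Hypothetical example: prior 0.01, two independent pieces of evidence.
evidence = [(0.9, 0.05), (0.7, 0.2)]
print(posterior_probability(0.01, evidence))  # ≈ 0.3889
```

Because each piece of evidence contributes a single multiplicative factor, new evidence can be folded in incrementally without recomputing the earlier terms.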