17.1 Frequentist and Bayesian statistical models
Section 8.1 introduced the notion of a Bayesian statistical model as a pair consisting of a likelihood function
\[ P_M(D_\text{DV} \mid D_\text{IV}, \theta) \]
and a prior over model parameters
\[ P_M(\theta)\,.\]
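As a concrete (if toy) reminder of what such a pair looks like computationally, here is a minimal sketch in Python; the Normal likelihood, the Normal prior on its mean, and the absence of independent variables are purely illustrative choices, not a model discussed in the text:

```python
# A toy Bayesian model as a pair (likelihood, prior); the Normal choices are
# illustrative only. There are no independent variables here, so D_IV is empty.
import numpy as np
from scipy import stats

def log_likelihood(data, theta):
    """P_M(D_DV | theta): Normal likelihood with unknown mean theta."""
    return stats.norm.logpdf(data, loc=theta, scale=1.0).sum()

def log_prior(theta):
    """P_M(theta): Normal prior over the mean parameter."""
    return stats.norm.logpdf(theta, loc=0.0, scale=10.0)

def log_posterior(data, theta):
    """Unnormalized log-posterior: the sum of log-likelihood and log-prior."""
    return log_likelihood(data, theta) + log_prior(theta)
```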
Normally, the frequentist approach is not model-centric; rather, it presents its methods as an arsenal of situation-specific tests. Still, the explicit model-centric treatment of a selection of frequentist tests in the previous chapter showed that the frequentist models underlying the computation of \(p\)-values eliminate all free model parameters by assigning each a single value in one of two ways (both illustrated in the sketch after this list):
- fixing a parameter to the value dictated by the relevant null hypothesis; or
- estimating the value of a parameter directly from the data (e.g., the standard deviation in a \(t\)-test).
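To make this concrete, here is a minimal sketch in Python (with made-up numbers, not data from the text) of the null model behind a one-sample \(t\)-test: the mean is fixed by the null hypothesis and the standard deviation is plugged in as the sample estimate, so the likelihood of the observed data is a single number rather than a function of free parameters.

```python
# A minimal sketch (hypothetical numbers): the null model of a one-sample t-test
# has no free parameters left once the mean is fixed by the null hypothesis and
# the standard deviation is estimated directly from the data.
import numpy as np
from scipy import stats

data = np.array([101.2, 99.8, 102.5, 98.7, 100.9, 103.1])  # hypothetical measurements
mu_null = 100.0                # mean fixed by the null hypothesis
sigma_hat = data.std(ddof=1)   # standard deviation estimated from the data

# With both parameters pinned down, the log-likelihood of the data is a single number:
log_lik = stats.norm.logpdf(data, loc=mu_null, scale=sigma_hat).sum()
print(log_lik)
```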
Beyond \(p\)-values and significance testing, we may say that a frequentist model consists only of a likelihood, assuming, as it were, but never actually using, a flat prior over any remaining free model parameters.
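One way to see why such an implicit flat prior leaves all inferential weight on the likelihood: if the prior is (proportional to) a constant, Bayes rule yields a posterior proportional to the likelihood alone,
\[ P_M(\theta \mid D_\text{DV}, D_\text{IV}) \propto P_M(D_\text{DV} \mid D_\text{IV}, \theta)\, P_M(\theta) \propto P_M(D_\text{DV} \mid D_\text{IV}, \theta) \quad \text{whenever } P_M(\theta) \propto 1\,, \]
so that, for instance, the maximum a posteriori estimate coincides with the maximum likelihood estimate.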
The upshot of this is that, conceptual quibbles about the nature of probability notwithstanding, from a technical point of view frequentist models can be regarded as special cases of Bayesian models (with parameters either fixed to a single value somehow, or given flat priors). Seeing this subsumption relation is insightful because it implies that frequentist concepts like the \(p\)-value, the \(\alpha\)-error, or statistical power all carry over directly into the Bayesian domain (whether they are equally important and useful there or not). We will encounter, for instance, the notion of a Bayesian \(p\)-value later in this chapter.