9 Bayesian parameter estimation


Based on a model \(M\) with parameters \(\theta\), parameter estimation addresses the question of which values of \(\theta\) are good estimates, given some data \(D\). This chapter deals specifically with Bayesian parameter estimation. Given a Bayesian model \(M\), we can use Bayes rule to update prior beliefs about \(\theta\) to obtain so-called posterior beliefs \(P_M(\theta \mid D)\), which represent the new beliefs after observing \(D\) and updating in a conservative, rational manner based on the assumptions spelled out in \(M\). We will see two different methods of computing posterior distributions \(P_M(\theta \mid D)\): an exact mathematical derivation, which is limited in its applicability because it relies on so-called conjugate priors, and an efficient but approximate sampling method based on so-called Markov Chain Monte Carlo algorithms. The chapter also introduces common point-valued and interval-based estimates for parameters, in particular the Bayesian measures of posterior mean and credible intervals. We will also learn about the posterior predictive distribution and how to test hypotheses about specific parameter values.
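To preview the conjugate-prior route in concrete terms, here is a minimal sketch of Bayesian updating for a Binomial likelihood with a Beta prior. The specific numbers (7 successes in 24 trials, a flat \(\text{Beta}(1,1)\) prior) are illustrative assumptions, not data from this chapter:

```python
# Hedged sketch: conjugate updating for a Binomial likelihood with a Beta prior.
# Assumed example data: k = 7 successes in n = 24 trials; flat Beta(1, 1) prior.
a_prior, b_prior = 1, 1
k, n = 7, 24

# Conjugacy means the posterior is again a Beta distribution:
# Beta(a_prior + k, b_prior + n - k) -- no numerical integration needed.
a_post = a_prior + k          # 8
b_post = b_prior + (n - k)    # 18

# Two common point-valued estimates derived from the posterior:
posterior_mean = a_post / (a_post + b_post)           # mean of Beta(8, 18) = 8/26
map_estimate = (a_post - 1) / (a_post + b_post - 2)   # mode of Beta(8, 18) = 7/24

print(posterior_mean)  # ≈ 0.3077
print(map_estimate)    # ≈ 0.2917
```

Note that with a flat prior the MAP estimate coincides with the maximum-likelihood estimate \(k/n\); an informative prior would pull it away from the MLE.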

The learning goals for this chapter are:

  • understand how Bayes rule applies to parameter estimation
    • role of prior and likelihood
    • understand the notion of conjugate prior
  • understand and compute point-valued and interval-based estimators
    • MLE, MAP, posterior mean
    • (Bayesian) credible intervals
  • understand the basic ideas behind MCMC sampling algorithms
  • understand the notion of a posterior predictive distribution
  • learn how to use Bayesian parameter inference to test hypotheses about parameter values
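To give a flavor of the MCMC idea from the goals above, the following is a minimal sketch of a Metropolis sampler (one simple MCMC algorithm) for the same kind of Beta-Binomial setting; the data (7 successes in 24 trials), the flat prior, and all tuning choices (proposal width, chain length, burn-in) are illustrative assumptions:

```python
import math
import random

# Assumed example data: k = 7 successes in n = 24 trials; flat Beta(1, 1) prior.
k, n = 7, 24

def log_posterior(theta):
    """Log of prior * likelihood, up to an additive constant."""
    if not 0 < theta < 1:
        return -math.inf  # zero prior probability outside (0, 1)
    # flat prior contributes a constant; Binomial log-likelihood remains
    return k * math.log(theta) + (n - k) * math.log(1 - theta)

random.seed(1)
theta = 0.5          # arbitrary starting value
samples = []
for _ in range(20000):
    proposal = theta + random.gauss(0, 0.1)  # symmetric random-walk proposal
    # Metropolis acceptance: accept with probability min(1, posterior ratio)
    if math.log(random.random()) < log_posterior(proposal) - log_posterior(theta):
        theta = proposal
    samples.append(theta)

burned = sorted(samples[5000:])  # discard burn-in, sort for quantiles
post_mean = sum(burned) / len(burned)
# central 95% credible interval from the sample quantiles
lo = burned[int(0.025 * len(burned))]
hi = burned[int(0.975 * len(burned))]
```

Because this posterior is available in closed form (it is \(\text{Beta}(8, 18)\)), the sampled mean should land near the analytic posterior mean \(8/26 \approx 0.31\), which is a useful sanity check before trusting MCMC on models where no closed form exists.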