A common question in statistical modeling is ``which of a continuum of models is likely to have generated this data?'' For the Gaussian class of models, this question can be answered completely and exactly. This paper derives the exact posterior distribution over the mean and variance of the generating distribution, i.e. p(m, V | X), as well as the marginals p(m | X) and p(V | X). It also derives p(X | Gaussian), the probability that the data came from any Gaussian whatsoever. From this we obtain the posterior predictive density p(x | X), which is of the greatest practical importance. The analysis is carried out for noninformative priors and for arbitrary conjugate priors. The presentation borrows from MacKay (1995). The paper concludes with a simulated classification experiment demonstrating the advantage of the Bayesian method over maximum-likelihood and unbiased estimation.
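For the one-dimensional case with a noninformative prior, the posterior predictive density p(x | X) is known to reduce to a Student-t with n-1 degrees of freedom, centered at the sample mean with scale s*sqrt(1 + 1/n). The following is a minimal sketch of that standard result (the function name is ours, not from the paper):

```python
import math

def posterior_predictive_pdf(x, data):
    """Posterior predictive density p(x | X) for a 1-D Gaussian with a
    noninformative prior on (m, V): a Student-t with n-1 degrees of
    freedom, location xbar, and scale s * sqrt(1 + 1/n)."""
    n = len(data)
    xbar = sum(data) / n
    s2 = sum((xi - xbar) ** 2 for xi in data) / (n - 1)  # unbiased sample variance
    nu = n - 1                                           # degrees of freedom
    scale = math.sqrt(s2 * (1.0 + 1.0 / n))              # predictive scale
    z = (x - xbar) / scale
    coef = math.gamma((nu + 1) / 2) / (
        math.gamma(nu / 2) * math.sqrt(nu * math.pi) * scale
    )
    return coef * (1.0 + z * z / nu) ** (-(nu + 1) / 2)

# Example: predictive density for a hypothetical five-point sample.
data = [0.5, 1.1, -0.3, 0.9, 1.7]
density_at_mean = posterior_predictive_pdf(sum(data) / len(data), data)
```

Unlike the maximum-likelihood plug-in Gaussian, this density has heavier tails, which accounts for the remaining uncertainty in the mean and variance; that extra spread is what drives the classification advantage reported in the experiment.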