A statistical procedure that attempts to estimate the parameters of an underlying distribution based on the observed distribution. It begins with a "Prior Distribution," which may be based on anything, including an assessment of the relative likelihoods of the parameter values or the results of non-Bayesian observations. In practice, it is common to assume a Uniform Distribution over the appropriate range of values as the Prior Distribution.

Given the Prior Distribution, collect data to obtain the observed distribution. Then calculate the Likelihood of the observed data as a function of the parameter values, multiply this likelihood function by the Prior Distribution, and normalize so that the result integrates to unit probability over all possible parameter values. The result is called the Posterior Distribution. The Mode of the Posterior Distribution is then taken as the parameter estimate, and "probability intervals" (the Bayesian analogs of Confidence Intervals) can be read directly off the Posterior Distribution. Bayesian analysis is somewhat controversial because the validity of the result depends on the validity of the Prior Distribution, which cannot itself be assessed statistically.
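The procedure above can be sketched numerically on a grid. This is an illustrative example only, not taken from the text: it estimates a coin's heads probability from hypothetical data (7 heads in 10 flips) under a Uniform Prior Distribution, forming the Posterior Distribution as likelihood times prior, normalized.

```python
import numpy as np

# Hypothetical observed data: 7 heads in 10 flips.
heads, flips = 7, 10

# Grid of candidate parameter values p in (0, 1).
p = np.linspace(0.001, 0.999, 999)

# Uniform Prior Distribution over the range of p.
prior = np.ones_like(p)

# Likelihood of the observed data as a function of p
# (binomial kernel; the constant binomial coefficient cancels on normalization).
likelihood = p**heads * (1 - p)**(flips - heads)

# Posterior Distribution: likelihood times prior, normalized to unit probability.
posterior = likelihood * prior
posterior /= posterior.sum()

# The Mode of the Posterior Distribution is the parameter estimate.
p_hat = p[np.argmax(posterior)]
print(f"posterior mode: {p_hat:.3f}")  # 0.700 = 7/10 under a uniform prior

# A central 95% "probability interval" from the posterior CDF.
cdf = np.cumsum(posterior)
lo = p[np.searchsorted(cdf, 0.025)]
hi = p[np.searchsorted(cdf, 0.975)]
print(f"95% probability interval: [{lo:.3f}, {hi:.3f}]")
```

With a uniform prior the posterior mode coincides with the maximum-likelihood estimate; an informative (non-uniform) prior would pull the mode and the interval toward the values the prior favors.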


© 1996-9

1999-05-26