Bayesian Analysis with Stata

Bayesian Analysis with Stata is a compendium of Stata community-contributed commands for Bayesian analysis. It contains just enough theoretical and foundational material to be useful to all levels of users interested in Bayesian statistics, from neophytes to aficionados.

 

The book is careful to introduce concepts and coding tools incrementally, so there are no steep patches or discontinuities in the learning curve. It shows the user exactly what computations are done for simple standard models and how those computations are implemented. This understanding matters because Bayesian analysis lends itself to custom or very complex models, and users must be able to code these themselves. A particular strength of Bayesian Analysis with Stata is that it works through the computational methods three times—first using Stata’s ado-code, then using Mata, and finally using Stata to run the MCMC chains with WinBUGS or OpenBUGS. The repetition reinforces the material while making all three approaches accessible and clear. Having explained the computations and the underlying methods, the book satisfies the reader’s appetite for more complex models by providing examples and advice on how to implement them. Covering advanced topics while teaching the basics of Bayesian analysis is quite an achievement.

 

Bayesian Analysis with Stata presents all its material with real datasets rather than simulated ones, and many of the exercises also use real data. There is also a chapter on validating code, for users who like to learn by simulating from a known model and checking that it can be recovered. This gives readers hands-on experience in running and assessing Bayesian models and teaches them to be careful when doing so.

 

The book starts by discussing the principles of Bayesian analysis and by explaining the thought process underlying it. It then builds from the ground up, showing users how to write evaluators for posteriors in simple models and how to speed them up using algebraic simplification.
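
The flavor of such an evaluator can be conveyed outside Stata as well. As a hedged illustration—in Python rather than the book’s ado-code, with a function name of my own choosing—here is a log-posterior evaluator for a binomial likelihood with a beta prior, the kind of simple standard model the early chapters use; dropping the constant terms is the sort of algebraic simplification that speeds up repeated evaluation:

```python
import math

def log_posterior(p, successes, trials, a=1.0, b=1.0):
    """Unnormalized log posterior for a binomial likelihood with a
    Beta(a, b) prior on the success probability p.

    Working on the log scale and dropping constants (the binomial
    coefficient and the beta normalizing constant) simplifies the
    algebra and makes repeated evaluation cheaper."""
    if not 0.0 < p < 1.0:
        return -math.inf  # outside the support of the prior
    return ((successes + a - 1.0) * math.log(p)
            + (trials - successes + b - 1.0) * math.log(1.0 - p))
```

With a flat Beta(1, 1) prior, this log posterior peaks at the sample proportion, so, for example, 3 successes in 10 trials favor p = 0.3 over p = 0.5.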

 

Of course, direct evaluation of this kind is useful only in very simple models, so the book then addresses the MCMC methods used throughout the Bayesian world. Once again, it starts from the fundamentals, beginning with the Metropolis–Hastings algorithm and moving on to Gibbs samplers. Because Gibbs samplers are much quicker but require draws from full conditional distributions that are often nonstandard, the book thoroughly explains the specialty tools of griddy sampling, slice sampling, and adaptive rejection sampling.
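
The core of the Metropolis–Hastings algorithm is short enough to sketch in a few lines. This is an illustrative sketch in Python—not the book’s Stata implementation—using a symmetric random-walk proposal, so the acceptance probability reduces to the ratio of posterior densities:

```python
import math
import random

def metropolis_hastings(log_post, start, steps, scale=0.1, seed=1):
    """Random-walk Metropolis-Hastings sampler for a scalar parameter.

    log_post : returns the (unnormalized) log posterior at a point
    start    : initial value of the chain
    scale    : standard deviation of the normal proposal
    """
    rng = random.Random(seed)
    chain = [start]
    current_lp = log_post(start)
    for _ in range(steps):
        proposal = chain[-1] + rng.gauss(0.0, scale)
        proposal_lp = log_post(proposal)
        # Symmetric proposal: accept with probability min(1, posterior ratio)
        if rng.random() < math.exp(min(0.0, proposal_lp - current_lp)):
            chain.append(proposal)
            current_lp = proposal_lp
        else:
            chain.append(chain[-1])  # reject: repeat the current value
    return chain
```

Choosing `scale` well is the tuning problem the book discusses under "Scaling the proposal distribution": too small and the chain creeps, too large and almost every proposal is rejected.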

 

After discussing the computational tools, the book shifts its focus to the MCMC assessment techniques needed for a proper Bayesian analysis; these include assessing convergence and avoiding the problems that can arise from slowly mixing chains. This is where burn-in is treated and where thinning and centering are used for performance gains.
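
Burn-in, thinning, and centering are all simple operations once a chain exists; as a rough sketch (again in Python for illustration—the function names are mine, not the book’s commands):

```python
def burn_and_thin(chain, burn_in, thin):
    """Discard the first burn_in draws (taken before the chain has
    settled into its stationary distribution), then keep every
    thin-th draw to reduce autocorrelation in the stored sample."""
    return chain[burn_in::thin]

def center(xs):
    """Center a covariate about its mean.  In a regression this
    reduces the posterior correlation between the intercept and the
    slope, so the MCMC chains mix faster."""
    m = sum(xs) / len(xs)
    return [x - m for x in xs]
```

Thinning keeps the stored sample to a manageable size; centering is applied to the data before sampling ever starts.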

 

The book then returns its focus to computation. First, it shows users how to use Mata in place of Stata’s ado-code; second, it demonstrates how to pass data and models to WinBUGS or OpenBUGS and retrieve the output. Using Mata speeds up evaluation, and using WinBUGS or OpenBUGS speeds it up further; moreover, each of those programs opens a toolbox that reduces the amount of custom Stata programming needed for complex models. This material is easy for the book to introduce and explain because the conceptual and computational groundwork has already been laid.

 

The book finishes with detailed chapters on model checking and selection, followed by a series of case studies that introduce extra modeling techniques and give advice on specialized Stata code. These chapters are very useful because they allow the book to be a self-contained introduction to Bayesian analysis while providing additional information on models that are normally beyond a basic introduction.

List of figures
List of tables
Preface
Acknowledgments

 

1. THE PROBLEM OF PRIORS

Case study 1: An early phase vaccine trial
Bayesian calculations
Benefits of a Bayesian analysis
Selecting a good prior
Starting points
Exercises

 

2. EVALUATING THE POSTERIOR

Introduction
Case study 1: The vaccine trial revisited
Marginal and conditional distributions
Case study 2: Blood pressure and age
Case study 2: BP and age continued
General log posteriors
Adding distributions to logdensity
Changing parameterization
Starting points
Exercises

 

3. METROPOLIS-HASTINGS

Introduction
The MH algorithm in Stata
The mhs commands
Case study 3: Polyp counts
Scaling the proposal distribution
The mcmcrun command
Multiparameter models
Case study 3: Polyp counts continued
Highly correlated parameters

Centering
Block updating

Case study 3: Polyp counts yet again
Starting points
Exercises

 

4. GIBBS SAMPLING

Introduction
Case study 4: A regression model for pain scores
Conjugate priors
Gibbs sampling with nonstandard distributions

Griddy sampling
Slice sampling
Adaptive rejection

The gbs commands
Case study 4 continued: Laplace regression
Starting points
Exercises

 

5. ASSESSING CONVERGENCE

Introduction
Detecting early drift
Detecting too short a run

Thinning the chain

Running multiple chains
Convergence of functions of the parameters
Case study 5: Beta-blocker trials
Further reading
Exercises

 

6. VALIDATING THE STATA CODE AND SUMMARIZING THE RESULTS

Introduction
Case study 6: Ordinal regression
Validating the software
Numerical summaries
Graphical summaries
Further reading
Exercises

 

7. BAYESIAN ANALYSIS WITH MATA

Introduction
The basics of Mata
Case study 6 revisited
Case study 7: Germination of broomrape

Tuning the proposal distributions
Using conditional distributions
More efficient computation
Hierarchical centering
Gibbs sampling
Slice, Griddy, and ARMS sampling
Timings
Adding new densities to logdensity()

Further reading
Exercises

 

8. USING WINBUGS FOR MODEL FITTING

Introduction
Installing the software

Installing OpenBUGS
Installing WinBUGS

Preparing a WinBUGS analysis

The model file
The data file
The initial values file
The script file
Running the script
Reading the results into Stata
Inspecting the log file
Reading WinBUGS data files

Case study 8: Growth of sea cows

WinBUGS or OpenBUGS

Case study 9: Jawbone size

Overrelaxation
Changing the seed for the random-number generator

Advanced features of WinBUGS

Missing data
Censoring and truncation
Nonstandard likelihoods
Nonstandard priors
The cut() function

GeoBUGS
Programming a series of Bayesian analyses
OpenBUGS under Linux
Debugging WinBUGS
Starting points
Exercises

 

9. MODEL CHECKING

Introduction
Bayesian residual analysis
The mcmccheck command
Case study 10: Models for Salmonella assays

Generating the predictions in WinBUGS
Plotting the predictive distributions
Residual plots
Empirical probability plots
A summary plot

Residual checking with Stata
Residual checking with Mata
Further reading
Exercises

 

10. MODEL SELECTION

Introduction
Case study 11: Choosing a genetic model

Plausible models
Bayes factors

Calculating a BF
Calculating the BFs for the NTD case study
Robustness of the BF
Model averaging
Information criteria
DIC for the genetic models
Starting points
Exercises

 

11. FURTHER CASE STUDIES

Introduction
Case study 12: Modeling cancer incidence
Case study 13: Creatinine clearance
Case study 14: Microarray experiment
Case study 15: Recurrent asthma attacks
Exercises

 

12. WRITING STATA PROGRAMS FOR SPECIFIC BAYESIAN ANALYSIS

Introduction
The Bayesian lasso
The Gibbs sampler
The Mata code
A Stata ado-file
Testing the code
Case study 16: Diabetes data
Extensions to the Bayesian lasso program
Exercises

Author: John Thompson
ISBN-13: 978-1-59718-141-9
Copyright © Stata Press 2014
