Transcript of (7) Bayesian linear regression (reich/ABA/notes/BLR.pdf · 2018-03-12)

Page 1:

(7) Bayesian linear regression

ST440/540: Applied Bayesian Statistics

Spring, 2018

Page 2:

Bayesian linear regression

I Linear regression is by far the most common statistical model

I It includes as special cases the t-test and ANOVA

I The multiple linear regression model is

$$Y_i \sim \text{Normal}(\beta_0 + X_{i1}\beta_1 + \dots + X_{ip}\beta_p,\ \sigma^2)$$

independently across the i = 1, ..., n observations

I As we’ll see, Bayesian and classical linear regression are similar if n >> p and the priors are uninformative.

I However, the results can be different for challenging problems, and the interpretation is different in all cases

Page 3:

Review of least squares

I The least squares estimate of $\beta = (\beta_0, \beta_1, \dots, \beta_p)^T$ is

$$\hat\beta_{OLS} = \arg\min_\beta \sum_{i=1}^n (Y_i - \mu_i)^2$$

where $\mu_i = \beta_0 + X_{i1}\beta_1 + \dots + X_{ip}\beta_p$

I β̂OLS is unbiased even if the errors are non-Gaussian

I If the errors are Gaussian then the likelihood is proportional to

$$\prod_{i=1}^n \exp\left[\frac{-(Y_i - \mu_i)^2}{2\sigma^2}\right] = \exp\left[\frac{-\sum_{i=1}^n (Y_i - \mu_i)^2}{2\sigma^2}\right]$$

I Therefore, if the errors are Gaussian, β̂OLS is also the MLE

Page 4:

Review of least squares

I Linear regression is often simpler to describe using linear algebra notation

I Let Y = (Y1, ..., Yn)T be the response vector and X be the n × (p + 1) matrix of covariates

I Then the mean of Y is Xβ and the least squares solution is

$$\hat\beta_{OLS} = \arg\min_\beta\, (Y - X\beta)^T (Y - X\beta) = (X^T X)^{-1} X^T Y$$

I If the errors are Gaussian then the sampling distribution is

$$\hat\beta_{OLS} \sim \text{Normal}\left[\beta,\ \sigma^2 (X^T X)^{-1}\right]$$

I If the variance σ² is estimated using the mean squared residual error then the sampling distribution is multivariate t
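To make the identity concrete, the closed-form solution can be checked against lm() on simulated data. A minimal R sketch (the data and names here are illustrative, not from the notes):

    # Simulate a small regression problem and compare the closed-form
    # OLS solution with the estimates returned by lm()
    set.seed(1)
    n <- 100; p <- 3
    X <- cbind(1, matrix(rnorm(n * p), n, p))  # n x (p+1) design, first column = intercept
    Y <- drop(X %*% c(1, 0.5, -0.5, 0) + rnorm(n))

    beta_hat <- drop(solve(t(X) %*% X, t(X) %*% Y))  # (X'X)^{-1} X'Y
    coef(lm(Y ~ X - 1))                              # identical estimates from lm()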

Page 5:

Bayesian regression

I The likelihood remains

$$Y_i \sim \text{Normal}(\beta_0 + X_{i1}\beta_1 + \dots + X_{ip}\beta_p,\ \sigma^2)$$

independently for observations i = 1, ..., n

I As with a least squares analysis, it is crucial to verify this is appropriate using qq-plots, added variable plots, etc.

I A Bayesian analysis also requires priors for β and σ

I We will focus on prior specification since this piece is uniquely Bayesian.

Page 6:

Priors

I For the purpose of setting priors, it is helpful to standardize both the response and each covariate to have mean zero and variance one.

I Many priors for β have been considered:

1. Improper priors

2. Gaussian priors

3. Double exponential priors

4. Many, many more...

Page 7:

Improper priors

I The Jeffreys prior is flat: p(β) = 1

I This is improper, but the posterior is proper under the same conditions required by least squares

I If σ is known then

$$\beta \mid Y \sim \text{Normal}\left[\hat\beta_{OLS},\ \sigma^2 (X^T X)^{-1}\right]$$

I See “Post beta” in http://www4.stat.ncsu.edu/~reich/ABA/Derivations7.pdf

I Therefore, the results should be similar to least squares

I How are they different?
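When σ is known, posterior draws are direct samples from this multivariate normal. A minimal sketch using MASS::mvrnorm, reusing X, Y, and beta_hat from the earlier snippet and treating sigma as known (an assumption for illustration):

    library(MASS)  # for mvrnorm

    sigma <- 1                                  # assumed known for this sketch
    XtX_inv <- solve(t(X) %*% X)
    post <- mvrnorm(5000, mu = beta_hat, Sigma = sigma^2 * XtX_inv)

    colMeans(post)                              # posterior means (= beta_hat up to MC error)
    apply(post, 2, quantile, c(0.025, 0.975))   # 95% credible intervals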

Page 8:

Improper priors

I Of course we rarely know σ

I Typically the error variance follows an InvGamma(a, b) prior with a and b set to be small, say a = b = 0.01.

I In this case the posterior of β follows a multivariate t centered on β̂OLS

I Again, the results are similar to OLS

I The objective Bayes Jeffreys prior for θ = (β, σ) is

$$p(\beta, \sigma^2) = \frac{1}{\sigma^2}$$

which is the limit as a, b → 0
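Because both full conditionals are standard (Gaussian for β, inverse gamma for σ²), a small Gibbs sampler is easy to write. A minimal sketch, assuming a flat prior on β and an InvGamma(a, b) prior on σ², and continuing with X, Y, beta_hat, and XtX_inv from the snippets above:

    a <- b <- 0.01                           # vague InvGamma hyperparameters
    S <- 5000
    k <- ncol(X)
    beta_samp <- matrix(NA, S, k)
    sig2_samp <- numeric(S)

    sig2 <- 1                                # initial value
    for (s in 1:S) {
      # beta | sig2, Y ~ Normal(beta_hat, sig2 * (X'X)^{-1})
      beta <- MASS::mvrnorm(1, beta_hat, sig2 * XtX_inv)
      # sig2 | beta, Y ~ InvGamma(a + n/2, b + SSE/2)
      SSE  <- sum((Y - X %*% beta)^2)
      sig2 <- 1 / rgamma(1, shape = a + n/2, rate = b + SSE/2)
      beta_samp[s, ] <- beta
      sig2_samp[s]   <- sig2
    }

Marginally, the β draws from this sampler follow the multivariate t posterior described above.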

Page 9:

Multivariate normal prior

I Another common prior is Zellner’s g-prior

$$\beta \sim \text{Normal}\left[0,\ \frac{\sigma^2}{g} (X^T X)^{-1}\right]$$

I This prior is proper assuming X is full rank

I The posterior mean is

$$\frac{1}{1+g}\, \hat\beta_{OLS}$$

I This shrinks the least squares estimate towards zero

I g controls the amount of shrinkage

I g = 1/n is common, and called the unit information prior
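Since the posterior mean is just the OLS estimate scaled by 1/(1 + g), the effect of the g-prior is easy to inspect numerically; a minimal sketch continuing from the earlier snippets:

    g <- 1 / n                                  # unit information prior
    post_mean <- beta_hat / (1 + g)             # posterior mean under the g-prior

    cbind(OLS = beta_hat, g_prior = post_mean)  # mild shrinkage towards zero when g is small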

Page 10:

Univariate Gaussian priors

I If there are many covariates or the covariates are collinear, then β̂OLS is unstable

I Independent priors can counteract collinearity

$$\beta_j \sim \text{Normal}(0,\ \sigma^2/g)$$

independent over j

I The posterior mode is

$$\arg\min_\beta \sum_{i=1}^n (Y_i - \mu_i)^2 + g \sum_{j=1}^p \beta_j^2$$

I In classical statistics, this is known as the ridge regression solution and is used to stabilize the least squares solution
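The posterior mode has the closed form (X'X + gI)^{-1} X'Y, i.e. the ridge estimate. A minimal sketch on standardized data (simulated here purely for illustration; with standardization the intercept can be dropped):

    # Standardized covariates and response (simulated)
    set.seed(2)
    n2 <- 50; p2 <- 10
    Xs <- scale(matrix(rnorm(n2 * p2), n2, p2))
    Ys <- drop(scale(Xs[, 1] - Xs[, 2] + rnorm(n2)))

    g <- 5
    beta_ols   <- drop(solve(t(Xs) %*% Xs, t(Xs) %*% Ys))
    beta_ridge <- drop(solve(t(Xs) %*% Xs + g * diag(p2), t(Xs) %*% Ys))

    cbind(OLS = beta_ols, ridge = beta_ridge)   # ridge shrinks coefficients towards zero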

Page 11:

BLASSO

I An increasingly popular prior is the double exponential or Bayesian LASSO prior

I The prior is βj ∼ DE(τ) which has PDF

$$f(\beta) \propto \exp\left(-\frac{|\beta|}{\tau}\right)$$

I The square in the Gaussian prior is replaced with an absolute value

I The shape of the PDF is thus more peaked at zero (next slide)

I The BLASSO prior favors settings where there are many βj near zero and a few large βj

I That is, p is large but most of the covariates are noise

Page 12:

BLASSO

[Figure: Gaussian and BLASSO prior densities plotted against β; the BLASSO prior is more peaked at zero.]

Page 13:

BLASSO

I The posterior mode is

$$\arg\min_\beta \sum_{i=1}^n (Y_i - \mu_i)^2 + g \sum_{j=1}^p |\beta_j|$$

I In classical statistics, this is known as the LASSO solution

I It is popular because it adds stability by shrinking estimates towards zero, and also sets some coefficients to zero

I Covariates with coefficients set to zero can be removed

I Therefore, LASSO performs variable selection and estimation simultaneously
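In classical software the LASSO solution is typically computed with the glmnet package. A minimal sketch reusing the standardized Xs and Ys from the ridge snippet (glmnet parameterizes the penalty by lambda, which scales differently from the g above; the value here is purely illustrative):

    library(glmnet)

    fit <- glmnet(Xs, Ys, alpha = 1)   # alpha = 1 selects the LASSO penalty
    coef(fit, s = 0.1)                 # at this penalty some coefficients are exactly zero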

Page 14:

Computing

I With flat or Gaussian (with fixed prior variance) priors the posterior is available in closed form and Monte Carlo sampling is not needed

I With normal priors all full conditionals are Gaussian or inverse gamma, and so Gibbs sampling is simple and fast

I JAGS works well, but there are R (and SAS and others) packages dedicated just to Bayesian linear regression that are preferred for big/hard problems

I BLR is probably the most common

I http://www4.stat.ncsu.edu/~reich/ABA/code/regJAGS
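A minimal rjags sketch of this model, in the spirit of the linked regJAGS example rather than a copy of it (the vague priors and variable names are illustrative choices):

    library(rjags)

    model_string <- "model{
      for (i in 1:n) {
        Y[i]  ~ dnorm(mu[i], taue)        # Gaussian likelihood with precision taue
        mu[i] <- inprod(X[i, ], beta[])
      }
      for (j in 1:k) { beta[j] ~ dnorm(0, 0.001) }  # vague Gaussian priors
      taue  ~ dgamma(0.01, 0.01)                    # vague prior on error precision
      sigma <- 1 / sqrt(taue)
    }"

    dat   <- list(Y = Ys, X = Xs, n = nrow(Xs), k = ncol(Xs))
    model <- jags.model(textConnection(model_string), data = dat, n.chains = 2)
    update(model, 1000)                              # burn-in
    samps <- coda.samples(model, c("beta", "sigma"), n.iter = 5000)
    summary(samps)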

Page 15:

Computing for the BLASSO

I For the BLASSO prior the full conditionals are more complicated

I There is a trick to make all full conditionals conjugate so that Gibbs sampling can be used

I Metropolis sampling works fine too

I BLR works well for BLASSO and is super fast

I JAGS can handle this as well:

I http://www4.stat.ncsu.edu/~reich/ABA/code/BLASSO
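In JAGS, switching the sketch above to the BLASSO only requires changing the prior block, since the double exponential is available as ddexp (the hyperprior on the DE precision below is an illustrative choice):

    for (j in 1:k) { beta[j] ~ ddexp(0, taub) }   # double exponential (BLASSO) prior
    taub ~ dgamma(0.01, 0.01)                     # vague hyperprior on the DE precision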

Page 16:

Summarizing the results

I The standard summary is a table with marginal means and 95% intervals for each βj

I This becomes unwieldy for large p

I Picking a subset of covariates is a crucial step in a linear regression analysis.

I We will discuss this later in the course.

I Common methods include cross-validation, informationcriteria, and stochastic search.

Page 17:

Logistic regression

I Other forms of regression follow naturally from linear regression

I For example, for binary responses Yi ∈ {0, 1} we might use logistic regression

$$\text{logit}[\text{Prob}(Y_i = 1)] = \eta_i = \beta_0 + \beta_1 X_{i1} + \dots + \beta_p X_{ip}$$

I The logit link is the log-odds: logit(x) = log[x/(1 − x)]

I Then βj represents the increase in the log odds of an event corresponding to a one-unit increase in covariate j

I The expit transformation expit(x) = exp(x)/[1 + exp(x)] is the inverse, and

$$\text{Prob}(Y_i = 1) = \text{expit}(\eta_i) \in [0, 1]$$

Page 18:

Logistic regression

I Bayesian logistic regression requires a prior for β

I All of the priors we have discussed for linear regression (Zellner, BLASSO, etc.) apply

I Computationally the full conditional distributions are no longer conjugate and so we must use Metropolis sampling

I The R function MCMClogit does this efficiently

I It is fast in JAGS too, for example http://www4.stat.ncsu.edu/~reich/ABA/code/GLM
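A minimal JAGS sketch of the logistic model, again in the spirit of the linked GLM example rather than a copy of it (priors are illustrative; Y here is assumed to be a binary response vector):

    model_string <- "model{
      for (i in 1:n) {
        Y[i] ~ dbern(prob[i])                    # Bernoulli likelihood
        logit(prob[i]) <- inprod(X[i, ], beta[]) # logit link
      }
      for (j in 1:k) { beta[j] ~ dnorm(0, 0.01) }  # vague Gaussian priors
    }"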

Page 19:

Predictions

I Say we have a new covariate vector Xnew and we would like to predict the corresponding response Ynew

I A plug-in approach would fix β and σ at their posterior means β̂ and σ̂ to make predictions

$$Y_{new} \mid \hat\beta, \hat\sigma \sim \text{Normal}(X_{new}\hat\beta,\ \hat\sigma^2)$$

I However, this plug-in approach suppresses uncertainty about β and σ

I Therefore these prediction intervals will be slightly too narrow, leading to undercoverage

Page 20:

Posterior predictive distribution (PPD)

I We should really account for all uncertainty when making predictions, including our uncertainty about β and σ

I We really want the PPD

$$p(Y_{new} \mid Y) = \int f(Y_{new}, \beta, \sigma \mid Y)\, d\beta\, d\sigma = \int f(Y_{new} \mid \beta, \sigma)\, f(\beta, \sigma \mid Y)\, d\beta\, d\sigma$$

I Marginalizing over the model parameters accounts for their uncertainty

I The concept of the PPD applies generally (e.g., logistic regression) and means the distribution of the predicted value marginalized over the model parameters

Page 21:

Posterior predictive distribution (PPD)

I MCMC naturally gives draws from Ynew’s PPD

I For MCMC iteration t we have β(t) and σ(t)

I For MCMC iteration t we sample

$$Y_{new}^{(t)} \sim \text{Normal}\left(X_{new}\beta^{(t)},\ \sigma^{(t)2}\right)$$

I $Y_{new}^{(1)}, \dots, Y_{new}^{(S)}$ are samples from the PPD

I This is an example of the claim that “Bayesian methodsnaturally quantify uncertainty”

I http://www4.stat.ncsu.edu/~reich/ABA/code/Predict
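A minimal sketch of PPD sampling, assuming beta_samp and sig2_samp are the Gibbs draws from the earlier sketch and x_new is a hypothetical new covariate vector:

    x_new <- c(1, 0.5, -0.5, 2)   # hypothetical new covariates (leading 1 = intercept)

    # One PPD draw per MCMC iteration: plug in beta^(t), sigma^(t), then add noise
    y_new <- rnorm(nrow(beta_samp),
                   mean = drop(beta_samp %*% x_new),
                   sd   = sqrt(sig2_samp))

    quantile(y_new, c(0.025, 0.975))  # 95% posterior predictive interval

Unlike the plug-in interval, these draws vary both the noise and the parameters, so the interval reflects all sources of uncertainty.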
