
Generalized Linear Models

Objectives:

• Systematic + Random.

• Exponential family.

• Maximum likelihood estimation & inference.

45 Heagerty, Bio/Stat 571


Generalized Linear Models

• Models for independent observations Yi, i = 1, 2, . . . , n.

• Components of a GLM:

◦ Random component

Yi ∼ f(yi; θi, φ),   f ∈ exponential family

46 Heagerty, Bio/Stat 571


◦ Systematic component

ηi = Xiβ

ηi : linear predictor
Xi : (1 × p) covariate vector
β : (p × 1) regression coefficient

◦ Link function

E(Yi | Xi) = µi

g(µi) = Xiβ

g(·) : link function

47 Heagerty, Bio/Stat 571


Generalized Linear Models

• GLMs generalize the standard linear model:

Yi = Xiβ + εi

◦ Random: Normal distribution

εi ∼ N(0, σ²)

◦ Systematic: linear combination of covariates

ηi = Xiβ

◦ Link: identity function

ηi = µi
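A quick check of this correspondence (a minimal R sketch with simulated data, not anything from these notes): lm() and glm() with the Gaussian family and identity link return the same coefficient estimates.

## Minimal sketch: the standard linear model as a GLM
## (Normal random component, identity link); simulated data.
set.seed(571)
n <- 100
x <- rnorm(n)
y <- 1 + 2 * x + rnorm(n, sd = 0.5)

lm.fit  <- lm(y ~ x)
glm.fit <- glm(y ~ x, family = gaussian(link = "identity"))

cbind(lm = coef(lm.fit), glm = coef(glm.fit))   # identical estimates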

48 Heagerty, Bio/Stat 571


Generalized Linear Models

• GLMs extend usefully to overdispersed and correlated data:

◦ GEE: marginal models / semi-parametric estimation & inference

◦ GLMM: conditional models / likelihood estimation & inference

49 Heagerty, Bio/Stat 571


Exponential Family

(⋆)   f(y; θ, φ) = exp{ [ yθ − b(θ) ] / a(φ) + c(y, φ) }

θ = canonical parameter

φ = fixed (known) scale parameter

Properties: If Y ∼ f(y; θ, φ) as in (⋆), then

E(Y ) = µ = b′(θ)

var(Y ) = b′′(θ) · a(φ)
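As a concrete check (a standard example, not shown on this slide), the Normal distribution N(µ, σ²) can be written in the form (⋆):

f(y; θ, φ) = exp{ [ yµ − µ²/2 ] / σ² − y²/(2σ²) − (1/2) log(2πσ²) }

so θ = µ, b(θ) = θ²/2, a(φ) = σ², and c(y, φ) collects the last two terms. Then b′(θ) = µ = E(Y) and b″(θ) · a(φ) = 1 · σ² = var(Y), as claimed.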

50 Heagerty, Bio/Stat 571


Canonical link function: A function g(·) such that:

η = g(µ) = θ (canonical parameter)

Variance function: A function V (·) such that:

var(Y ) = V (µ) · a(φ)

Usually:  a(φ) = φ/w, where
φ : "scale" parameter
w : (known) weight
(e.g. binomial proportions: φ = 1, w = m)

51 Heagerty, Bio/Stat 571


Examples of GLMs: logistic regression

y = s/m,  where s = number of successes in m trials

f(y; θ, φ) = (m choose s) π^s (1 − π)^{m−s}

           = exp{ [ y · log(π/(1 − π)) + log(1 − π) ] / (1/m) + log(m choose s) }

⇒ θ = log(π/(1 − π))

b(θ) = − log(1 − π) = log[1 + exp(θ)]

µ = b′(θ) = ∂/∂θ log[1 + exp(θ)]
          = exp(θ)/[1 + exp(θ)] = π

52 Heagerty, Bio/Stat 571


g(µ) = log[π/(1− π)] = θ

g : logit, log-odds function

var(y) = π(1 − π) · (1/m)

V(µ) = µ(1 − µ) = π(1 − π)

a(φ) = 1/m   (φ = 1, weight w = m)
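A minimal R sketch of this setup (simulated grouped binomial data, not from these notes): the response is the proportion y = s/m, and the number of trials m enters through the weights argument, matching a(φ) = 1/m.

## Sketch: logistic regression with grouped binomial data y = s/m.
set.seed(571)
n <- 50
x <- rnorm(n)
m <- sample(5:20, n, replace = TRUE)       # trials per observation
p <- plogis(-0.5 + 1.2 * x)                # true success probabilities
s <- rbinom(n, size = m, prob = p)         # successes
y <- s / m                                 # observed proportions

fit <- glm(y ~ x, family = binomial, weights = m)
summary(fit)$coefficients                  # estimates on the logit scale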

53 Heagerty, Bio/Stat 571


• Poisson regression

y = number of events (count)

f(y; θ, φ) = λ^y exp(−λ) / y!

           = exp[ y · log(λ) − λ − log(y!) ]

⇒ θ = log(λ)

b(θ) = λ = exp(θ)

µ = b′(θ) = exp(θ) = λ

g(µ) = θ = log(µ)

g : canonical link is log

54 Heagerty, Bio/Stat 571


• Poisson regression (continued)

var(y) = λ

V(µ) = µ = λ

a(φ) = 1   (φ = 1, w = 1)

◦ Other examples:

◦ gamma, inverse Gaussian (MN, Table 2.1)

◦ some survival models (MN, Chpt. 13)

55 Heagerty, Bio/Stat 571


Example: Seizure data (DHLZ ex. 1.6)

• Clinical trial of progabide, evaluating impact on epileptic seizures.

• Data:

◦ age = patient age in years

◦ base = 8-week baseline seizure count (pre-tx)

◦ tx = 0 if assigned placebo; 1 if assigned progabide

◦ Y1, Y2, Y3, Y4 = seizure counts in 4 two-week periods following treatment administration

• Models:

◦ linear model: Y4 = age + base + tx + ε

◦ Poisson GLM: log(µ4) = age + base + tx
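The two models could be fit in R roughly as below (a sketch only: it assumes a data frame named seize with columns y4, age, base, and trt; these names are guessed from the slide and the plots that follow, where the slide's tx appears as trt).

## Sketch: the linear model and the Poisson GLM for the seizure data.
lmfit  <- lm(y4 ~ age + base + trt, data = seize)
glmfit <- glm(y4 ~ age + base + trt, family = poisson(link = "log"),
              data = seize)

summary(lmfit)$coefficients
summary(glmfit)$coefficients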

56 Heagerty, Bio/Stat 571


Example: Seizure data (DHLZ ex. 1.6)

              linear regression          Poisson regression
            est.     s.e.      Z        est.     s.e.      Z
(Int)      -4.97     3.62   -1.37      0.778    0.285    2.73
age         0.12     0.11    1.07      0.014    0.009    1.64
base        0.31     0.03   11.79      0.022    0.001   20.27
tx         -1.36     1.37   -0.99     -0.270    0.102   -2.66

• Q: should we use log(base) for Poisson regression?

• Q: why does inference regarding significance of TX differ?

57 Heagerty, Bio/Stat 571


[Figure: exploratory plots of the seizure data: scatterplots of y4 vs. base and of log(y4 + 0.5) vs. base, plus histograms of y4 for trt == 0 and trt == 1.]

58 Heagerty, Bio/Stat 571


Seizure Residuals vs. Fitted

[Figure: residuals vs. fitted values: lmfit$resid vs. lmfit$fitted (left panel) and glmfit$resid vs. glmfit$fitted (right panel).]

59 Heagerty, Bio/Stat 571


Seizure Residuals vs. Fitted, using predict()

[Figure: resid(lmfit) vs. predict(lmfit) (left panel) and resid(glmfit) vs. predict(glmfit) (right panel).]
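These panels can be reproduced roughly as follows (a sketch assuming the lmfit and glmfit objects from the earlier seizure sketch). Note that for a glm, predict() returns the linear predictor (log scale here) and resid() returns deviance residuals by default, whereas glmfit$resid on the previous page is the working residual.

## Sketch: residuals vs. predicted values for the two fits.
par(mfrow = c(1, 2))
plot(predict(lmfit), resid(lmfit),
     xlab = "predict(lmfit)", ylab = "resid(lmfit)")
plot(predict(glmfit), resid(glmfit),
     xlab = "predict(glmfit)", ylab = "resid(glmfit)")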

60 Heagerty, Bio/Stat 571


Residual Diagnostics

◦ Used to assess model fit, much as for linear models

• Q-Q plots for residuals (may be hard to interpret for discrete data)

• residual plots:

◦ vs. fitted values

◦ vs. omitted covariates

• assessment of systematic departures

• assessment of variance function

61 Heagerty, Bio/Stat 571


Residual Diagnostics

◦ Types of residuals for GLMs:

1. Pearson residual

r_i^P = (yi − µi) / √V(µi),       Σ_i (r_i^P)² = X²

2. Deviance residual (see resid(fit))

r_i^D = sign(yi − µi) · √di,      Σ_i (r_i^D)² = D(y, µ)

3. Working residual (see fit$resid)

r_i^W = (yi − µi) · ∂ηi/∂µi = Zi − ηi
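In R these three types are available from residuals() (a sketch assuming the glmfit object from the earlier seizure sketch; prior weights are 1 here):

## Sketch: the three residual types for a fitted glm.
r.pearson  <- residuals(glmfit, type = "pearson")    # (y - mu)/sqrt(V(mu))
r.deviance <- residuals(glmfit, type = "deviance")   # sign(y - mu) * sqrt(d_i)
r.working  <- residuals(glmfit, type = "working")    # (y - mu) * d(eta)/d(mu)

sum(r.pearson^2)                                     # Pearson X^2
all.equal(sum(r.deviance^2), deviance(glmfit))       # sums to D(y, mu)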

62 Heagerty, Bio/Stat 571


Fitting GLMs by Maximum Likelihood

Solve score equations:

Uj(β) = ∂ log L / ∂βj = 0,    j = 1, 2, . . . , p

log-likelihood:

log L = Σ_{i=1}^n { [ yi θi − b(θi) ] / ai(φ) + c(yi, φ) }
      = Σ_i log Li

⇒ Uj(β) = ∂ log L / ∂βj
        = Σ_i (∂ log Li / ∂θi) · (∂θi / ∂µi) · (∂µi / ∂ηi) · (∂ηi / ∂βj)

63 Heagerty, Bio/Stat 571


∂ log Li / ∂θi = [ yi − b′(θi) ] / ai(φ) = (yi − µi) / ai(φ)

∂θi / ∂µi = ( ∂µi / ∂θi )^{-1} = 1 / V(µi)

∂ηi / ∂βj = Xij

Therefore,

Uj(β) = Σ_{i=1}^n ( Xij · ∂µi/∂ηi ) · [ ai(φ) · V(µi) ]^{-1} (Yi − µi)

64 Heagerty, Bio/Stat 571


GLM Information Matrix

• Either form:

[In](j, k) = cov[ Uj(β), Uk(β) ]

           = −E( ∂² log L / ∂βj ∂βk )

• Let’s consider the second form...

65 Heagerty, Bio/Stat 571


GLM Information Matrix

[In](j, k) = −E[ ∂Uj(β) / ∂βk ]

           = −E[ Σ_{i=1}^n ∂/∂βk { ( ∂µi/∂βj ) · [ ai(φ) · V(µi) ]^{-1} (Yi − µi) } ]

           = Σ_{i=1}^n ( ∂µi/∂βj ) · [ ai(φ) · V(µi) ]^{-1} ( ∂µi/∂βk )

(justify)

66 Heagerty, Bio/Stat 571


Score and Information

• In vector/matrix form we have:

U(β) = ( U1(β), U2(β), . . . , Up(β) )^T

∂µi/∂β = ( ∂µi/∂β1   ∂µi/∂β2   . . .   ∂µi/∂βp )

        = Xi · ( ∂µi/∂ηi )

67 Heagerty, Bio/Stat 571


Score and Information

U(β) = Σ_{i=1}^n ( ∂µi/∂β )^T · [ ai(φ) · V(µi) ]^{-1} (Yi − µi)

and

In = Σ_{i=1}^n ( ∂µi/∂β )^T · [ ai(φ) · V(µi) ]^{-1} ( ∂µi/∂β )

68 Heagerty, Bio/Stat 571


Fisher Scoring

Goal: Solve the score equations

U(β) = 0

Iterative estimation is required for most GLMs. The score equations can be solved using Newton-Raphson (which uses the observed derivative of the score) or Fisher Scoring (which uses the expected derivative of the score, i.e. −In).

69 Heagerty, Bio/Stat 571


Fisher Scoring

Algorithm:

• Pick an initial value: β^(0).

• For j → (j + 1), update β^(j) via

β^(j+1) = β^(j) + ( In^(j) )^{-1} U( β^(j) )

• Evaluate convergence using changes in log L or ‖β(j+1) − β(j)‖.

• Iterate until convergence criterion is satisfied.
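A minimal sketch of the algorithm in R, for a Poisson GLM with log link on simulated data (canonical link, so this coincides with Newton-Raphson; not the course's implementation):

## Fisher scoring for Poisson regression with log link (simulated data).
set.seed(571)
n <- 200
X <- cbind(1, rnorm(n))                     # n x p design matrix
beta.true <- c(0.5, 0.3)
y <- rpois(n, exp(drop(X %*% beta.true)))

beta <- rep(0, ncol(X))                     # beta^(0)
for (j in 1:25) {
  mu   <- exp(drop(X %*% beta))             # d mu/d eta = mu, V(mu) = mu, a(phi) = 1
  U    <- t(X) %*% (y - mu)                 # score U(beta)
  I    <- t(X) %*% (X * mu)                 # expected information I_n
  step <- drop(solve(I, U))
  beta <- beta + step
  if (sqrt(sum(step^2)) < 1e-8) break       # convergence on ||beta(j+1) - beta(j)||
}

cbind(fisher = beta, glm = coef(glm(y ~ X[, 2], family = poisson)))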

70 Heagerty, Bio/Stat 571


Comments on Fisher Scoring:

1. IWLS is equivalent to Fisher Scoring (Biostat 570).

2. Observed and expected information are equivalent for canonical links.

3. Score equations are an example of an estimating function (more on that to come!).

4. Q: What assumptions make E[U(β)] = 0?

5. Q: What is the relationship between In and Σ_i Ui Ui^T ?

6. Q: What is a 1-step approximation to ∆β(−i)?

71 Heagerty, Bio/Stat 571


Inference for GLMs

Review of asymptotic likelihood theory:

β = ( β1 , β2 ),   β1 : (q × 1),   β2 : ((p − q) × 1)

Goal: Test H0 : β2 = β2^0

(1) Likelihood Ratio Test:

2 [ log L(β̂1, β̂2) − log L(β̂1^0, β2^0) ] ∼ χ²(df = p − q)

where β̂1^0 is the MLE of β1 with β2 fixed at β2^0.

72 Heagerty, Bio/Stat 571


Inference for GLMs

(2) Score Test:

U(β) = ( U1(β), U2(β) ),   U1 : (q × 1),   U2 : ((p − q) × 1)

Evaluating at β^0 = ( β̂1^0, β2^0 ):

U2(β^0)^T { cov[ U2(β^0) ] }^{-1} U2(β^0) ∼ χ²(df = p − q)

(3) Wald Test:

( β̂2 − β2^0 )^T { cov(β̂2) }^{-1} ( β̂2 − β2^0 ) ∼ χ²(df = p − q)
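The three tests can be carried out in R roughly as below (a sketch for the null hypothesis of no treatment effect in the Poisson model, again assuming the hypothetical seize data frame; the Rao score test option requires a reasonably recent version of R):

## Sketch: LRT, score, and Wald tests for H0: no trt effect.
fit0 <- glm(y4 ~ age + base,       family = poisson, data = seize)  # reduced
fit1 <- glm(y4 ~ age + base + trt, family = poisson, data = seize)  # full

anova(fit0, fit1, test = "Chisq")       # (1) likelihood ratio test
anova(fit0, fit1, test = "Rao")         # (2) score test

z <- coef(summary(fit1))["trt", "z value"]
c(wald = z^2, p.value = pchisq(z^2, df = 1, lower.tail = FALSE))    # (3) Wald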

73 Heagerty, Bio/Stat 571


Measures of Discrepancy

There are 2 primary measures:

• deviance

• Pearson’s X²

Deviance: Assume ai(φ) = φ/mi

(e.g. normal: φ; binomial: 1/mi; Poisson: 1)

log L(β) = Σ_{i=1}^n log fi(yi; θi, φ)

         = Σ_i { (mi/φ) [ yi θi − b(θi) ] + ci(yi, φ) }

74 Heagerty, Bio/Stat 571


Now consider log L as a function of µ, using the relationship b′(θ) = µ:

l(µ, φ; y) = Σ_i { (mi/φ) [ yi · θ(µi) − b(θ(µi)) ] + ci(yi, φ) }

The deviance is:

D(y, µ) = 2 · φ · [ l(y, φ; y) − l(µ, φ; y) ]

        = 2 · Σ_i mi { yi · [ θ(yi) − θ(µi) ] − ( b[θ(yi)] − b[θ(µi)] ) }

75 Heagerty, Bio/Stat 571


Deviance

Deviance generalizes the residual sum of squares for linear models:

Model 1:  β = ( β1 , β2 ),   β1 : (q × 1),  β2 : ((p − q) × 1);   fitted means µ̂1

Model 2:  β = ( β1 , β2^0 ),  with β2 fixed at β2^0;   fitted means µ̂2

76 Heagerty, Bio/Stat 571


Deviance

Linear Model:

[ SSE(Model 2) − SSE(Model 1) ] / σ² ∼ χ²(df = p − q)

GLM:

[ D(y, µ̂2) − D(y, µ̂1) ] / φ ∼ χ²(df = p − q)

77 Heagerty, Bio/Stat 571


Examples:

1. Normal: log f(yi; θi, φ) = − (yi − µi)² / (2φ) + C

D(y, µ) = Σ_i (yi − µi)² = SSE

2. Poisson: log f(yi; θi, φ) = yi · log(µi) − µi + C

D(y, µ) = 2 × Σ_i [ yi · log(yi/µi) − (yi − µi) ]

3. Binomial: log f(yi; θi, φ) = mi [ yi · log( µi/(1 − µi) ) + log(1 − µi) ] + C

D(y, µ) = 2 × Σ_i mi [ yi · log(yi/µi) + (1 − yi) · log( (1 − yi)/(1 − µi) ) ]
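The Poisson formula is easy to verify numerically (a sketch assuming the glmfit object from the earlier seizure sketch; the convention yi · log(yi/µi) = 0 when yi = 0 is used):

## Check the Poisson deviance formula against deviance().
y  <- glmfit$y
mu <- fitted(glmfit)
D  <- 2 * sum(ifelse(y == 0, 0, y * log(y / mu)) - (y - mu))
all.equal(D, deviance(glmfit))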

78 Heagerty, Bio/Stat 571


Pearson’s X²

Assume: var(Yi) = (φ/mi) · V(µi)

Define: X² = Σ_i (yi − µi)² / [ V(µi)/mi ]

Examples:

1. Normal: X² = SSE

2. Poisson: X² = Σ_i (yi − µi)²/µi (look familiar?)

3. Binomial: X² = Σ_i mi (yi − µi)²/[ µi(1 − µi) ]

(⋆⋆) If the model is correct (mean and variance) then,

X² / (n − p) ≈ φ

79 Heagerty, Bio/Stat 571


e.g.

◦ Normal: SSE/(n − p) ≈ σ² = φ

◦ Poisson: X²/(n − p) ≈ 1 = φ

◦ Binomial: X²/(n − p) ≈ 1 = φ

80 Heagerty, Bio/Stat 571


Example: Seizure data (DHLZ ex. 1.6)

X² / (n − p) = 136.64 / (59 − 4) = 2.48

D(y, µ) / (n − p) = 147.02 / (59 − 4) = 2.67

Q: Poisson???
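One way to quantify this in R (a sketch assuming the glmfit object from the earlier seizure sketch): estimate φ by Pearson's X²/(n − p), and compare with the dispersion reported by a quasi-Poisson fit, which uses the same estimate.

## Sketch: estimated dispersion from the Poisson fit.
X2      <- sum(residuals(glmfit, type = "pearson")^2)
phi.hat <- X2 / df.residual(glmfit)        # cf. 2.48 on this slide
phi.hat

qfit <- update(glmfit, family = quasipoisson)
summary(qfit)$dispersion                   # same Pearson-based estimate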

81 Heagerty, Bio/Stat 571


Summary:

• GLMs are applicable to a range of univariate outcomes.

• Systematic variation (regression) and random variation (variance function, likelihood).

• Score equations of simple form.

• Inference using:

◦ likelihood ratios (deviance)

◦ score statistics

◦ Wald statistics

• Model checking: regression structure / variance form V(µ).

82 Heagerty, Bio/Stat 571


References:

McCullagh, P. and Nelder, J.A. Generalized Linear Models, Second Edition, Chapman and Hall, 1989.

Fahrmeir, L. and Tutz, G. Multivariate Statistical Modelling Based on Generalized Linear Models, Springer-Verlag, 1991.

(see chapter 2)

McCulloch, C.E. and Searle, S.R. Generalized, Linear, and Mixed Models, Wiley, 2001.

(see chapter 5)

Diggle, P., Heagerty, P.J., Liang, K-Y. and Zeger, S.L. Longitudinal Data Analysis, Second Edition, Oxford, 2002.

(see appendix A)

83 Heagerty, Bio/Stat 571