
36-401 Modern Regression HW #9 Solutions
DUE: 12/1/2017 at 3PM

Problem 1 [44 points]

(a) (7 pts.)

Let
$$\mathrm{SSE} = \sum_{i=1}^n (Y_i - \beta X_i)^2.$$
Then
$$\frac{\partial}{\partial \beta}\,\mathrm{SSE} = -2\sum_{i=1}^n (Y_i - \beta X_i)X_i.$$
Set $\frac{\partial}{\partial \beta}\,\mathrm{SSE} = 0$. Then,
$$-2\sum_{i=1}^n (Y_i - \beta X_i)X_i = 0 \;\Longrightarrow\; \sum_{i=1}^n (Y_i X_i - \beta X_i^2) = 0 \;\Longrightarrow\; \beta = \frac{\sum_{i=1}^n Y_i X_i}{\sum_{i=1}^n X_i^2}.$$
And
$$\frac{\partial^2}{\partial \beta^2}\,\mathrm{SSE} = 2\sum_{i=1}^n X_i^2 > 0,$$
so $\frac{\sum_{i=1}^n Y_i X_i}{\sum_{i=1}^n X_i^2}$ is indeed the unique least squares estimator, denoted $\hat\beta$.
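As a quick numerical sanity check (not part of the original solution; the data here are simulated for illustration), the closed-form estimator from (a) can be compared against R's `lm()` with the intercept suppressed:

```r
# Simulated data; the true slope of 3 is an arbitrary illustrative choice.
set.seed(1)
x <- runif(50)
y <- 3 * x + rnorm(50)

# Closed-form solution from part (a): sum(Yi*Xi) / sum(Xi^2).
beta.formula <- sum(y * x) / sum(x^2)

# "- 1" removes the intercept, matching the model Y = beta*X + eps.
beta.lm <- unname(coef(lm(y ~ x - 1))[1])

all.equal(beta.formula, beta.lm)  # TRUE
```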

(b) (7 pts.)

Let
$$\mathrm{WSSE} = \sum_{i=1}^n \frac{(Y_i - \beta X_i)^2}{\sigma_i^2} = \sum_{i=1}^n \left(\frac{Y_i - \beta X_i}{\sigma_i}\right)^2.$$
Then
$$\frac{\partial}{\partial \beta}\,\mathrm{WSSE} = -2\sum_{i=1}^n \frac{Y_i X_i - \beta X_i^2}{\sigma_i^2}.$$
Set $\frac{\partial}{\partial \beta}\,\mathrm{WSSE} = 0$. Then,
$$\beta = \frac{\sum_{i=1}^n Y_i X_i/\sigma_i^2}{\sum_{i=1}^n X_i^2/\sigma_i^2}.$$
And
$$\frac{\partial^2}{\partial \beta^2}\,\mathrm{WSSE} = 2\sum_{i=1}^n \frac{X_i^2}{\sigma_i^2} > 0,$$
so $\frac{\sum_{i=1}^n Y_i X_i/\sigma_i^2}{\sum_{i=1}^n X_i^2/\sigma_i^2}$ is indeed the unique weighted least squares estimator, denoted $\tilde\beta$.
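The same sanity check works for the weighted estimator (again a sketch on simulated data, not part of the original solution): with known $\sigma_i$, the closed form from (b) should agree with `lm()` run with weights $1/\sigma_i^2$.

```r
# Heteroskedastic simulated data with known noise SDs (illustrative choice).
set.seed(2)
n <- 50
x <- runif(n)
sig <- 0.5 + x
y <- 3 * x + rnorm(n, sd = sig)

# Closed-form WLS solution from part (b).
beta.wls <- sum(y * x / sig^2) / sum(x^2 / sig^2)

# lm() minimizes sum(w_i*(y_i - beta*x_i)^2); w_i = 1/sigma_i^2 recovers WSSE.
beta.lm <- unname(coef(lm(y ~ x - 1, weights = 1 / sig^2))[1])

all.equal(beta.wls, beta.lm)  # TRUE
```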

(c) (7 pts.)

$$E[\hat\beta] = E\!\left[\frac{\sum_{i=1}^n Y_i X_i}{\sum_{i=1}^n X_i^2}\right] = \frac{\sum_{i=1}^n X_i E[Y_i]}{\sum_{i=1}^n X_i^2} = \frac{\beta \sum_{i=1}^n X_i^2}{\sum_{i=1}^n X_i^2} = \beta$$

$$\mathrm{Var}(\hat\beta) = \mathrm{Var}\!\left(\frac{\sum_{i=1}^n Y_i X_i}{\sum_{i=1}^n X_i^2}\right) = \frac{\sum_{i=1}^n X_i^2 \sigma_i^2}{\left(\sum_{i=1}^n X_i^2\right)^2}$$

$$E[\tilde\beta] = E\!\left[\frac{\sum_{i=1}^n Y_i X_i/\sigma_i^2}{\sum_{i=1}^n X_i^2/\sigma_i^2}\right] = \frac{1}{\sum_{i=1}^n X_i^2/\sigma_i^2}\sum_{i=1}^n \frac{X_i E[Y_i]}{\sigma_i^2} = \frac{\beta\sum_{i=1}^n X_i^2/\sigma_i^2}{\sum_{i=1}^n X_i^2/\sigma_i^2} = \beta$$

$$\mathrm{Var}(\tilde\beta) = \mathrm{Var}\!\left(\frac{\sum_{i=1}^n Y_i X_i/\sigma_i^2}{\sum_{i=1}^n X_i^2/\sigma_i^2}\right) = \frac{1}{\left(\sum_{i=1}^n X_i^2/\sigma_i^2\right)^2}\,\mathrm{Var}\!\left(\sum_{i=1}^n \frac{Y_i X_i}{\sigma_i^2}\right) = \frac{1}{\left(\sum_{i=1}^n X_i^2/\sigma_i^2\right)^2}\sum_{i=1}^n \frac{X_i^2\,\mathrm{Var}(Y_i)}{\sigma_i^4} = \frac{\sum_{i=1}^n X_i^2/\sigma_i^2}{\left(\sum_{i=1}^n X_i^2/\sigma_i^2\right)^2} = \frac{1}{\sum_{i=1}^n X_i^2/\sigma_i^2}$$

(d) (8 pts.)

$$\mathrm{Var}(\tilde\beta) = \frac{1}{\sum_{i=1}^n X_i^2/\sigma_i^2} = \frac{\sum_{i=1}^n X_i^2\sigma_i^2}{\sum_{i=1}^n \frac{X_i^2}{\sigma_i^2}\,\sum_{i=1}^n X_i^2\sigma_i^2} = \frac{\sum_{i=1}^n X_i^2\sigma_i^2}{\sum_{i=1}^n \left(\frac{X_i}{\sigma_i}\right)^2 \sum_{i=1}^n (X_i\sigma_i)^2} \le \frac{\sum_{i=1}^n X_i^2\sigma_i^2}{\left(\sum_{i=1}^n X_i^2\right)^2} = \mathrm{Var}(\hat\beta),$$
where the inequality comes from Cauchy–Schwarz: taking $a_i = X_i/\sigma_i$ and $b_i = X_i\sigma_i$ gives $\left(\sum_{i=1}^n X_i^2\right)^2 = \left(\sum_{i=1}^n a_i b_i\right)^2 \le \sum_{i=1}^n a_i^2 \sum_{i=1}^n b_i^2 = \sum_{i=1}^n \frac{X_i^2}{\sigma_i^2}\sum_{i=1}^n X_i^2\sigma_i^2$.
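A numerical spot check of this inequality (a sketch with arbitrary positive design points and variances, not part of the original solution):

```r
# Arbitrary design points and heteroskedastic variances.
set.seed(3)
x <- runif(20)
sig2 <- runif(20, 0.1, 2)

var.ols <- sum(x^2 * sig2) / (sum(x^2))^2  # Var of the OLS estimator, part (c)
var.wls <- 1 / sum(x^2 / sig2)             # Var of the WLS estimator, part (c)

var.wls <= var.ols  # TRUE, as Cauchy-Schwarz guarantees
```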

(e) (7 pts.)

Above we showed
$$\tilde\beta = \frac{1}{\sum_{i=1}^n X_i^2/\sigma_i^2}\sum_{i=1}^n \frac{X_i}{\sigma_i^2}\,Y_i.$$
$\tilde\beta$ is a linear combination of Normal random variables $Y_i$, and thus is also Normally distributed. We have already found the mean and variance in part (c). Therefore,
$$\tilde\beta \sim N\!\left(\beta,\ \frac{1}{\sum_{i=1}^n X_i^2/\sigma_i^2}\right)$$
and a $1-\alpha$ confidence interval for $\beta$ is
$$\tilde\beta \pm z_{\alpha/2}\sqrt{\frac{1}{\sum_{i=1}^n X_i^2/\sigma_i^2}}.$$
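A simulation sketch of this interval's coverage (simulated data, all names illustrative; not part of the original solution): with Normal errors and known $\sigma_i$, the 95% interval should cover the true $\beta$ about 95% of the time.

```r
# Coverage check for the 95% interval from part (e).
set.seed(4)
n <- 100
beta <- 3
x <- runif(n)
sig <- x + 0.1                                # known noise SDs (illustrative)

covered <- replicate(2000, {
  y <- beta * x + rnorm(n, sd = sig)
  b <- sum(y * x / sig^2) / sum(x^2 / sig^2)  # WLS estimate
  se <- sqrt(1 / sum(x^2 / sig^2))            # its exact standard error
  abs(b - beta) <= qnorm(0.975) * se          # did the interval cover beta?
})
mean(covered)  # close to 0.95
```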


(f) (8 pts.)

set.seed(100)
n = 100
b.OLS <- rep(NA, 1000)
b.WLS.known.var <- rep(NA, 1000)
b.WLS.unknown.var <- rep(NA, 1000)
for (itr in 1:1000) {
  x = runif(n)
  s = x^2
  y = 3*x + rnorm(n, mean = 0, sd = s)

  out <- lm(y ~ x - 1)
  b.OLS[itr] <- out$coefficients[1]
  out2 <- lm(y ~ x - 1, weights = 1/s^2)
  b.WLS.known.var[itr] <- out2$coefficients[1]
  u = log((resid(out))^2)
  tmp = loess(u ~ x)
  s2 = exp(tmp$fitted)

  w = 1/s2
  out3 = lm(y ~ x - 1, weights = w)
  b.WLS.unknown.var[itr] <- out3$coefficients[1]
}


[Figure: histograms of the 1000 simulated estimates — "OLS beta", "WLS beta, known s", and "WLS beta, unknown s" — each plotted over the range 2.5 to 3.5.]

Table 1: Variances of Estimators

      OLS  WLS.known.s  WLS.unknown.s
0.0131942     6.93e-05      0.0008159

If we are dealing with a highly heteroskedastic data set such as this, and do not know the variance of the noise, using weighted least squares based on estimated variances is a better strategy.


Problem 2 [28 points]

(a) (7 pts.)

[Figure: pairs plots of Day, n, ybar, and SD for each shoot type, followed by a scatterplot of ybar against Day, colored by Type (Type 0 / Type 1).]

A common intercept looks feasible; however, ybar appears to increase at a faster rate for Type 1.

(b) (7 pts.)

##
## Call:
## lm(formula = ybar ~ Day * Type, data = allshoots)
##
## Residuals:
##      Min       1Q   Median       3Q      Max
## -1.74747 -0.21000  0.08631  0.35212  0.89507
##
## Coefficients:
##             Estimate Std. Error t value Pr(>|t|)
## (Intercept) 9.475879   0.230981  41.025  < 2e-16 ***
## Day         0.187238   0.003696  50.655  < 2e-16 ***
## Type        0.339406   0.329997   1.029    0.309
## Day:Type    0.031217   0.005625   5.550 1.21e-06 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 0.5917 on 48 degrees of freedom
## Multiple R-squared:  0.9909, Adjusted R-squared:  0.9903
## F-statistic:  1741 on 3 and 48 DF,  p-value: < 2.2e-16


Table 2: 90% confidence intervals for regression coefficients

                   5 %       95 %
(Intercept)  9.0884732  9.8632855
Day          0.1810386  0.1934377
Type        -0.2140739  0.8928853
Day:Type     0.0217825  0.0406507

[Figure: studentized residuals against Day and against Type, along with Residuals vs Fitted, Normal Q-Q, Cook's distance, and Residuals vs Leverage diagnostic plots; observations 3, 15, 35, and 52 are flagged.]


(c) (7 pts.)

##
## Call:
## lm(formula = ybar ~ Day * Type, data = allshoots, weights = n)
##
## Weighted Residuals:
##     Min      1Q  Median      3Q     Max
## -4.2166 -0.8300  0.1597  0.9882  3.3196
##
## Coefficients:
##             Estimate Std. Error t value Pr(>|t|)
## (Intercept) 9.488374   0.238615  39.764  < 2e-16 ***
## Day         0.187258   0.003486  53.722  < 2e-16 ***
## Type        0.485380   0.362496   1.339    0.187
## Day:Type    0.030072   0.005800   5.185 4.28e-06 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 1.675 on 48 degrees of freedom
## Multiple R-squared:  0.9906, Adjusted R-squared:  0.9901
## F-statistic:  1695 on 3 and 48 DF,  p-value: < 2.2e-16

Table 3: 90% confidence intervals for weighted regression coefficients

                   5 %       95 %
(Intercept)  9.0881641  9.8885842
Day          0.1814118  0.1931043
Type        -0.1226072  1.0933663
Day:Type     0.0203446  0.0397999


(d) (7 pts.)

[Figure: ybar against Day, colored by Type (Type 0 / Type 1).]


Problem 3 [28 points]

(a) (7 pts.)

##
## Call:
## lm(formula = FoodIndex ~ ., data = BigMac2003)
##
## Residuals:
##      Min       1Q   Median       3Q      Max
## -27.0642  -6.3965  -0.0262   5.6928  26.3002
##
## Coefficients:
##              Estimate Std. Error t value Pr(>|t|)
## (Intercept) -1.09968   11.19872  -0.098   0.9221
## BigMac      -0.20569    0.07798  -2.638   0.0107 *
## Bread        0.44383    0.10564   4.201 9.11e-05 ***
## Rice         0.26881    0.13597   1.977   0.0527 .
## Bus          3.59014    2.83317   1.267   0.2101
## Apt          0.01825    0.00434   4.204 9.02e-05 ***
## TeachGI     -0.97768    0.86750  -1.127   0.2643
## TeachNI      2.22275    1.13819   1.953   0.0556 .
## TaxRate      0.26530    0.25724   1.031   0.3066
## TeachHours   0.48015    0.20478   2.345   0.0224 *
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 11.86 on 59 degrees of freedom
## Multiple R-squared:  0.7981, Adjusted R-squared:  0.7673
## F-statistic: 25.91 on 9 and 59 DF,  p-value: < 2.2e-16


[Figure: studentized residuals against each predictor (BigMac, Bread, Rice, Bus, Apt, TeachGI, TeachNI, TaxRate, TeachHours), along with Residuals vs Fitted, Normal Q-Q, Cook's distance, and Residuals vs Leverage diagnostic plots; Miami, Tokyo, London, Mumbai, Nairobi, and Shanghi are labeled.]

(b) (7 pts.)

            0.5 %     99.5 %
BigMac -0.4132679  0.0018835

The confidence interval includes 0, so, given all the other variables, we cannot conclude that the price of a BigMac has a significant association with FoodIndex at level α = 0.01.

(c) (7 pts.)

  Res.Df       RSS Df Sum of Sq        F Pr(>F)
1     67 27532.922 NA        NA       NA     NA
2     59  8299.912  8  19233.01 17.08975      0

We are testing the hypothesis that all other variables are conditionally uncorrelated with FoodIndex, given the price of a BigMac. The ANOVA table shows there is very strong evidence in favor of the alternative (we reject).
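The F statistic in the table can be reproduced directly from the two residual sums of squares (a sketch using only the numbers reported above):

```r
# Partial F test: reduced model (FoodIndex ~ BigMac, 67 residual df) vs.
# full model (FoodIndex ~ ., 59 residual df), from the reported RSS values.
rss.reduced <- 27532.922
rss.full <- 8299.912
df.diff <- 67 - 59   # 8 extra parameters in the full model

F.stat <- ((rss.reduced - rss.full) / df.diff) / (rss.full / 59)
round(F.stat, 5)  # 17.08975, matching the ANOVA table
```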

(d) (7 pts.)

library(DAAG)
out1 <- cv.lm(df = BigMac2003, form.lm = formula(FoodIndex ~ .), m = 10, plotit = F)
out2 <- cv.lm(df = BigMac2003, form.lm = formula(FoodIndex ~ BigMac), m = 10, plotit = F)

The predictive MSEs of each model are estimated by 10-fold cross-validation. We conclude the model only utilizing BigMac has better predictive accuracy.

$\mathrm{Err}_{\mathrm{full}} = 1764$, $\mathrm{Err}_{\mathrm{BigMac}} = 472$
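What `cv.lm()` is estimating can be sketched by hand: split the data into 10 folds, fit on 9 of them, and average the squared prediction errors on the held-out fold (the data and variable names below are illustrative simulations, not the BigMac2003 columns):

```r
# Manual 10-fold cross-validation of predictive MSE on simulated data.
set.seed(6)
d <- data.frame(x = runif(100))
d$y <- 2 * d$x + rnorm(100)

fold <- sample(rep(1:10, length.out = nrow(d)))  # random fold assignment
mse <- sapply(1:10, function(k) {
  fit <- lm(y ~ x, data = d[fold != k, ])        # train on the other 9 folds
  mean((d$y[fold == k] - predict(fit, d[fold == k, ]))^2)
})
mean(mse)  # estimated predictive MSE
```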
