
36-401 Modern Regression HW #2 Solutions
DUE: 9/15/2017

Problem 1 [36 points total]

(a) (12 pts.)

In Lecture Notes 4 we derived the following estimators for the simple linear regression model:

$$\hat\beta_0 = \bar{Y} - \hat\beta_1\bar{X}, \qquad \hat\beta_1 = \frac{c_{XY}}{s_X^2},$$

where

$$c_{XY} = \frac{1}{n}\sum_{i=1}^n (X_i - \bar{X})(Y_i - \bar{Y}) \quad\text{and}\quad s_X^2 = \frac{1}{n}\sum_{i=1}^n (X_i - \bar{X})^2.$$

Since the formula for $\hat\beta_0$ depends on $\hat\beta_1$, we will calculate $\operatorname{Var}(\hat\beta_1)$ first. Some simple algebra¹ shows we can rewrite $\hat\beta_1$ as

$$\hat\beta_1 = \beta_1 + \frac{\frac{1}{n}\sum_{i=1}^n (X_i - \bar{X})\varepsilon_i}{s_X^2}.$$

Now, treating the $X_i$'s as fixed, we have

$$\begin{aligned}
\operatorname{Var}(\hat\beta_1)
&= \operatorname{Var}\!\left(\beta_1 + \frac{\frac{1}{n}\sum_{i=1}^n (X_i - \bar{X})\varepsilon_i}{s_X^2}\right) \\
&= \operatorname{Var}\!\left(\frac{\frac{1}{n}\sum_{i=1}^n (X_i - \bar{X})\varepsilon_i}{s_X^2}\right) \\
&= \frac{\frac{1}{n^2}\sum_{i=1}^n (X_i - \bar{X})^2\operatorname{Var}(\varepsilon_i)}{s_X^4} \\
&= \frac{\sigma^2}{n^2}\cdot\frac{\sum_{i=1}^n (X_i - \bar{X})^2}{s_X^4} \\
&= \frac{\sigma^2}{n}\cdot\frac{s_X^2}{s_X^4} \\
&= \frac{\sigma^2}{n\, s_X^2}.
\end{aligned}$$

¹ See (16)-(22) of Lecture Notes 4.


Thus, $\operatorname{Var}(\hat\beta_0)$ is given by

$$\begin{aligned}
\operatorname{Var}(\hat\beta_0)
&= \operatorname{Var}\bigl(\bar{Y} - \hat\beta_1\bar{X}\bigr) \\
&= \operatorname{Var}(\bar{Y}) + \bar{X}^2\operatorname{Var}(\hat\beta_1) - 2\bar{X}\operatorname{Cov}\!\left(\frac{1}{n}\sum_{i=1}^n Y_i,\; \frac{\frac{1}{n}\sum_{i=1}^n (X_i - \bar{X})(Y_i - \bar{Y})}{\frac{1}{n}\sum_{i=1}^n (X_i - \bar{X})^2}\right) \\
&= \frac{\sigma^2}{n} + \bar{X}^2\operatorname{Var}(\hat\beta_1) - \frac{2\bar{X}}{n\sum_{i=1}^n (X_i - \bar{X})^2}\operatorname{Cov}\!\left(\sum_{i=1}^n Y_i,\; \sum_{i=1}^n (X_i - \bar{X})(Y_i - \bar{Y})\right) \\
&= \frac{\sigma^2}{n} + \bar{X}^2\operatorname{Var}(\hat\beta_1) - \frac{2\bar{X}}{n\sum_{i=1}^n (X_i - \bar{X})^2}\sum_{i=1}^n (X_i - \bar{X})\operatorname{Cov}(Y_i, Y_i) \\
&= \frac{\sigma^2}{n} + \bar{X}^2\operatorname{Var}(\hat\beta_1) - \frac{2\bar{X}\sigma^2}{n\sum_{i=1}^n (X_i - \bar{X})^2}\underbrace{\sum_{i=1}^n (X_i - \bar{X})}_{=0} \\
&= \frac{\sigma^2}{n} + \bar{X}^2\operatorname{Var}(\hat\beta_1) \\
&= \frac{\sigma^2}{n} + \frac{\sigma^2\bar{X}^2}{n\, s_X^2} \\
&= \frac{\sigma^2\bigl(s_X^2 + \bar{X}^2\bigr)}{n\, s_X^2} \\
&= \frac{\sigma^2\sum_{i=1}^n X_i^2}{n^2\, s_X^2}.
\end{aligned}$$
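As a supplementary numerical check (not part of the original solution), both variance formulas can be verified by simulation with a fixed design; the sample size, coefficients, and noise level below are arbitrary illustrative choices.

# Monte Carlo check of Var(beta1.hat) = sigma^2 / (n * s2X) and
# Var(beta0.hat) = sigma^2 * sum(X^2) / (n^2 * s2X), treating X as fixed.
set.seed(42)
n <- 50; sigma <- 2
X <- runif(n, 0, 1)                      # fixed design, reused in every replicate
s2X <- mean((X - mean(X))^2)             # 1/n convention, as in the notes

est <- replicate(10000, {
  Y <- 1 + 2 * X + rnorm(n, 0, sigma)    # arbitrary true beta0 = 1, beta1 = 2
  coef(lm(Y ~ X))
})
c(empirical = var(est[2, ]), theory = sigma^2 / (n * s2X))               # slope
c(empirical = var(est[1, ]), theory = sigma^2 * sum(X^2) / (n^2 * s2X))  # intercept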

(b) (6 pts.)

$$\begin{aligned}
\sum_{i=1}^n \hat\varepsilon_i
&= \sum_{i=1}^n \bigl(Y_i - (\hat\beta_0 + \hat\beta_1 X_i)\bigr) \\
&= \sum_{i=1}^n \bigl(Y_i - (\bar{Y} - \hat\beta_1\bar{X}) - \hat\beta_1 X_i\bigr) \\
&= \sum_{i=1}^n (Y_i - \bar{Y}) + \sum_{i=1}^n (\hat\beta_1\bar{X} - \hat\beta_1 X_i) \\
&= (n\bar{Y} - n\bar{Y}) + (n\hat\beta_1\bar{X} - n\hat\beta_1\bar{X}) \\
&= 0 + 0 = 0.
\end{aligned}$$


(c) (12 pts.)

$$\begin{aligned}
\sum_{i=1}^n \hat{Y}_i\hat\varepsilon_i
&= \sum_{i=1}^n (\hat\beta_0 + \hat\beta_1 X_i)\hat\varepsilon_i \\
&= \hat\beta_0\underbrace{\sum_{i=1}^n \hat\varepsilon_i}_{=0} + \hat\beta_1\sum_{i=1}^n X_i\hat\varepsilon_i \\
&= \hat\beta_1\sum_{i=1}^n X_i\hat\varepsilon_i \\
&= \hat\beta_1\sum_{i=1}^n X_i\hat\varepsilon_i - \hat\beta_1\bar{X}\underbrace{\sum_{i=1}^n \hat\varepsilon_i}_{=0} \\
&= \hat\beta_1\sum_{i=1}^n (X_i - \bar{X})\hat\varepsilon_i \\
&= \hat\beta_1\sum_{i=1}^n (X_i - \bar{X})\bigl(Y_i - (\hat\beta_0 + \hat\beta_1 X_i)\bigr) \\
&= \hat\beta_1\sum_{i=1}^n (X_i - \bar{X})\bigl(Y_i - (\bar{Y} - \hat\beta_1\bar{X}) - \hat\beta_1 X_i\bigr) \\
&= \hat\beta_1\sum_{i=1}^n (X_i - \bar{X})\bigl((Y_i - \bar{Y}) - \hat\beta_1(X_i - \bar{X})\bigr) \\
&= \hat\beta_1\sum_{i=1}^n (X_i - \bar{X})(Y_i - \bar{Y}) - \hat\beta_1^2\sum_{i=1}^n (X_i - \bar{X})^2 \\
&= \hat\beta_1 \cdot n \cdot c_{XY} - \hat\beta_1^2 \cdot n \cdot s_X^2 \\
&= \hat\beta_1 \cdot n \cdot c_{XY} - \hat\beta_1 \cdot \frac{c_{XY}}{s_X^2} \cdot n \cdot s_X^2 \\
&= 0.
\end{aligned}$$

Note: The above implies

$$\begin{aligned}
\frac{1}{n}\sum_{i=1}^n \left(\hat{Y}_i - \frac{1}{n}\sum_{j=1}^n \hat{Y}_j\right)\left(\hat\varepsilon_i - \frac{1}{n}\sum_{j=1}^n \hat\varepsilon_j\right)
&= \frac{1}{n}\sum_{i=1}^n \left(\hat{Y}_i - \frac{1}{n}\sum_{j=1}^n \hat{Y}_j\right)\hat\varepsilon_i \\
&= \frac{1}{n}\sum_{i=1}^n \hat{Y}_i\hat\varepsilon_i - \frac{1}{n^2}\sum_{j=1}^n \hat{Y}_j\sum_{i=1}^n \hat\varepsilon_i \\
&= \frac{1}{n}\underbrace{\sum_{i=1}^n \hat{Y}_i\hat\varepsilon_i}_{=0} - \left(\frac{1}{n}\sum_{j=1}^n \hat{Y}_j\right)\underbrace{\left(\frac{1}{n}\sum_{i=1}^n \hat\varepsilon_i\right)}_{=0} \\
&= 0.
\end{aligned}$$

Linear Algebra interpretation: The observed residuals are orthogonal to the fitted values.

Statistical interpretation: The observed residuals are linearly uncorrelated with the fitted values.
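Both identities, and the zero correlation they imply, are easy to confirm numerically; a minimal sketch with arbitrary simulated data (not part of the original solution):

# Check parts (b) and (c): residuals sum to zero and are orthogonal
# to (hence uncorrelated with) the fitted values, up to rounding error.
set.seed(1)
x <- runif(30)
y <- 2 + x + rnorm(30)                 # arbitrary illustrative model
fit <- lm(y ~ x)
sum(resid(fit))                        # ~0, part (b)
sum(fitted(fit) * resid(fit))          # ~0, part (c)
cor(fitted(fit), resid(fit))           # ~0, the Note above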


(d) (6 pts.)

From the result in part (c), we have $\hat\beta_1 = 0$.

Substituting this into the equation for $\hat\beta_0$, we obtain the intercept

$$\hat\beta_0 = \bar{Y} - \hat\beta_1\cdot\frac{1}{n}\sum_{i=1}^n \hat\varepsilon_i = \bar{Y} - 0\cdot 0 = \bar{Y}.$$


Problem 2 [24 points]

(a) (8 pts.)

We compute the least squares estimate $\hat\beta_1$ by minimizing the empirical mean squared error via a first-derivative test.

$$\frac{\partial}{\partial\beta_1}\,\mathrm{MSE}(\beta_1) = \frac{\partial}{\partial\beta_1}\left(\frac{1}{n}\sum_{i=1}^n (Y_i - \beta_1 X_i)^2\right) = \frac{2}{n}\sum_{i=1}^n (Y_i - \beta_1 X_i)(-X_i).$$

Setting the derivative equal to 0 yields

$$\begin{aligned}
-\frac{2}{n}\sum_{i=1}^n (Y_i - \beta_1 X_i)X_i &= 0 \\
\sum_{i=1}^n (Y_iX_i - \beta_1 X_i^2) &= 0 \\
\sum_{i=1}^n Y_iX_i - \beta_1\sum_{i=1}^n X_i^2 &= 0 \\
\implies \hat\beta_1 &= \frac{\sum_{i=1}^n Y_iX_i}{\sum_{i=1}^n X_i^2}.
\end{aligned}$$

Furthermore,

$$\frac{\partial^2}{\partial\beta_1^2}\,\mathrm{MSE}(\beta_1) = \frac{\partial}{\partial\beta_1}\left(-\frac{2}{n}\sum_{i=1}^n (Y_iX_i - \beta_1 X_i^2)\right) = \frac{2}{n}\sum_{i=1}^n X_i^2 > 0,$$

so $\hat\beta_1$ is indeed the minimizer of the empirical MSE.
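In R, this estimator corresponds to a model formula with the intercept removed; a quick check (with arbitrary simulated data, not part of the original solution) that lm agrees with the closed form:

# Regression through the origin: lm(Y ~ 0 + X) should reproduce
# the closed-form estimate sum(Y*X) / sum(X^2).
set.seed(2)
X <- runif(20)
Y <- 3 * X + rnorm(20)                 # arbitrary illustrative model
c(closed_form = sum(Y * X) / sum(X^2), lm = coef(lm(Y ~ 0 + X)))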


(b) (8 pts.)

$$\begin{aligned}
\mathbb{E}[\hat\beta_1]
&= \mathbb{E}\!\left[\frac{\sum_{i=1}^n Y_iX_i}{\sum_{i=1}^n X_i^2}\right] \\
&= \mathbb{E}\!\left[\frac{\sum_{i=1}^n X_i(\beta_1 X_i + \varepsilon_i)}{\sum_{i=1}^n X_i^2}\right] \\
&= \mathbb{E}\!\left[\frac{\beta_1\sum_{i=1}^n X_i^2 + \sum_{i=1}^n X_i\varepsilon_i}{\sum_{i=1}^n X_i^2}\right] \\
&= \mathbb{E}\!\left[\beta_1 + \frac{\sum_{i=1}^n X_i\varepsilon_i}{\sum_{i=1}^n X_i^2}\right] \\
&= \beta_1 + \frac{1}{\sum_{i=1}^n X_i^2}\,\mathbb{E}\!\left[\sum_{i=1}^n X_i\varepsilon_i\right] \\
&= \beta_1 + \frac{1}{\sum_{i=1}^n X_i^2}\sum_{i=1}^n X_i\underbrace{\mathbb{E}[\varepsilon_i]}_{=0} \\
&= \beta_1.
\end{aligned}$$

Thus, if the true model is linear and through the origin, then $\hat\beta_1$ is an unbiased estimator of $\beta_1$.

(c) (8 pts.)

If the true model is linear, but not necessarily through the origin, then the bias of the regression-through-the-origin estimator $\hat\beta_1$ is

$$\begin{aligned}
\mathrm{Bias}(\hat\beta_1)
&= \mathbb{E}[\hat\beta_1] - \beta_1 \\
&= \mathbb{E}\!\left[\frac{\sum_{i=1}^n Y_iX_i}{\sum_{i=1}^n X_i^2}\right] - \beta_1 \\
&= \mathbb{E}\!\left[\frac{\sum_{i=1}^n X_i(\beta_0 + \beta_1 X_i + \varepsilon_i)}{\sum_{i=1}^n X_i^2}\right] - \beta_1 \\
&= \mathbb{E}\!\left[\frac{\beta_0\sum_{i=1}^n X_i + \beta_1\sum_{i=1}^n X_i^2 + \sum_{i=1}^n X_i\varepsilon_i}{\sum_{i=1}^n X_i^2}\right] - \beta_1 \\
&= \mathbb{E}\!\left[\beta_1 + \frac{\beta_0\sum_{i=1}^n X_i + \sum_{i=1}^n X_i\varepsilon_i}{\sum_{i=1}^n X_i^2}\right] - \beta_1 \\
&= \beta_1 + \frac{\beta_0\sum_{i=1}^n X_i}{\sum_{i=1}^n X_i^2} + \frac{1}{\sum_{i=1}^n X_i^2}\sum_{i=1}^n X_i\underbrace{\mathbb{E}[\varepsilon_i]}_{=0} - \beta_1 \\
&= \frac{\beta_0\sum_{i=1}^n X_i}{\sum_{i=1}^n X_i^2}.
\end{aligned}$$
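A short simulation (with arbitrary assumed values for $\beta_0$, $\beta_1$, and a fixed design; not part of the original solution) can confirm this expression; setting beta0 to 0 recovers the unbiasedness of part (b).

# Monte Carlo check of Bias(beta1.hat) = beta0 * sum(X) / sum(X^2)
# for the through-origin estimator when the true model has an intercept.
set.seed(3)
n <- 40; beta0 <- 5; beta1 <- 3
X <- runif(n, 0, 1)                    # fixed design
b1 <- replicate(10000, {
  Y <- beta0 + beta1 * X + rnorm(n)
  sum(Y * X) / sum(X^2)
})
c(empirical = mean(b1) - beta1, theory = beta0 * sum(X) / sum(X^2))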


Problem 3 [20 points total]

(a) (5 pts.)

set.seed(1)
n <- 100
X <- runif(n, 0, 1)
Y <- 5 + 3 * X + rnorm(n, 0, 1)

plot(X, Y, cex = 0.75)
model <- lm(Y ~ X)
abline(model, lwd = 2, col = "red")


Figure 1: One hundred data points with the simple linear regression fit

(b) (5 pts.)

n <- 100
betas <- rep(NA, 1000)

for (itr in 1:1000){
  X <- runif(n, 0, 1)
  Y <- 5 + 3 * X + rnorm(n, 0, 1)
  model <- lm(Y ~ X)
  betas[itr] <- model$coefficients[2]
}


mean(betas)

## [1] 3.019629

Since 1000 is a reasonably large number of trials, we expect the mean of $\hat\beta_1^{(1)}, \ldots, \hat\beta_1^{(1000)}$ to be close to

$$\mathbb{E}[\hat\beta_1] = \mathbb{E}\!\left[\mathbb{E}[\hat\beta_1 \mid X_1, \ldots, X_n]\right] = \mathbb{E}[\beta_1] = \mathbb{E}[3] = 3.$$

In the above experiment, we have

$$\frac{1}{1000}\sum_{i=1}^{1000} \hat\beta_1^{(i)} = 3.019629.$$

hist(betas, xlab = expression(hat(beta)[1]), prob = FALSE, main = "", breaks = 50)


Figure 2: Histogram of linear regression slope parameters for Gaussian data

(c) (5 pts.)

n <- 100
betas <- rep(NA, 1000)

for (itr in 1:1000){
  X <- runif(n, 0, 1)
  Y <- 5 + 3 * X + rcauchy(n, 0, 1)
  model <- lm(Y ~ X)
  betas[itr] <- model$coefficients[2]
}

par(mfrow = c(1,2))
hist(betas, xlab = expression(hat(beta)[1]), prob = FALSE, main = "",
     xlim = c(3-20, 3+20), breaks = 750)
abline(v = 3, col = "red", lwd = 2)
hist(betas, xlab = expression(hat(beta)[1]), prob = FALSE, main = "", breaks = 200)


Figure 3: Histogram of linear regression slope parameters for Cauchy data (Left: restricted to the window (-17, 23). Right: the full window.)

Notice that the distribution of $\hat\beta_1^{(1)}, \ldots, \hat\beta_1^{(1000)}$ still seems to be approximately centered around $\beta_1 = 3$, but the tails are now much fatter. In particular, from the plot on the right, we see that at least one trial of the experiment resulted in a value around $\hat\beta_1 \approx -600$.

(d) (5 pts.)

set.seed(1)
n <- 100
X <- runif(n, 0, 1)
W <- X + rnorm(n, 0, sqrt(2))
Y <- 5 + 3 * X + rnorm(n, 0, 1)

plot(X, Y, cex = 0.75)
model <- lm(Y ~ W)
abline(model, lwd = 2, col = "red")

Figure 4: One hundred observations of Y vs. X with the simple linear regression fit of Y on W

n <- 100
betas <- rep(NA, 1000)

for (itr in 1:1000){
  X <- runif(n, 0, 1)
  W <- X + rnorm(n, 0, sqrt(2))
  Y <- 5 + 3 * X + rnorm(n, 0, 1)
  model <- lm(Y ~ W)
  betas[itr] <- model$coefficients[2]
}

mean(betas)

## [1] 0.1198059

hist(betas, xlab = expression(hat(beta)[1]), prob = FALSE, main = "", breaks = 50)

In the above experiment, we have

$$\frac{1}{1000}\sum_{i=1}^{1000} \hat\beta_1^{(i)} = 0.1198059.$$

From this, and Figure 5, we conclude that having errors on the $X_i$'s biases $\hat\beta_1$ downwards.
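As a supplementary note (not part of the original solution), this magnitude is consistent with the classical errors-in-variables attenuation formula: when $W = X + u$ with $u$ independent of $X$, the slope of the regression of $Y$ on $W$ is attenuated by the factor $\sigma_X^2/(\sigma_X^2 + \sigma_u^2)$. Here $X \sim \mathrm{Uniform}(0,1)$ gives $\sigma_X^2 = 1/12$ and $\sigma_u^2 = 2$, so the limiting slope is

$$3 \cdot \frac{1/12}{1/12 + 2} = \frac{3}{25} = 0.12,$$

in close agreement with the simulated mean of 0.1198.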



Figure 5: Histogram of linear regression slope parameters for data with errors on the X’s


Problem 4 [20 points total]

data(airquality)

(a) (5 pts.)

summary(airquality)

##      Ozone           Solar.R           Wind             Temp
##  Min.   :  1.00   Min.   :  7.0   Min.   : 1.700   Min.   :56.00
##  1st Qu.: 18.00   1st Qu.:115.8   1st Qu.: 7.400   1st Qu.:72.00
##  Median : 31.50   Median :205.0   Median : 9.700   Median :79.00
##  Mean   : 42.13   Mean   :185.9   Mean   : 9.958   Mean   :77.88
##  3rd Qu.: 63.25   3rd Qu.:258.8   3rd Qu.:11.500   3rd Qu.:85.00
##  Max.   :168.00   Max.   :334.0   Max.   :20.700   Max.   :97.00
##  NA's   :37       NA's   :7
##      Month            Day
##  Min.   :5.000   Min.   : 1.0
##  1st Qu.:6.000   1st Qu.: 8.0
##  Median :7.000   Median :16.0
##  Mean   :6.993   Mean   :15.8
##  3rd Qu.:8.000   3rd Qu.:23.0
##  Max.   :9.000   Max.   :31.0
##

pairs(airquality, cex = 0.5)

(b) (5 pts.)

with(airquality, plot(Solar.R, Ozone, xlab = "Solar Radiation", ylab = "Ozone"))
model <- lm(Ozone ~ Solar.R, data = airquality)
abline(model, col = "red", lwd = 2)

Ozone and Solar Radiation appear to be positively correlated.
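As a supplementary check (not required by the problem), the strength of this association can be quantified with the sample correlation, dropping rows with missing values:

# Sample correlation between solar radiation and ozone, ignoring NAs.
with(airquality, cor(Solar.R, Ozone, use = "complete.obs"))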

(c) (5 pts.)

summary(model)$coefficients

##               Estimate Std. Error  t value     Pr(>|t|)
## (Intercept) 18.5987278 6.74790416 2.756223 0.0068560215
## Solar.R      0.1271653 0.03277629 3.879795 0.0001793109

The intercept and slope of the least squares regression are

$$\hat\beta_0 = 18.59873 \quad\text{and}\quad \hat\beta_1 = 0.12717.$$



Figure 6: Pairwise relationships of variables in the airquality data set



Figure 7: Ozone vs. solar radiation observations in the airquality data set



Figure 8: Linear regression residuals vs. solar radiation

(d) (5 pts.)

resids <- airquality$Ozone - predict(model, newdata = data.frame(Solar.R = airquality$Solar.R))
plot(airquality$Solar.R, resids, xlab = "Solar Radiation", ylab = "Residuals")
abline(h = 0)

No, the standard regression assumptions do not hold. The residuals are not symmetric about zero, so the linear functional form assumption is not suitable. Furthermore, the residuals are highly heteroskedastic.
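As a supplementary check, R's built-in diagnostics for a fitted lm object display the same problems; a minimal sketch:

# Residuals vs. fitted, normal QQ, scale-location, and leverage plots;
# the first and third make the asymmetry and heteroskedasticity visible.
par(mfrow = c(2, 2))
plot(model)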
