Cornell University
Department of Economics
Econ 620
Instructor: Prof. Kiefer

Solution to Problem set # 2
1)

a) $y = \dfrac{x}{\alpha x - \beta}$

Hence, $\frac{1}{y} = \alpha - \frac{\beta}{x}$. Therefore, let $y' = \frac{1}{y}$ and let $x' = -\frac{1}{x}$. Then, $y' = \alpha + \beta x'$.

b) $y = \dfrac{e^{\alpha + \beta x}}{1 + e^{\alpha + \beta x}}$

Hence, $\frac{1}{y} = 1 + e^{-\alpha - \beta x}$. Then, $\ln\left(\frac{1}{y} - 1\right) = \ln\left(\frac{1-y}{y}\right) = -\alpha - \beta x$. Therefore, let $y' = \ln\left(\frac{y}{1-y}\right)$ and we get that $y' = \alpha + \beta x$.
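A minimal numeric check of the two linearizations above; the parameter values ($\alpha$, $\beta$) and the grid of $x$ values are arbitrary assumptions, not part of the problem.

```python
import math

alpha, beta = 2.0, 0.5  # illustrative parameter values (assumptions)

for x in [0.5, 1.0, 3.0, 10.0]:
    # 1a) y = x/(alpha*x - beta): the transform y' = 1/y should be linear
    # in x' = -1/x with intercept alpha and slope beta.
    y = x / (alpha * x - beta)
    assert abs(1.0 / y - (alpha + beta * (-1.0 / x))) < 1e-12

    # 1b) logistic y = e^(a+bx)/(1 + e^(a+bx)): the log-odds ln(y/(1-y))
    # should equal alpha + beta*x exactly.
    z = math.exp(alpha + beta * x)
    y = z / (1.0 + z)
    assert abs(math.log(y / (1.0 - y)) - (alpha + beta * x)) < 1e-9
```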
2)

Unbiasedness implies that $c_1 + c_2 = 1$ since $E(b) = (c_1 + c_2)\beta$. Therefore, the problem consists of minimizing $Var(b) = c_1^2 v_1 + c_2^2 v_2$ subject to $c_1 + c_2 = 1$. Second-order conditions will be met, since the objective function is quadratic and the restriction is linear, and from the first-order conditions we can conclude that $c_1 = \frac{v_2}{v_1 + v_2}$ and $c_2 = \frac{v_1}{v_1 + v_2}$.
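A small grid check (not a proof) that the first-order-condition weights minimize the variance subject to the constraint; the variances $v_1$, $v_2$ are illustrative assumptions.

```python
v1, v2 = 3.0, 1.5  # illustrative variances (assumptions)

def var_b(c1):
    c2 = 1.0 - c1                   # impose the unbiasedness constraint
    return c1 ** 2 * v1 + c2 ** 2 * v2

c1_star = v2 / (v1 + v2)            # claimed minimizer, here 1/3
for i in range(-500, 1501):         # c1 from -0.5 to 1.5 in steps of 0.001
    assert var_b(c1_star) <= var_b(i / 1000.0) + 1e-12
```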
3)

First of all, note that

$$Z'Z = \begin{bmatrix} y' \\ X' \end{bmatrix} \begin{bmatrix} y & X \end{bmatrix} = \begin{bmatrix} y'y & y'X \\ X'y & X'X \end{bmatrix}$$

Now, the OLS estimate is given by

$$\hat{\beta} = (X'X)^{-1}X'y = \begin{bmatrix} 20 & 0 \\ 0 & 75 \end{bmatrix}^{-1} \begin{bmatrix} 10 \\ 25 \end{bmatrix} = \begin{bmatrix} 1/2 \\ 1/3 \end{bmatrix}$$

To find $s^2$, we have to find $e'e$:

$$e'e = (y - X\hat{\beta})'(y - X\hat{\beta}) = y'y - \hat{\beta}'X'X\hat{\beta} = 100 - \begin{bmatrix} 1/2 & 1/3 \end{bmatrix} \begin{bmatrix} 20 & 0 \\ 0 & 75 \end{bmatrix} \begin{bmatrix} 1/2 \\ 1/3 \end{bmatrix} = \frac{260}{3} = 86.667$$

Then, from the formula for $s^2$:

$$s^2 = \frac{e'e}{N - k} = \frac{260/3}{20 - 2} = \frac{130}{27} = 4.8148$$

Finally, we have

$$R^2 = 1 - \frac{e'e}{y'Ay}$$

Note that

$$y'Ay = y'\left[I - \mathbf{1}(\mathbf{1}'\mathbf{1})^{-1}\mathbf{1}'\right]y = y'y - \frac{1}{N}\,y'\mathbf{1}\mathbf{1}'y = y'y - N\bar{y}^2 = 100 - 20\left(\frac{10}{20}\right)^2 = 95$$

Then,

$$R^2 = 1 - \frac{260/3}{95} = 0.0877$$
The full data set is a (20 × 3) matrix whose first column is the vector of observations on the dependent variable, and the remaining columns are the vector of ones corresponding to the constant term and the vector of observations on the independent variable. On the other hand, the cross-product matrix given in the question is simply a (3 × 3) matrix. Still, we can extract every bit of information we need from the cross-product matrix alone; we do not lose anything by looking only at the cross-product matrix. In other words, the cross-product matrix is a sufficient statistic for the regression model.
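A sketch reproducing the Problem 3 numbers using only the blocks of the cross-product matrix quoted above ($y'y = 100$, $X'y = [10, 25]'$, $X'X = \mathrm{diag}(20, 75)$, $N = 20$, $k = 2$), illustrating that the raw data are not needed.

```python
yy = 100.0                           # y'y
Xy = [10.0, 25.0]                    # X'y
XX = [[20.0, 0.0], [0.0, 75.0]]      # X'X (diagonal, so inversion is elementwise)
N, k = 20, 2

# OLS: beta_hat = (X'X)^{-1} X'y; with diagonal X'X this is elementwise.
beta = [Xy[0] / XX[0][0], Xy[1] / XX[1][1]]                    # [1/2, 1/3]

ee = yy - (beta[0] ** 2 * XX[0][0] + beta[1] ** 2 * XX[1][1])  # e'e = 260/3
s2 = ee / (N - k)                                              # 130/27

# 1'y is the first element of X'y, since the first column of X is the constant.
yAy = yy - N * (Xy[0] / N) ** 2                                # y'Ay = 95
R2 = 1.0 - ee / yAy                                            # about 0.0877

assert abs(ee - 260.0 / 3.0) < 1e-9 and abs(s2 - 130.0 / 27.0) < 1e-9
```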
4)

$$\left(Corr(\hat{y}, y)\right)^2 = \frac{\left[\sum_i (y_i - \bar{y})(\hat{y}_i - \bar{\hat{y}})\right]^2}{\sum_i (y_i - \bar{y})^2 \, \sum_i (\hat{y}_i - \bar{\hat{y}})^2} = \frac{y'A\hat{y}\; y'A\hat{y}}{y'Ay\; \hat{y}'A\hat{y}} = \frac{y'A\hat{y}}{y'Ay} = R^2$$

The second equality comes from the fact that $A = I - \mathbf{1}(\mathbf{1}'\mathbf{1})^{-1}\mathbf{1}'$. The third equality follows because $y'A\hat{y} = (\hat{y} + e)'A\hat{y} = \hat{y}'A\hat{y} + e'A\hat{y} = \hat{y}'A\hat{y} + e'\hat{y} = \hat{y}'A\hat{y}$, since $e'A = e'$ (the residuals sum to zero when a constant is included) and $e'\hat{y} = 0$.
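A sketch verifying the identity numerically: in a regression that includes a constant, the squared correlation between $y$ and the fitted values equals $R^2$. The five data points below are made-up illustrative values, not from the problem.

```python
x = [1.0, 2.0, 3.0, 4.0, 5.0]   # illustrative data (assumptions)
y = [1.2, 1.9, 3.7, 3.1, 5.4]
n = len(x)
xbar, ybar = sum(x) / n, sum(y) / n

# Simple OLS of y on a constant and x.
b = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y)) \
    / sum((xi - xbar) ** 2 for xi in x)
a = ybar - b * xbar
yhat = [a + b * xi for xi in x]

tss = sum((yi - ybar) ** 2 for yi in y)               # y'Ay
ess = sum((yi - yh) ** 2 for yi, yh in zip(y, yhat))  # e'e
R2 = 1.0 - ess / tss

# Squared correlation of y and yhat, from demeaned sums.
yhbar = sum(yhat) / n
cov = sum((yi - ybar) * (yh - yhbar) for yi, yh in zip(y, yhat))
corr2 = cov ** 2 / (tss * sum((yh - yhbar) ** 2 for yh in yhat))

assert abs(corr2 - R2) < 1e-9
```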
5)

We know that

$$\hat{\beta}_{yx} = \frac{\sum_{i=1}^n (y_i - \bar{y})(x_i - \bar{x})}{\sum_{i=1}^n (x_i - \bar{x})^2} \quad \text{and} \quad \hat{\beta}_{xy} = \frac{\sum_{i=1}^n (y_i - \bar{y})(x_i - \bar{x})}{\sum_{i=1}^n (y_i - \bar{y})^2}.$$

a) $\hat{\beta}_{yx} = 1/\hat{\beta}_{xy}$ if $\sum_{i=1}^n (x_i - \bar{x})^2 \cdot \sum_{i=1}^n (y_i - \bar{y})^2 = \left[\sum_{i=1}^n (y_i - \bar{y})(x_i - \bar{x})\right]^2$. This will happen only when $R^2 = 1$. To see why, go to part c).
b) The sign of each estimator is determined by the common numerator $\sum_{i=1}^n (y_i - \bar{y})(x_i - \bar{x})$, and hence both will have the same sign.
c)

$$\hat{\beta}_{yx}\hat{\beta}_{xy} = \frac{\sum_{i=1}^n (y_i - \bar{y})(x_i - \bar{x})}{\sum_{i=1}^n (x_i - \bar{x})^2} \cdot \frac{\sum_{i=1}^n (y_i - \bar{y})(x_i - \bar{x})}{\sum_{i=1}^n (y_i - \bar{y})^2} = \frac{\left[\sum_{i=1}^n (y_i - \bar{y})(x_i - \bar{x})\right]^2}{\sum_{i=1}^n (x_i - \bar{x})^2 \, \sum_{i=1}^n (y_i - \bar{y})^2} = r^2$$

d) Since both estimators have the same sign, it follows from c) that $|\hat{\beta}_{yx}||\hat{\beta}_{xy}| \leq 1$. Hence, dividing by $|\hat{\beta}_{xy}|$, we have that $|\hat{\beta}_{yx}| \leq \frac{1}{|\hat{\beta}_{xy}|}$. That is, the absolute value of the estimated slope in the regression of y on x is no larger than the reciprocal of the absolute value of the estimated slope in the regression of x on y.
For the numerical part,

$$\hat{\beta}_{yx} = \frac{22.13 - 200(20.72/200)(11.34/200)}{12.16 - 200(11.34/200)^2} = \frac{20.955}{11.517} = 1.8195$$

$$\hat{\beta}_{xy} = \frac{20.955}{84.96 - 200(20.72/200)^2} = \frac{20.955}{82.8134} = 0.2530$$

$$r^2 = \frac{(20.955)^2}{11.517 \times 82.8134} = 0.4604 = \hat{\beta}_{yx}\hat{\beta}_{xy}, \quad \text{and} \quad 1.8195 = \hat{\beta}_{yx} \leq 1/\hat{\beta}_{xy} = 3.9525.$$
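A sketch reproducing the numerical part from the raw sums used above, which imply $n = 200$, $\sum xy = 22.13$, $\sum x = 11.34$, $\sum y = 20.72$, $\sum x^2 = 12.16$, $\sum y^2 = 84.96$.

```python
n = 200
sx, sy, sxx, syy, sxy = 11.34, 20.72, 12.16, 84.96, 22.13

Sxy = sxy - sx * sy / n      # sum (x - xbar)(y - ybar), about 20.955
Sxx = sxx - sx ** 2 / n      # sum (x - xbar)^2, about 11.517
Syy = syy - sy ** 2 / n      # sum (y - ybar)^2, about 82.813

b_yx = Sxy / Sxx             # slope of y on x, about 1.8195
b_xy = Sxy / Syy             # slope of x on y, about 0.2530
r2 = Sxy ** 2 / (Sxx * Syy)  # about 0.4604

assert abs(b_yx * b_xy - r2) < 1e-12   # part c): the product equals r^2
assert abs(b_yx) <= 1.0 / abs(b_xy)    # part d)
```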
6)

a) $E[b \mid X] = \beta + (X'X)^{-1}X'E[\varepsilon \mid X] = \beta$ since $E[\varepsilon \mid X] = 0$. Therefore, $E[b] = E(E[b \mid X]) = E(\beta) = \beta$.

Therefore, if the regressors are stochastic, as long as they are uncorrelated with the error term (and all other assumptions hold), the OLS estimator of $\beta$ is still unbiased.

b) $Var(b) = E_X[Var(b \mid X)] + Var_X E[b \mid X]$. Since

$$E_X[Var(b \mid X)] = E_X[Var((X'X)^{-1}X'\varepsilon \mid X)] = E_X[(X'X)^{-1}X'Var(\varepsilon \mid X)X(X'X)^{-1}] = E_X[(X'X)^{-1}X'\sigma^2 I X(X'X)^{-1}] = \sigma^2 E_X[(X'X)^{-1}]$$

and also $Var_X E[b \mid X] = Var_X(\beta) = 0$, it follows that

$$Var(b) = \sigma^2 E[(X'X)^{-1}].$$
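A Monte Carlo sketch of part a): with a stochastic regressor drawn independently of the errors, the average OLS slope across replications stays close to the true $\beta$. All parameter values below are illustrative assumptions.

```python
import random

random.seed(0)
beta0, beta1, sigma = 1.0, 2.0, 1.0   # true parameters (assumptions)
reps, n = 2000, 50

slopes = []
for _ in range(reps):
    x = [random.gauss(0.0, 1.0) for _ in range(n)]       # stochastic regressor
    eps = [random.gauss(0.0, sigma) for _ in range(n)]   # drawn independently of x
    y = [beta0 + beta1 * xi + ei for xi, ei in zip(x, eps)]
    xbar, ybar = sum(x) / n, sum(y) / n
    b1 = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y)) \
         / sum((xi - xbar) ** 2 for xi in x)
    slopes.append(b1)

mean_b1 = sum(slopes) / reps
assert abs(mean_b1 - beta1) < 0.05   # unbiasedness: average slope near beta1
```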