ECONF241 Gauss-Markov Theorem
-
1
Gauss-Markov Theorem
-
2
Properties of the Linear Regression Model
1. $Y_i = \beta_1 + \beta_2 X_i + u_i$
2. $E(u_i) = 0 \iff E(Y_i) = \beta_1 + \beta_2 X_i$
3. $\operatorname{cov}(u_i, X_i) = 0$
4. $\operatorname{cov}(u_i, u_j) = \operatorname{cov}(Y_i, Y_j) = 0$ for $i \neq j$
5. $\operatorname{var}(u_i) = \sigma_u^2 \iff \operatorname{var}(Y_i) = \sigma_y^2$
6. $X_k \neq X_m$ for some $k \neq m$; that is, $X_i$ is not constant across observations
7. $u_i \sim N(0, \sigma^2) \iff Y_i \sim N(\beta_1 + \beta_2 X_i, \sigma^2)$
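To make the assumptions concrete, here is a minimal Python sketch (not part of the original slides) that simulates one dataset satisfying assumptions 1-7. The values $\beta_1 = 103.4$, $\beta_2 = 6.38$, and $\sigma = 8.50$ are borrowed from the worked example on slide 12; the range of $X$ is a hypothetical choice.

```python
import numpy as np

# A minimal sketch (not from the slides): simulate one dataset satisfying
# assumptions 1-7. beta1, beta2, sigma come from the worked example on
# slide 12; the X range is hypothetical.
rng = np.random.default_rng(0)

n = 20
beta1, beta2, sigma = 103.4, 6.38, 8.50

X = rng.uniform(5, 15, size=n)     # assumption 6: X varies across observations
u = rng.normal(0, sigma, size=n)   # assumptions 2, 4, 5, 7: iid N(0, sigma^2) errors
Y = beta1 + beta2 * X + u          # assumption 1: the linear model

print(Y[:3])
```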
-
3
The Sampling Distributions of the Regression Estimators
The population parameters $\beta_1$ and $\beta_2$ are unknown population constants.
The formulas that produce the sample estimates of $\beta_1$ and $\beta_2$ are called the estimators of $\beta_1$ and $\beta_2$.
Theoretical: $Y_i = \beta_1 + \beta_2 X_i + u_i$
Estimated: $\hat{Y}_i = \hat{\beta}_1 + \hat{\beta}_2 X_i$
-
4
Estimators are Random Variables
The estimators $\hat{\beta}_1$ and $\hat{\beta}_2$ are random variables because they differ from sample to sample.
If the least squares estimators $\hat{\beta}_1$ and $\hat{\beta}_2$ are random variables, then what are their means, variances, covariances, and probability distributions?
-
5
The Expected Values of $\hat{\beta}_1$ and $\hat{\beta}_2$
The least squares estimators in the simple regression are:
$$\hat{\beta}_2 = \frac{n\sum X_i Y_i - \sum X_i \sum Y_i}{n\sum X_i^2 - \left(\sum X_i\right)^2} = \frac{\sum x_i y_i}{\sum x_i^2}$$
$$\hat{\beta}_1 = \bar{Y} - \hat{\beta}_2 \bar{X}$$
where $x_i = X_i - \bar{X}$, $y_i = Y_i - \bar{Y}$, $\bar{Y} = \sum Y_i / n$, and $\bar{X} = \sum X_i / n$.
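As an illustration, the following Python sketch (not from the slides) applies these formulas to the six observations displayed in the table on slide 12. Note that the slide's own sums run over the full sample of 20 observations, so the numbers produced here differ from the slide's 6.38 and 103.4.

```python
import numpy as np

# A minimal sketch applying the least squares formulas to the six
# observations shown in the table on slide 12.
X = np.array([5, 9, 13, 10, 11, 9], dtype=float)
Y = np.array([140, 157, 205, 162, 174, 165], dtype=float)

x = X - X.mean()   # deviations from the sample means
y = Y - Y.mean()

beta2_hat = (x * y).sum() / (x ** 2).sum()    # sum(x*y) / sum(x^2)
beta1_hat = Y.mean() - beta2_hat * X.mean()   # Ybar - beta2_hat * Xbar
print(beta2_hat, beta1_hat)
```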
-
6
Substitute $Y_i = \beta_1 + \beta_2 X_i + u_i$ into the $\hat{\beta}_2$ formula to get:
$$\hat{\beta}_2 = \frac{n\sum X_i(\beta_1 + \beta_2 X_i + u_i) - \sum X_i \sum(\beta_1 + \beta_2 X_i + u_i)}{n\sum X_i^2 - \left(\sum X_i\right)^2}$$
$$= \frac{n\beta_1\sum X_i + n\beta_2\sum X_i^2 + n\sum X_i u_i - n\beta_1\sum X_i - \beta_2\left(\sum X_i\right)^2 - \sum X_i \sum u_i}{n\sum X_i^2 - \left(\sum X_i\right)^2}$$
$$= \frac{\beta_2\left[n\sum X_i^2 - \left(\sum X_i\right)^2\right] + n\sum X_i u_i - \sum X_i \sum u_i}{n\sum X_i^2 - \left(\sum X_i\right)^2}$$
-
7
$$\hat{\beta}_2 = \beta_2 + \frac{n\sum X_i u_i - \sum X_i \sum u_i}{n\sum X_i^2 - \left(\sum X_i\right)^2}$$
The mean of $\hat{\beta}_2$ is:
$$E(\hat{\beta}_2) = \beta_2 + \frac{n\sum X_i E(u_i) - \sum X_i \sum E(u_i)}{n\sum X_i^2 - \left(\sum X_i\right)^2}$$
Since $E(u_i) = 0$, the second term vanishes, so $E(\hat{\beta}_2) = \beta_2$.
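A quick Monte Carlo check of this result: the sketch below (not from the slides; the true parameters are hypothetical, reused from earlier) averages $\hat{\beta}_2$ over many simulated samples and recovers the true $\beta_2$.

```python
import numpy as np

# A minimal sketch: check E(beta2_hat) = beta2 by simulation.
rng = np.random.default_rng(1)

beta1, beta2, sigma, n = 103.4, 6.38, 8.50, 20
X = rng.uniform(5, 15, size=n)   # fixed regressors, reused in every replication

estimates = []
for _ in range(10_000):
    u = rng.normal(0, sigma, size=n)   # fresh errors each sample, E(u) = 0
    Y = beta1 + beta2 * X + u
    x, y = X - X.mean(), Y - Y.mean()
    estimates.append((x * y).sum() / (x ** 2).sum())

print(np.mean(estimates))   # close to beta2 = 6.38: the estimator is unbiased
```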
-
8
An Unbiased Estimator
Unbiasedness: the mean of the distribution of sample estimates equals the parameter to be estimated.
The result $E(\hat{\beta}_2) = \beta_2$ means that the distribution of $\hat{\beta}_2$ is centered at $\beta_2$.
Since the distribution of $\hat{\beta}_2$ is centered at the true $\beta_2$, we say that $\hat{\beta}_2$ is an unbiased estimator of $\beta_2$.
-
9
Wrong Model Specification
The unbiasedness result on the previous slide assumes that we are using the correct model.
If the model is of the wrong form or is missing important variables, then $E(u_i) \neq 0$ and hence $E(\hat{\beta}_2) \neq \beta_2$.
For example, if the true model is $Y = \beta_1 + \beta_2 X_1 + \beta_3 X_2 + u$ but $X_2$ is omitted, the effective error term becomes $\varepsilon = \beta_3 X_2 + u$, and in general $E(\varepsilon_i) \neq 0$.
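The sketch below (not from the slides; all coefficients are hypothetical) illustrates the resulting bias in one common case: when the included regressor $X_1$ is correlated with the omitted $X_2$, the OLS slope on $X_1$ no longer centers on $\beta_2$.

```python
import numpy as np

# A minimal sketch of omitted-variable bias: the true model includes X2,
# but we regress Y on X1 only. All parameter values are hypothetical.
rng = np.random.default_rng(2)

beta1, beta2, beta3, sigma, n = 10.0, 2.0, 3.0, 1.0, 200
estimates = []
for _ in range(5_000):
    X1 = rng.normal(0, 1, size=n)
    X2 = 0.8 * X1 + rng.normal(0, 1, size=n)   # X2 correlated with X1
    u = rng.normal(0, sigma, size=n)
    Y = beta1 + beta2 * X1 + beta3 * X2 + u    # true model
    x, y = X1 - X1.mean(), Y - Y.mean()
    estimates.append((x * y).sum() / (x ** 2).sum())   # regress Y on X1 only

print(np.mean(estimates))   # well above beta2 = 2.0: the estimator is biased
```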
-
10
Unbiased Estimator of the Intercept
In a similar manner, the estimator $\hat{\beta}_1$ of the intercept or constant term can be shown to be an unbiased estimator of $\beta_1$ when the model is correctly specified:
$$E(\hat{\beta}_1) = \beta_1$$
-
11
$\hat{\sigma}_u^2 \rightarrow \operatorname{var}(\hat{\beta}_i)$: the error variance determines the variances of the estimators.
-
12
$$\hat{\beta}_2 = \frac{\sum(X_i - \bar{X})(Y_i - \bar{Y})}{\sum(X_i - \bar{X})^2} = \frac{\sum x_i y_i}{\sum x_i^2} = \frac{590.2}{92.5} = 6.38$$
$$\hat{\beta}_1 = \bar{Y} - \hat{\beta}_2\bar{X} = 169.4 - (6.38 \times 10.35) = 103.4$$

         Y      X       y      x     x²      xy     X²
       140      5  -29.40  -5.35  28.62  157.29     25
       157      9  -12.40  -1.35   1.82   16.74     81
       205     13   35.60   2.65   7.02   94.34    169
       162     10   -7.40  -0.35   0.12    2.59    100
       174     11    4.60   0.65   0.42    2.99    121
       165      9   -4.40  -1.35   1.82    5.94     81
Sum   3388    207       0      0   92.5   590.2   2235
Mean 169.4  10.35

(Only the first six of the $n = 20$ observations are shown; the sums and means are taken over all 20.)
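The arithmetic can be verified directly from the column sums; a minimal sketch:

```python
# Verify the slope and intercept above from the slide's column sums
# (the sums run over the full sample of n = 20 observations).
sum_xy, sum_x2 = 590.2, 92.5
Y_bar, X_bar = 169.4, 10.35

beta2_hat = sum_xy / sum_x2              # 590.2 / 92.5 = 6.38
beta1_hat = Y_bar - beta2_hat * X_bar    # 169.4 - 6.38 * 10.35 = 103.4
print(round(beta2_hat, 2), round(beta1_hat, 1))
```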
-
13
Variance of $\hat{\beta}_2$
$\hat{\beta}_2$ is a function of the $Y_i$ values, but $\operatorname{var}(\hat{\beta}_2)$ does not involve the $Y_i$ directly.
Given that both $Y_i$ and $u_i$ have variance $\sigma^2$, the variance of the estimator $\hat{\beta}_2$ is:
$$\operatorname{var}(\hat{\beta}_2) = \frac{\sigma^2}{\sum(X_i - \bar{X})^2} = \frac{\sigma^2}{\sum x_i^2}$$
$$\operatorname{se}(\hat{\beta}_2) = \sqrt{(8.50)^2 / 92.55} = \sqrt{0.7807} = 0.8836$$
-
14
Variance of $\hat{\beta}_1$
Given $\hat{\beta}_1 = \bar{Y} - \hat{\beta}_2\bar{X}$, the variance of the estimator $\hat{\beta}_1$ is:
$$\operatorname{var}(\hat{\beta}_1) = \sigma^2\left[\frac{\sum X_i^2}{n\sum(X_i - \bar{X})^2}\right] = \frac{\sigma^2 \sum X_i^2}{n\sum x_i^2}$$
$$\operatorname{se}(\hat{\beta}_1) = \sqrt{(8.50)^2 \times \frac{2235}{20 \times 92.55}} = \sqrt{87.238} = 9.34$$
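Both standard errors can be reproduced from the slide's numbers; a minimal sketch:

```python
import math

# Reproduce the standard errors using the slide's numbers:
# sigma_hat = 8.50, sum x_i^2 = 92.55, sum X_i^2 = 2235, n = 20.
sigma_hat, sum_x2, sum_X2, n = 8.50, 92.55, 2235, 20

var_b2 = sigma_hat ** 2 / sum_x2                 # sigma^2 / sum x_i^2
var_b1 = sigma_hat ** 2 * sum_X2 / (n * sum_x2)  # sigma^2 * sum X_i^2 / (n * sum x_i^2)

print(math.sqrt(var_b2))   # se(beta2_hat) ~ 0.8836
print(math.sqrt(var_b1))   # se(beta1_hat) ~ 9.34
```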
-
15
Covariance of $\hat{\beta}_1$ and $\hat{\beta}_2$
$$\operatorname{cov}(\hat{\beta}_1, \hat{\beta}_2) = \sigma^2\left[\frac{-\bar{X}}{\sum(X_i - \bar{X})^2}\right] = \frac{-\bar{X}\sigma^2}{\sum x_i^2}$$
If $\bar{X} = 0$, the covariance is zero: the slope estimate can change without affecting the intercept estimate.
-
16
Estimating the Variance of the Error Term, $\sigma^2$
$$\hat{\sigma}^2 = \frac{\sum_{i=1}^{n} \hat{u}_i^2}{n - 2}, \qquad \hat{u}_i = Y_i - \hat{\beta}_1 - \hat{\beta}_2 X_i$$
$\hat{\sigma}^2$ is an unbiased estimator of $\sigma^2$.
A smaller variance means higher efficiency; both the sample size $n$ and $\operatorname{var}(u_i \mid X_i)$ drive the variances of the $\hat{\beta}$s.
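A minimal simulation sketch (not from the slides; parameters borrowed from the worked example) of computing residuals and the unbiased variance estimate:

```python
import numpy as np

# Estimate the error variance from residuals on simulated data.
rng = np.random.default_rng(3)

beta1, beta2, sigma, n = 103.4, 6.38, 8.50, 20
X = rng.uniform(5, 15, size=n)
Y = beta1 + beta2 * X + rng.normal(0, sigma, size=n)

x, y = X - X.mean(), Y - Y.mean()
b2 = (x * y).sum() / (x ** 2).sum()
b1 = Y.mean() - b2 * X.mean()

u_hat = Y - b1 - b2 * X                    # residuals
sigma2_hat = (u_hat ** 2).sum() / (n - 2)  # unbiased: divide by n - 2
print(sigma2_hat)                          # on average close to sigma^2 = 72.25
```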
-
17
What Factors Determine the Variances and Covariance?
1. $\sigma^2$: more uncertainty about the $Y_i$ values means more uncertainty about $\hat{\beta}_1$, $\hat{\beta}_2$, and their relationship.
2. The more spread out the $X_i$ values, the more confidence we have in $\hat{\beta}_1$ and $\hat{\beta}_2$ (see the sketch after this list).
3. The larger the sample size, $n$, the smaller the variances and covariances.
4. When the (squared) $X_i$ values are far from zero (in either the + or - direction), the variance of $\hat{\beta}_1$ is larger.
5. Changing the slope, $\hat{\beta}_2$, has no effect on the intercept, $\hat{\beta}_1$, when the sample mean is zero. But if the sample mean is positive, the covariance between $\hat{\beta}_1$ and $\hat{\beta}_2$ will be negative, and vice versa.
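Here is the promised sketch for point 2 (not from the slides): holding $\sigma$ fixed, spreading out the $X$ values increases $\sum x_i^2$ and thus shrinks $\operatorname{var}(\hat{\beta}_2) = \sigma^2/\sum x_i^2$.

```python
import numpy as np

# Spreading out the X values shrinks var(beta2_hat) = sigma^2 / sum(x_i^2).
sigma = 8.50

X_narrow = np.linspace(9, 11, 20)    # tightly clustered regressors
X_wide = np.linspace(1, 19, 20)      # spread-out regressors

for X in (X_narrow, X_wide):
    sum_x2 = ((X - X.mean()) ** 2).sum()
    print(sigma ** 2 / sum_x2)       # variance of the slope estimator
```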
-
18
Gauss-Markov Theorem
Note: the Gauss-Markov theorem doesn't apply to non-linear estimators.
Given the assumptions of the classical linear regression model, the ordinary least squares (OLS) estimators $\hat{\beta}_1$ and $\hat{\beta}_2$ are the best linear unbiased estimators (BLUE) of $\beta_1$ and $\beta_2$. This means that $\hat{\beta}_1$ and $\hat{\beta}_2$ have the smallest variance of all linear unbiased estimators of $\beta_1$ and $\beta_2$.
-
19
3. The Gauss-Markov Theorem
B: Best (minimum variance, efficiency)
L: Linear (linear in the $Y_i$'s)
U: Unbiased ($E(\hat{\beta}_k) = \beta_k$)
E: Estimator (a formula, a functional form)
OLS estimators are BLUE.
Efficiency (B) and unbiasedness (U) are desirable properties of estimators.
-
20
[Figure: two sampling distributions of $\hat{\beta}_2$. An unbiased estimator's distribution is centered at the true value, $E(\hat{\beta}_2) = \beta_2$; a biased (overestimating) distribution is centered at $E(\hat{\beta}_2) > \beta_2$.]
-
21
Probability Distribution of Least Squares Estimators
$$\hat{\beta}_1 \sim N\!\left(\beta_1,\ \frac{\sigma_u^2 \sum X_i^2}{n\sum x_i^2}\right)$$
$$\hat{\beta}_2 \sim N\!\left(\beta_2,\ \frac{\sigma_u^2}{\sum x_i^2}\right)$$
-
22
Efficiency: an estimator is more efficient than another if it has a smaller variance.
[Figure: two sampling distributions with the same mean, the true value of $\beta_2$. Estimator 1 has the smaller variance and so is more efficient than estimator 2.]
-
23
Consistency: $\hat{\beta}_k$ is a consistent estimator of $\beta_k$ if, as the sample size gets larger, $\hat{\beta}_k$ becomes more accurate.
[Figure: sampling distributions of $\hat{\beta}_1$ for N = 5, N = 100, and N = 500, collapsing onto the true parameter value as N grows.]
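A simulation sketch (not from the slides; parameters hypothetical) mirroring the figure: the sampling spread of $\hat{\beta}_2$ shrinks as N grows from 5 to 500.

```python
import numpy as np

# The spread of beta2_hat shrinks as the sample size grows (consistency).
rng = np.random.default_rng(4)
beta1, beta2, sigma = 103.4, 6.38, 8.50

for n in (5, 100, 500):
    estimates = []
    for _ in range(2_000):
        X = rng.uniform(5, 15, size=n)
        Y = beta1 + beta2 * X + rng.normal(0, sigma, size=n)
        x, y = X - X.mean(), Y - Y.mean()
        estimates.append((x * y).sum() / (x ** 2).sum())
    print(n, np.std(estimates))   # standard deviation falls as n rises
```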
-
24
Summary of BLUE Estimators
Mean: $E(\hat{\beta}_1) = \beta_1$ and $E(\hat{\beta}_2) = \beta_2$
Variance: $\operatorname{var}(\hat{\beta}_1) = \dfrac{\sigma^2 \sum X_i^2}{n\sum x_i^2}$ and $\operatorname{var}(\hat{\beta}_2) = \dfrac{\sigma^2}{\sum x_i^2}$
Standard error (standard deviation) of estimator $\hat{\beta}_k$: $\operatorname{se}(\hat{\beta}_k) = \sqrt{\operatorname{var}(\hat{\beta}_k)}$
-
25
Estimated Error Variance
$$\hat{\sigma}_u^2 = \frac{\sum_{i=1}^{n} \hat{u}_i^2}{n - K - 1}, \qquad E(\hat{\sigma}_u^2) = \sigma_u^2$$
Standard Error of Regression (Estimate), SEE:
$$\hat{\sigma}_u = \sqrt{\frac{\sum_{i=1}^{n} \hat{u}_i^2}{n - K - 1}}$$
where $K$ is the number of independent variables, excluding the constant term. (For the simple regression, $K = 1$, this reduces to the $n - 2$ of slide 16.)
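A closing sketch (not from the slides): computing the SEE for a general regression with $K$ regressors plus a constant, using numpy's least-squares solver. The function name and the simulated data are hypothetical.

```python
import numpy as np

# Compute the standard error of regression (SEE) for K regressors
# plus a constant, dividing the residual sum of squares by n - K - 1.
def standard_error_of_regression(y: np.ndarray, X: np.ndarray) -> float:
    """X is n-by-K (without the constant column); returns sigma_u_hat."""
    n, K = X.shape
    Z = np.column_stack([np.ones(n), X])          # add the constant term
    coef, *_ = np.linalg.lstsq(Z, y, rcond=None)  # OLS coefficients
    u_hat = y - Z @ coef                          # residuals
    return np.sqrt((u_hat ** 2).sum() / (n - K - 1))

# Usage on simulated data with K = 2 hypothetical regressors:
rng = np.random.default_rng(5)
X = rng.normal(size=(100, 2))
y = 1.0 + X @ np.array([2.0, -0.5]) + rng.normal(0, 1.5, size=100)
print(standard_error_of_regression(y, X))         # close to the true sigma = 1.5
```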