Econ 329 - Statistical Properties of the OLS estimator

Sanjaya DeSilva

September 22, 2008


1 Overview

Recall that the true regression model is

Yi = β0 + β1Xi + ui (1)

Applying the OLS method to a sample of data, we estimate the sample

regression function

Yi = b0 + b1Xi + ei (2)

where the OLS estimators are,

b1 = ∑xiyi / ∑xi²

b0 = Ȳ − b1X̄

where xi = Xi − X̄ and yi = Yi − Ȳ are deviations from the sample means.
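These formulas can be checked numerically. The following is a minimal sketch assuming numpy and a small hypothetical data set (the numbers are illustrative, not from the text):

```python
import numpy as np

# Hypothetical data for illustration only
X = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
Y = np.array([2.1, 3.9, 6.2, 8.1, 9.8])

# Deviations from sample means: xi = Xi - Xbar, yi = Yi - Ybar
x = X - X.mean()
y = Y - Y.mean()

# OLS slope and intercept from the formulas above
b1 = np.sum(x * y) / np.sum(x ** 2)
b0 = Y.mean() - b1 * X.mean()

print(b1, b0)
```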

2 Unbiasedness

The OLS estimate b1 is simply a sample estimate of the population parameter

β1. For every random sample we draw from the population, we will get a

different b1. What then is the relationship between the b1 we obtain from a

random sample and the underlying β1 of the population?

To see this, start by rewriting the OLS estimator as follows;

b1 = ∑xiyi / ∑xi² (3)

= (1/∑xi²) ∑xi(Yi − Ȳ) (4)

= (1/∑xi²) (∑xiYi − ∑xiȲ) (5)

= (1/∑xi²) (∑xiYi − Ȳ∑xi) (6)

= (1/∑xi²) ∑xiYi (7)

where the last step uses the fact that ∑xi = 0, since the xi are deviations from the mean.

For Yi, we can substitute the expression for the true regression line in

order to obtain a relationship between b1 and β1.

b1 = (1/∑xi²) ∑xi(β0 + β1Xi + ui) (8)

= (1/∑xi²) (β0∑xi + β1∑xiXi + ∑xiui) (9)

= β1 + ∑kiui (10)

where

ki = xi / ∑xi² (11)

and the last step uses ∑xi = 0 and ∑xiXi = ∑xi².

From this expression, we see that b1 and β1 are in fact different. However, we can demonstrate that, under certain assumptions, the average b1 in repeated sampling would equal β1. To see this, take the expectation of both sides of the above expression,

E(b1) = β1 + E(∑kiui) (12)

If we assume that Xi, and therefore ki, is non-stochastic, we can rewrite this as

E(b1) = β1 + ∑kiE(ui) (13)


If we also assume that E(ui) = 0, we get

E(b1) = β1 (14)

When the expectation of the sample estimate equals the true parameter, we say that the estimator is unbiased. To recap, we find that if Xi is non-stochastic and E(ui) = 0, the OLS estimator is unbiased.

However, note that these two conditions are not necessary for unbiasedness. Suppose ki is stochastic. Then, b1 is still unbiased if,

E(∑kiui) = ∑E(kiui) = 0 (15)

That is, if X and u are uncorrelated, the OLS estimator is unbiased.
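Unbiasedness can be illustrated by simulation. Below is a Monte Carlo sketch (the setup is assumed, not from the text): X is held fixed across repeated samples, fresh errors with E(ui) = 0 are drawn each time, and the average b1 across samples should be close to the true β1.

```python
import numpy as np

# Monte Carlo sketch (assumed setup): fix X in repeated samples,
# draw fresh errors with E(ui) = 0, and check that the average b1
# across samples is close to the true beta1.
rng = np.random.default_rng(0)
beta0, beta1, sigma = 1.0, 2.0, 1.0
X = np.linspace(0.0, 10.0, 50)   # non-stochastic: fixed in repeated sampling
x = X - X.mean()

estimates = []
for _ in range(5000):
    u = rng.normal(0.0, sigma, size=X.size)   # E(ui) = 0
    Y = beta0 + beta1 * X + u
    estimates.append(np.sum(x * (Y - Y.mean())) / np.sum(x ** 2))

print(np.mean(estimates))   # close to beta1 = 2
```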

3 Variance of the Coefficient Estimate

The variance of the b1 sampling distribution is, by definition,

Var(b1) = E[b1 − E(b1)]² (16)

We showed in the previous section that, under certain classical assumptions, E(b1) = β1. Then,

Var(b1) = E[b1 − β1]² = E[∑kiui]² (17)

Expanding terms, we get

Var(b1) = E[∑ki²ui² + ∑i ∑j≠i kikjuiuj] (18)

= ∑ki²E(ui²) + ∑i ∑j≠i kikjE(uiuj) (19)


If we make the following two additional assumptions,

1. The variance of the error term is constant, i.e. Var(ui) = E[ui²] = σ²

2. The error terms of different observations are not correlated with each other, i.e. the covariance between all pairs of error terms is zero: E(uiuj) = 0 for all i ≠ j

the expression for the variance of b1 reduces to the following elegant form,

Var(b1) = ∑ki²σ² = σ² / ∑xi² (20)

Note that the variance of the slope coefficient depends on two things. The variance of the slope coefficient increases as

1. The variance of the error term increases

2. The sum of squared deviations of the independent variable decreases, i.e. as the X variable becomes more tightly clustered around its mean.
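Formula (20) can also be checked by simulation. The sketch below (assumed setup, illustrative only) compares the simulated sampling variance of b1 against σ²/∑xi²:

```python
import numpy as np

# Sketch (assumed setup): with X fixed and homoskedastic, uncorrelated
# errors, the simulated sampling variance of b1 should match the formula
# sigma^2 / sum(xi^2) from equation (20).
rng = np.random.default_rng(1)
beta0, beta1, sigma = 1.0, 2.0, 3.0
X = np.linspace(0.0, 10.0, 30)
x = X - X.mean()

# 20,000 repeated samples, one row of errors per sample
U = rng.normal(0.0, sigma, size=(20000, X.size))
Y = beta0 + beta1 * X + U
yc = Y - Y.mean(axis=1, keepdims=True)   # deviations of Y within each sample
draws = yc @ x / np.sum(x ** 2)          # b1 for every sample

theoretical = sigma ** 2 / np.sum(x ** 2)
print(np.var(draws), theoretical)   # the two should be close
```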

3.1 Estimate of The Variance of the Error Term

Even though the above expression is elegant, it is impossible to compute

the variance of the slope estimate because we don’t know the variance of

the underlying error term. We get around this problem by estimating the

variance of the error term, i.e. σ2 using the residuals obtained from OLS. It

can be shown that, under certain classical assumptions,

σ̂² = ∑ei² / (n − 2) (21)


is an unbiased estimator of σ², i.e.

E[σ̂²] = E[∑ei² / (n − 2)] = σ² (22)

For the formal proof, see the Gujarati Appendix. Note that this proof also depends crucially on the classical assumptions.

Note that the numerator of this unbiased estimator is the SSR (sum of squared residuals). The estimator itself is often called the Mean Square Residual. The square root of the estimator is called the standard error of the regression (SER) and is typically used as an estimate of the standard deviation of the error term.
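A short numerical sketch of equation (21) and the SER, assuming numpy and hypothetical data:

```python
import numpy as np

# Sketch (hypothetical data): estimate sigma^2 from the OLS residuals
# using the n - 2 degrees-of-freedom correction, then take the SER.
X = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
Y = np.array([2.0, 4.1, 5.9, 8.2, 9.9, 12.1])
n = X.size

x = X - X.mean()
b1 = np.sum(x * (Y - Y.mean())) / np.sum(x ** 2)
b0 = Y.mean() - b1 * X.mean()

e = Y - (b0 + b1 * X)                    # OLS residuals
sigma2_hat = np.sum(e ** 2) / (n - 2)    # Mean Square Residual, eq. (21)
ser = np.sqrt(sigma2_hat)                # standard error of the regression

print(sigma2_hat, ser)
```

Note that the residuals sum to zero and are orthogonal to X by construction of the OLS normal equations.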

4 The Efficiency of the OLS estimator

Under the classical assumptions, the OLS estimator b1 can be written as a

linear function of Y;

b1 = ∑kiYi (23)

where

ki = xi / ∑xi² (24)

Our goal now is to show that this OLS estimator has a lower variance

than any other linear estimator, i.e. the OLS estimator is efficient or best.

To do so, consider any other linear unbiased estimator,

b∗1 = ∑wiYi (25)

where wi is some other function of the xi.


The expected value of this estimator is,

E(b∗1) = ∑wiE(Yi) = β0∑wi + β1∑wiXi (26)

using E(Yi) = β0 + β1Xi.

Because b∗1 is unbiased,

E(b∗1) = β1 (27)

For this to be the case, it follows that,

∑wi = 0 (28)

∑wiXi = 1 (29)

It follows from these two identities that,

∑wixi = ∑wi(Xi − X̄) = ∑wiXi − X̄∑wi = 1 (30)

The variance of b∗1 is

Var(b∗1) = Var(∑wiYi) = ∑wi²Var(Yi) = σ²∑wi² (31)

If we rewrite the variance as,

Var(b∗1) = σ²∑(wi − ki + ki)² (32)

and expand this expression,

Var(b∗1) = σ²(∑(wi − ki)² + ∑ki² + 2∑ki(wi − ki)) (33)

Note that,

∑kiwi = ∑xiwi / ∑xi² = 1/∑xi² (34)

under the unbiasedness assumption made earlier.


In addition,

∑ki² = ∑xi² / (∑xi²)² = 1/∑xi² (35)

so that ∑kiwi = ∑ki², and the cross term 2∑ki(wi − ki) = 2(∑kiwi − ∑ki²) vanishes.

Therefore, the variance of b∗1 simplifies to,

Var(b∗1) = σ²(∑(wi − ki)² + ∑ki²) (36)

This expression is minimized when

wi = ki (37)

and the minimum variance is,

Var(b∗1) = σ²∑ki² (38)

This completes the proof that, under the classical assumptions, the OLS estimator has the least variance among all linear unbiased estimators.
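The Gauss-Markov comparison can be made concrete numerically. The sketch below (the alternative weights are an assumption chosen for illustration) builds a second set of weights wi that satisfy the unbiasedness conditions (28)-(29) and shows that ∑wi² exceeds ∑ki², so σ²∑wi² > σ²∑ki²:

```python
import numpy as np

# Sketch (assumed alternative weights): any wi with sum(wi) = 0 and
# sum(wi * Xi) = 1 gives a linear unbiased estimator, but its variance
# sigma^2 * sum(wi^2) is at least as large as the OLS variance.
X = np.linspace(1.0, 10.0, 10)
x = X - X.mean()
k = x / np.sum(x ** 2)               # OLS weights ki

# An alternative: weight only the two extreme observations
w = np.zeros_like(X)
w[0] = -1.0 / (X[-1] - X[0])
w[-1] = 1.0 / (X[-1] - X[0])

# Both weight vectors satisfy the unbiasedness conditions (28)-(29)
print(np.sum(w), np.sum(w * X))        # 0 and 1
print(np.sum(k ** 2), np.sum(w ** 2))  # OLS sum of squared weights is smaller
```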

4.1 Consistency

We established that the OLS estimator is unbiased and efficient under the classical assumptions. We can also show easily that the OLS estimator is consistent under the same assumptions.

An unbiased estimator is consistent if its variance approaches zero as the sample size increases. In order to see this, start with the expression for the variance,

Var(b1) = σ² / ∑xi² (39)

Divide both the numerator and denominator by n.

Var(b1) = (σ²/n) / (∑xi²/n) (40)


As n → ∞, the numerator approaches zero whereas the denominator

remains positive. Therefore,

lim n→∞ Var(b1) = 0 (41)
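This limit can be traced numerically. A small sketch (the design, with X spread over a fixed range, is an assumption for illustration): ∑xi² grows roughly linearly in n, so Var(b1) = σ²/∑xi² shrinks toward zero.

```python
import numpy as np

# Sketch (assumed design): as n grows with X drawn over a fixed range,
# sum(xi^2) grows roughly linearly in n, so Var(b1) shrinks toward zero.
sigma = 2.0
ns = [10, 100, 1000, 10000]
variances = []
for n in ns:
    X = np.linspace(0.0, 10.0, n)
    x = X - X.mean()
    variances.append(sigma ** 2 / np.sum(x ** 2))

print(list(zip(ns, variances)))   # Var(b1) falls as n rises
```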

5 Gauss-Markov Theorem and Classical Assumptions

To recap, we have demonstrated that the OLS estimator,

b1 = ∑xiyi / ∑xi² = ∑kiYi (42)

has the following properties;

1. Unbiased, i.e.

E(b1) = β1 (43)

2. Has variance,

Var(b1) = σ² / ∑xi² = σ²∑ki² (44)

3. Best or efficient, i.e. has lower variance than any other linear unbiased estimator,

Var(b1) < Var(b∗1) (45)

where b∗1 = ∑wiYi and wi is any other function of xi.

4. Consistent, i.e.

lim n→∞ Var(b1) = 0 (46)


if the following classical assumptions are satisfied,

1. The underlying regression model is linear in parameters, has an additive

error and is correctly specified, i.e.

Yi = β0 + β1f(Xi) + ui (47)

2. The X variable is non-stochastic, i.e. fixed in repeated sampling.

3. The expected value of the error term is zero, i.e.

E(ui) = 0 (48)

Note that the intercept term β0 ensures that this condition is met. Consider

Yi = β0 + β1Xi + ui (49)

E(ui) = k (50)

This is equivalent to a model where

Yi = β∗0 + β1Xi + u∗i (51)

β∗0 = β0 + k, u∗i = ui − k (52)

E(u∗i) = 0 (53)

so a nonzero mean in the error term is simply absorbed into the intercept.

Note also that the first three conditions are sufficient for OLS to be

unbiased.

4. The explanatory variable Xi is uncorrelated with the error term ui, i.e.

Cov(Xi, ui) = E[xiui] = 0 (54)


Note that this assumption is necessary for OLS to be unbiased. Even if xi is stochastic, we can obtain unbiased coefficients as long as xi is uncorrelated with the error term. Such correlation occurs typically if Xi is endogenous, i.e. determined within the system. If both Xi and Yi are determined by the same unobserved variables, this assumption is violated. If Xi and Yi are determined by each other, i.e. simultaneous equations, this assumption is also violated. For example, if

Yi = β0 + β1Xi + ui (55)

Xi = δ0 + δ1Yi + εi (56)

then Cor(Xi, ui) ≠ 0 if δ1 ≠ 0 and/or Cor(ui, εi) ≠ 0.
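The bias from simultaneity can be demonstrated by simulation. The sketch below (all parameter values are hypothetical) generates data from a two-equation system like (55)-(56) by solving for the reduced form, then shows that the OLS estimate of β1 is systematically off:

```python
import numpy as np

# Sketch (hypothetical system): when Xi depends on Yi (delta1 != 0),
# Xi is correlated with ui and the OLS estimate of beta1 is biased.
rng = np.random.default_rng(2)
beta0, beta1 = 0.0, 1.0
delta0, delta1 = 0.0, 0.5

estimates = []
for _ in range(2000):
    u = rng.normal(size=200)
    eps = rng.normal(size=200)
    # Reduced form: solve the two equations jointly for Y, then X
    Y = (beta0 + beta1 * delta0 + beta1 * eps + u) / (1.0 - beta1 * delta1)
    X = delta0 + delta1 * Y + eps
    x = X - X.mean()
    estimates.append(np.sum(x * (Y - Y.mean())) / np.sum(x ** 2))

print(np.mean(estimates))   # noticeably above the true beta1 = 1
```

With these parameter values the average OLS estimate settles around 1.2 rather than the true value of 1, illustrating the simultaneity bias.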

5. The error term is homoskedastic, i.e. the conditional variance is a

constant.

Var(ui|Xi) = E(ui²|Xi) = σ² (57)

6. The error term is serially uncorrelated, i.e. the error term of one observation is not correlated with the error term of any other observation.

Cov(ui, uj|Xi, Xj) = E(uiuj|Xi, Xj) = 0 for all i ≠ j (58)

The assumptions of serially uncorrelated and homoskedastic errors allow us to obtain an unbiased estimator for the variance of the error term, and a simple OLS formula for the variance of the coefficient estimate. In addition, we need these two assumptions to demonstrate that OLS is efficient. In fact, we will see later that GLS methods are efficient when these assumptions are violated.


There are a few other assumptions that are necessary to obtain OLS

coefficients and standard errors;

1. At least one degree of freedom, i.e. the number of observations must exceed the number of parameters (n > k + 1) where k is the number of X variables. In the simple regression with one X variable, this means there should be at least three observations.

2. No X variable should be a deterministic linear function of the other X variables, i.e. no perfect multicollinearity. This condition applies only to multiple regressions where there is more than one X variable, and is discussed later.

3. There should be some variation in the X variable. If the X variable

does not vary, it is impossible to estimate the slope of a regression line.
