The Simple Regression Model
V c Hong V
University of Economics HCMC
June 2015
Some Terminology
In the simple linear regression model, where y = β0 + β1x + u, we typically refer to y as the
dependent variable, or
left-hand side variable, or
explained variable, or
regressand
Some Terminology (cont)
In the simple linear regression of y on x, we typically refer to x as the
independent variable, or
right-hand side variable, or
explanatory variable, or
regressor, or
covariate, or
control variable.
A Simple Assumption
The average value of u, the error term, in the population is 0. That is,
E(u) = 0
This is not a restrictive assumption, since we can always use β0 to normalize E(u) to 0.
We need to make a crucial assumption about how u and x are related.
We want it to be the case that knowing something about x does not give us any information about u, so that they are completely unrelated. That is,
E(u|x) = E(u) = 0, which implies E(y|x) = β0 + β1x.
E(y|x) as a linear function of x, where for any x the distribution of y is centered about E(y|x).
Ordinary Least Squares
The basic idea of regression is to estimate the population parameters from a sample.
Let {(xi, yi): i = 1, . . . , n} denote a random sample of size n from the population.
For each observation in this sample, it will be the case that yi = β0 + β1xi + ui.
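As a concrete illustration of this setup, here is a minimal Python sketch that draws one random sample from such a population model; the parameter values (β0 = 1, β1 = 2), the sample size, and all variable names are illustrative assumptions, not taken from the slides.

```python
import numpy as np

rng = np.random.default_rng(42)

beta0, beta1 = 1.0, 2.0          # illustrative population parameters
n = 100                          # illustrative sample size

x = rng.uniform(0, 10, n)        # explanatory variable
u = rng.normal(0, 1, n)          # error term with E(u) = 0, drawn independently of x
y = beta0 + beta1 * x + u        # each observation satisfies yi = beta0 + beta1*xi + ui
```

The later sketches reuse this kind of simulated data to check the formulas derived on the following slides.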
Population regression line, sample data points and the associated error terms.
Deriving OLS Estimates
To derive the OLS estimates we need to realize that our main assumption of E(u|x) = E(u) = 0 also implies that Cov(x, u) = E(xu) = 0.
Why? Remember from basic probability that Cov(X, Y) = E(XY) − E(X)E(Y).
We can write our two restrictions just in terms of x, y, β0, and β1, since u = y − β0 − β1x:
E(y − β0 − β1x) = 0, and
E[x(y − β0 − β1x)] = 0.
These are called moment restrictions.
Deriving OLS using M.O.M
The method of moments approach to estimation implies imposing the population moment restrictions on the sample moments.
What does this mean? Recall that for E(X), the mean of a population distribution, a sample estimator of E(X) is simply the arithmetic mean of the sample.
We want to choose values of the parameters that will ensure that the sample versions of our moment restrictions are true:
(1/n) Σ (yi − β̂0 − β̂1xi) = 0
(1/n) Σ xi(yi − β̂0 − β̂1xi) = 0
where the sums run over i = 1, . . . , n.
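These two sample moment conditions are linear in the two unknowns, so they can be solved directly as a 2×2 linear system. The sketch below does this on simulated data; the data-generating values (β0 = 1, β1 = 2) and all names are illustrative assumptions only.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(0, 10, 100)
y = 1.0 + 2.0 * x + rng.normal(0, 1, 100)    # illustrative data: beta0 = 1, beta1 = 2

# The sample moment conditions, rearranged as A @ [b0, b1] = c:
#   sum(yi - b0 - b1*xi)      = 0   ->  n*b0       + sum(xi)*b1    = sum(yi)
#   sum(xi*(yi - b0 - b1*xi)) = 0   ->  sum(xi)*b0 + sum(xi**2)*b1 = sum(xi*yi)
A = np.array([[len(x),       x.sum()],
              [x.sum(), (x ** 2).sum()]])
c = np.array([y.sum(), (x * y).sum()])

b0_hat, b1_hat = np.linalg.solve(A, c)
print(b0_hat, b1_hat)                        # close to the illustrative 1 and 2
```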
More Derivation of OLS
Given the definition of a sample mean, and properties of summation, we can rewrite the first condition as follows:
ȳ = β̂0 + β̂1x̄, or
β̂0 = ȳ − β̂1x̄
Plugging this into the second condition gives
Σ xi(yi − (ȳ − β̂1x̄) − β̂1xi) = 0
Σ xi(yi − ȳ) = β̂1 Σ xi(xi − x̄)
Σ (xi − x̄)(yi − ȳ) = β̂1 Σ (xi − x̄)²
Solving for the OLS estimated slope gives
β̂1 = Σ (xi − x̄)(yi − ȳ) / Σ (xi − x̄)²
provided that Σ (xi − x̄)² > 0, and the intercept is
β̂0 = ȳ − β̂1x̄
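The sketch below checks these closed-form expressions on simulated data; the true values β0 = 1 and β1 = 2 and all variable names are illustrative assumptions, not part of the slides.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(0, 10, 100)
y = 1.0 + 2.0 * x + rng.normal(0, 1, 100)    # illustrative data: beta0 = 1, beta1 = 2

# Closed-form OLS slope and intercept
b1_hat = ((x - x.mean()) * (y - y.mean())).sum() / ((x - x.mean()) ** 2).sum()
b0_hat = y.mean() - b1_hat * x.mean()

print(b1_hat, b0_hat)                        # close to 2 and 1, respectively
```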
Summary of OLS slope estimate
The slope estimate is the sample covariance between x and y divided by the sample variance of x (the sketch after this list checks this numerically).
If x and y are positively correlated, the slope will be positive.
If x and y are negatively correlated, the slope will be negative.
We only need x to vary in our sample.
Intuitively, OLS fits a line through the sample points such that the sum of squared residuals is as small as possible, hence the term least squares.
The residual, û, is an estimate of the error term, u, and is the difference between the sample point and the fitted line (sample regression function).
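A minimal numerical check of the covariance/variance formulation, again on illustrative simulated data:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(0, 10, 100)
y = 1.0 + 2.0 * x + rng.normal(0, 1, 100)    # illustrative data

# Slope = sample covariance of (x, y) divided by sample variance of x
cov_xy = np.cov(x, y)[0, 1]
var_x = np.var(x, ddof=1)
slope_from_cov = cov_xy / var_x

# Same number as the closed-form OLS slope
slope_ols = ((x - x.mean()) * (y - y.mean())).sum() / ((x - x.mean()) ** 2).sum()
print(slope_from_cov, slope_ols)
```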
Sample regression line, sample data points and the associated estimated error terms.
Alternative approach to derivation
Given the intuitive idea of fitting a line, we can set up a formal minimization problem. That is, we want to choose our parameters such that we minimize the following:
Σ ûi² = Σ (yi − β̂0 − β̂1xi)²
If one uses calculus to solve the minimization problem for the two parameters, you obtain the following first order conditions, which are the same as we obtained before, multiplied by n:
Σ (yi − β̂0 − β̂1xi) = 0
Σ xi(yi − β̂0 − β̂1xi) = 0
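As a sanity check, the minimization can also be done numerically and compared with the closed-form solution. This sketch uses scipy.optimize.minimize on illustrative simulated data; the starting values and parameter choices are assumptions made only for the example.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
x = rng.uniform(0, 10, 100)
y = 1.0 + 2.0 * x + rng.normal(0, 1, 100)    # illustrative data: beta0 = 1, beta1 = 2

def ssr(params):
    # Sum of squared residuals for a candidate intercept b0 and slope b1
    b0, b1 = params
    return ((y - b0 - b1 * x) ** 2).sum()

result = minimize(ssr, x0=[0.0, 0.0])        # numerical minimization of the SSR

# Closed-form OLS solution for comparison
b1_hat = ((x - x.mean()) * (y - y.mean())).sum() / ((x - x.mean()) ** 2).sum()
b0_hat = y.mean() - b1_hat * x.mean()

print(result.x)                              # numerical minimizer (b0, b1)
print(b0_hat, b1_hat)                        # closed-form answer; the two agree closely
```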
Algebraic Properties of OLS
The sum of the OLS residuals is zero.
Thus, the sample average of the OLS residuals is zero as well.
The sample covariance between the regressors and the OLS residuals is zero.
The OLS regression line always goes through the mean of the sample.
In symbols:
Σ ûi = 0, and thus (1/n) Σ ûi = 0
Σ xi ûi = 0
ȳ = β̂0 + β̂1x̄
Each of these properties can be checked numerically, as in the sketch below.
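A minimal sketch verifying these algebraic properties on illustrative simulated data (all names and values are assumptions for the example):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(0, 10, 100)
y = 1.0 + 2.0 * x + rng.normal(0, 1, 100)    # illustrative data

b1_hat = ((x - x.mean()) * (y - y.mean())).sum() / ((x - x.mean()) ** 2).sum()
b0_hat = y.mean() - b1_hat * x.mean()
resid = y - b0_hat - b1_hat * x              # OLS residuals u_hat

print(resid.sum())                           # ~0: residuals sum to zero
print(resid.mean())                          # ~0: so their sample average is zero too
print((x * resid).sum())                     # ~0: x and the residuals are uncorrelated
print(y.mean(), b0_hat + b1_hat * x.mean())  # equal: the line passes through (x_bar, y_bar)
```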
More terminology
We can think of each observation as being made up of an explained part and an unexplained part, yi = ŷi + ûi. We then define the following:
Σ (yi − ȳ)² is the total sum of squares (SST)
Σ (ŷi − ȳ)² is the explained sum of squares (SSE)
Σ ûi² is the residual sum of squares (SSR)
Then SST = SSE + SSR
Proof that SST = SSE + SSR
Σ (yi − ȳ)² = Σ [(yi − ŷi) + (ŷi − ȳ)]²
= Σ [ûi + (ŷi − ȳ)]²
= Σ ûi² + 2 Σ ûi(ŷi − ȳ) + Σ (ŷi − ȳ)²
= SSR + 2 Σ ûi(ŷi − ȳ) + SSE
and we know that Σ ûi(ŷi − ȳ) = 0
Goodness-of-fit
How do we think about how well our sample regression line fits our sample data?
We can compute the fraction of the total sum of squares (SST) that is explained by the model; call this the R-squared of the regression:
R² = SSE/SST = 1 − SSR/SST
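The sketch below computes SST, SSE, SSR and the R-squared on illustrative simulated data, confirming the decomposition and the two equivalent formulas above (all names and values are assumptions for the example):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(0, 10, 100)
y = 1.0 + 2.0 * x + rng.normal(0, 1, 100)    # illustrative data

b1_hat = ((x - x.mean()) * (y - y.mean())).sum() / ((x - x.mean()) ** 2).sum()
b0_hat = y.mean() - b1_hat * x.mean()
y_fit = b0_hat + b1_hat * x                  # fitted values y_hat
resid = y - y_fit                            # residuals u_hat

sst = ((y - y.mean()) ** 2).sum()            # total sum of squares
sse = ((y_fit - y.mean()) ** 2).sum()        # explained sum of squares
ssr = (resid ** 2).sum()                     # residual sum of squares

print(np.isclose(sst, sse + ssr))            # True: SST = SSE + SSR
print(sse / sst, 1 - ssr / sst)              # the same R-squared either way
```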
Unbiasedness of OLS
Assume the population model is linear in parameters, y = β0 + β1x + u.
Assume we can use a random sample of size n, {(xi, yi): i = 1, 2, . . . , n}, from the population model. Thus we can write the sample model yi = β0 + β1xi + ui.
Assume E(u|x) = 0 and thus E(ui|xi) = 0.
Assume there is variation in the xi.
In order to think about unbiasedness, we need to rewrite our estimator in terms of the population parameters.
Start with a simple rewrite of the formula as
β̂1 = Σ (xi − x̄)yi / s²x, where s²x = Σ (xi − x̄)²
Unbiasedness of OLS (cont)
Σ (xi − x̄)yi = Σ (xi − x̄)(β0 + β1xi + ui)
= Σ (xi − x̄)β0 + Σ (xi − x̄)β1xi + Σ (xi − x̄)ui
= β0 Σ (xi − x̄) + β1 Σ (xi − x̄)xi + Σ (xi − x̄)ui
where Σ (xi − x̄) = 0 and Σ (xi − x̄)xi = Σ (xi − x̄)²
Unbiasedness of OLS (cont)
So the numerator can be written as β1s²x + Σ (xi − x̄)ui, and thus
β̂1 = β1 + Σ (xi − x̄)ui / s²x
Let di = (xi − x̄), so that β̂1 = β1 + (1/s²x) Σ diui. Then
E(β̂1) = β1 + (1/s²x) Σ di E(ui) = β1
Unbiasedness Summary
The OLS estimates of β1 and β0 are unbiased.
Proof of unbiasedness depends on our 4 assumptions - if any assumption fails, then OLS is not necessarily unbiased.
Remember, unbiasedness is a description of the estimator - in a given sample we may be "near" or "far" from the true parameter. The Monte Carlo sketch below illustrates this.
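A small Monte Carlo sketch of unbiasedness: many samples are drawn from the same population and the slope estimates are averaged. The true values (β0 = 1, β1 = 2), the sample size, and the number of replications are illustrative assumptions only.

```python
import numpy as np

rng = np.random.default_rng(0)
beta0, beta1 = 1.0, 2.0                      # illustrative true parameters
n, n_reps = 100, 5000                        # illustrative sample size and replications

x = rng.uniform(0, 10, n)                    # keep x fixed across replications
slopes = np.empty(n_reps)

for r in range(n_reps):
    u = rng.normal(0, 1, n)                  # fresh errors each time, with E(u|x) = 0
    y = beta0 + beta1 * x + u
    slopes[r] = ((x - x.mean()) * (y - y.mean())).sum() / ((x - x.mean()) ** 2).sum()

print(slopes.mean())                         # close to beta1 = 2: centered on the truth
print(slopes.min(), slopes.max())            # individual samples can still be near or far
```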
Variance of the OLS Estimators
Now we know that the sampling distribution of our estimate is centered around the true parameter.
We want to think about how spread out this distribution is.
It is much easier to think about this variance under an additional assumption, so
Assume Var(u|x) = σ² (homoskedasticity)
Variance of OLS (cont)
Var(u|x) = E(u²|x) − [E(u|x)]², and E(u|x) = 0, so σ² = E(u²|x) = E(u²) = Var(u). Thus σ² is also the unconditional variance, called the error variance.
σ, the square root of the error variance, is called the standard deviation of the error.
We can say: E(y|x) = β0 + β1x and Var(y|x) = σ²
Homoskedastic Case
Heteroskedastic Case
Variance of OLS (cont)
Var(β̂1) = Var(β1 + (1/s²x) Σ diui)
= (1/s²x)² Var(Σ diui)
= (1/s²x)² Σ di² Var(ui)
= (1/s²x)² Σ di² σ²
= σ² (1/s²x)² Σ di²
= σ² (1/s²x)² s²x
= σ²/s²x = Var(β̂1)
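This formula can be checked against a simulation: the sampling variance of the slope estimate across many replications should match σ²/s²x. The parameter values and names below are illustrative assumptions only.

```python
import numpy as np

rng = np.random.default_rng(0)
beta0, beta1, sigma = 1.0, 2.0, 1.0          # illustrative parameters; Var(u) = sigma**2
n, n_reps = 100, 5000

x = rng.uniform(0, 10, n)                    # x held fixed across replications
s2x = ((x - x.mean()) ** 2).sum()            # s_x^2 = sum of squared deviations of x

slopes = np.empty(n_reps)
for r in range(n_reps):
    y = beta0 + beta1 * x + rng.normal(0, sigma, n)
    slopes[r] = ((x - x.mean()) * (y - y.mean())).sum() / s2x

print(slopes.var(ddof=1))                    # simulated sampling variance of beta1_hat
print(sigma ** 2 / s2x)                      # analytic value sigma^2 / s_x^2
```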
Variance of OLS Summary
The larger the error variance, σ², the larger the variance of the slope estimate.
The larger the variability in the x, the smaller the variance of the slope estimate.
As a result, a larger sample size should decrease the variance of the slope estimate, since adding observations increases the variation in the xi.