MA Advanced Econometrics: Applying Least Squares to Time Series


  • MA Advanced Econometrics: Applying Least Squares to Time Series

    Karl Whelan

    School of Economics, UCD

    February 15, 2011

    Karl Whelan (UCD) Time Series February 15, 2011 1 / 24

  • Part I

    Time Series: Standard Asymptotic Results


  • OLS Estimates of AR(n) Models Are Biased

    Consider the AR(1) model

    y_t = \rho y_{t-1} + \epsilon_t   (1)

    The OLS estimator for a sample of size T is

    \hat{\rho} = \frac{\sum_{t=2}^{T} y_{t-1} y_t}{\sum_{t=2}^{T} y_{t-1}^2}   (2)

        = \rho + \frac{\sum_{t=2}^{T} y_{t-1} \epsilon_t}{\sum_{t=2}^{T} y_{t-1}^2}   (3)

        = \rho + \sum_{t=2}^{T} \left( \frac{y_{t-1}}{\sum_{s=2}^{T} y_{s-1}^2} \right) \epsilon_t   (4)

    \epsilon_t is independent of y_{t-1}, so E(y_{t-1}\epsilon_t) = 0. However, \epsilon_t is not independent of the sum \sum_{t=2}^{T} y_{t-1}^2. If \rho is positive, then a positive shock to \epsilon_t raises current and future values of y_t, all of which are in the sum \sum_{t=2}^{T} y_{t-1}^2. This means there is a negative correlation between \epsilon_t and y_{t-1} / \sum_{t=2}^{T} y_{t-1}^2, so E\hat{\rho} < \rho. (More on this later.)
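The downward bias can be seen in a small Monte Carlo experiment. The following Python sketch is not from the slides; it assumes numpy, and the values of \rho, T, and the number of replications are illustrative. It regresses y_t on y_{t-1} with no intercept and averages the OLS estimates:

```python
# Monte Carlo illustration of the downward bias of OLS in an AR(1).
import numpy as np

def ols_ar1(y):
    """OLS slope from regressing y_t on y_{t-1} (no intercept)."""
    return np.sum(y[:-1] * y[1:]) / np.sum(y[:-1] ** 2)

def simulate_bias(rho=0.9, T=50, reps=5000, seed=0):
    """Average OLS estimate of rho across Monte Carlo replications."""
    rng = np.random.default_rng(seed)
    estimates = []
    for _ in range(reps):
        eps = rng.standard_normal(T)
        y = np.zeros(T)
        for t in range(1, T):
            y[t] = rho * y[t - 1] + eps[t]
        estimates.append(ols_ar1(y))
    return float(np.mean(estimates))

print(simulate_bias())  # average estimate lies below the true rho = 0.9
```

With T = 50 and \rho = 0.9, the average estimate falls visibly below 0.9, in line with E\hat{\rho} < \rho.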


  • Time Series: Serially Dependent Observations

    OLS estimates of AR models are biased. What about consistency? Do the estimates get closer to the correct value as samples get larger? The recipe for deriving asymptotic properties of estimators has been to use a Law of Large Numbers and a Central Limit Theorem. Up to now, we have only discussed regressions using observations that are independently distributed and have used versions of the LLN and CLT for independent observations.

    However, observations from time series are not independent. For instance, for an AR(n) process of the form

    y_t = \alpha + \rho_1 y_{t-1} + \rho_2 y_{t-2} + ... + \rho_n y_{t-n} + \epsilon_t   (5)

    each observation depends on what happens in the past.

    LLNs and CLTs may not work for time series. For example, suppose y_t is highly autocorrelated, so that when the series has high values, it tends to stay high, and when the series is low, it tends to stay low. This might mean that even if we have a lot of observations, we can't necessarily be sure that the sample average is a good estimator of the population average.

    It turns out there are some conditions under which WLLNs and CLTs hold for time series, but these conditions sometimes don't hold.
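A quick illustrative simulation (assuming Python with numpy; the parameter values are arbitrary) shows how much more dispersed the sample mean is for a highly autocorrelated AR(1) than for independent draws over the same sample length:

```python
# Sketch: dispersion of the sample mean for autocorrelated vs. independent data.
import numpy as np

def sample_mean_sd(rho, T, reps=2000, seed=1):
    """Std. dev. of the sample mean of an AR(1) across Monte Carlo replications."""
    rng = np.random.default_rng(seed)
    means = []
    for _ in range(reps):
        eps = rng.standard_normal(T)
        y = np.zeros(T)
        for t in range(1, T):
            y[t] = rho * y[t - 1] + eps[t]
        means.append(y.mean())
    return float(np.std(means))

# With T = 200, the sample mean is far noisier when rho = 0.95 than when rho = 0
print(sample_mean_sd(0.95, 200), sample_mean_sd(0.0, 200))
```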


  • Some Definitions

    We say that a time series of observations {y_t} is covariance (weakly) stationary if E y_t = \mu for all t and Cov(y_t, y_{t-k}) is independent of t.

    As y_t moves up and down, E(y_t | y_{t-1}) will generally change. However, the stationarity here refers to the ex ante unconditional distribution, i.e. the distribution of outcomes that would have been expected before time began.

    We say that {y_t} is strictly stationary if the unconditional joint distribution of {y_t, y_{t-1}, ..., y_{t-k}} is independent of t for all k.

    For a weakly stationary series, let Cov(y_t, y_{t-k}) = \gamma(k). We say that {y_t} is ergodic if \gamma(k) \to 0 as k \to \infty.
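The decay of autocovariances required for ergodicity can be illustrated with a stationary AR(1), where Cov(y_t, y_{t-k}) = \rho^k Var(y_t). A minimal numerical sketch (the values of \rho and \sigma^2 are mine, chosen for illustration):

```python
# Autocovariances of a stationary AR(1): gamma(k) = rho**k * sigma_y^2.
# rho and sigma2 are illustrative values, not from the slides.
rho, sigma2 = 0.8, 1.0
sigma_y2 = sigma2 / (1 - rho ** 2)            # stationary variance of y_t
gammas = [rho ** k * sigma_y2 for k in range(6)]
print(gammas)  # geometrically decreasing toward zero
```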

    Let F_t = {y_t, y_{t-1}, ..., y_{t-k}}. We say that e_t is a martingale difference sequence (MDS) if E(e_t | F_{t-1}) = 0.

    Armed with these definitions, we can state some theorems that allow us to make statements about the asymptotic behaviour of least squares estimates of time series models.


  • Time Series Versions of LLN and CLT

    Ergodic Theorem: If y_t is strictly stationary and ergodic and E|y_t| < \infty, then as T \to \infty

    \bar{y} = \frac{1}{T} \sum_{t=1}^{T} y_t \to_p E(y_t)   (6)

    MDS Central Limit Theorem: If u_t is a strictly stationary and ergodic MDS and E(u_t u_t') = \Omega < \infty, then as T \to \infty

    \frac{1}{\sqrt{T}} \sum_{t=1}^{T} u_t \to_d N(0, \Omega)   (7)

    The following will be useful in applying these results:

    I If y_t is strictly stationary and ergodic and x_t = f(y_t, y_{t-1}, ...) is a random variable, then x_t is also strictly stationary and ergodic.

    I For the AR(1) model y_t = \alpha + \rho y_{t-1} + \epsilon_t, the series is strictly stationary and ergodic if |\rho| < 1.

    I For the AR(k) model y_t = \alpha + \rho_1 y_{t-1} + ... + \rho_k y_{t-k} + \epsilon_t, the series is strictly stationary and ergodic if the roots of the polynomial z^k - \rho_1 z^{k-1} - ... - \rho_k are all less than one in absolute value.
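The root condition above can be checked numerically. A sketch assuming Python with numpy; the helper name `is_stationary` and the example coefficients are mine, not from the slides:

```python
# Check stationarity of an AR(k) via the roots of the characteristic polynomial.
import numpy as np

def is_stationary(rhos):
    """True if all roots of z^k - rho_1 z^{k-1} - ... - rho_k lie inside the unit circle."""
    coeffs = np.concatenate(([1.0], -np.asarray(rhos, dtype=float)))
    return bool(np.all(np.abs(np.roots(coeffs)) < 1.0))

print(is_stationary([0.5, 0.3]))  # True: both roots inside the unit circle
print(is_stationary([0.6, 0.5]))  # False: one root exceeds one in absolute value
```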


  • Estimating an AR(k) Regression

    Consider estimating the AR(k) model y_t = \alpha + \rho_1 y_{t-1} + ... + \rho_k y_{t-k} + \epsilon_t. Let

    x_t = ( 1 \; y_{t-1} \; y_{t-2} \; ... \; y_{t-k} )'

    \beta = ( \alpha \; \rho_1 \; \rho_2 \; ... \; \rho_k )'

    The vector x_t is strictly stationary and ergodic, which means that x_t x_t' also is. Thus, we can use the ergodic theorem to show that

    \frac{1}{T} \sum_{t=1}^{T} x_t x_t' \to_p E(x_t x_t') = Q   (10)

    We can also show that x_t \epsilon_t is stationary and ergodic, so

    \frac{1}{T} \sum_{t=1}^{T} x_t \epsilon_t \to_p E(x_t \epsilon_t) = 0   (11)

    This means OLS estimators, though biased, are consistent:

    \hat{\beta} - \beta = \left( \frac{1}{T} \sum_{t=1}^{T} x_t x_t' \right)^{-1} \left( \frac{1}{T} \sum_{t=1}^{T} x_t \epsilon_t \right) \to_p Q^{-1} \cdot 0 = 0   (12)
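Consistency can be illustrated by simulation. A sketch assuming Python with numpy, using the AR(1) special case for brevity; the true \rho, sample sizes, and replication count are illustrative choices of mine:

```python
# Sketch: the OLS estimate of rho tightens around the true value as T grows.
import numpy as np

def ar1_ols(rho, T, seed):
    """Simulate an AR(1) of length T and return the no-intercept OLS estimate."""
    rng = np.random.default_rng(seed)
    y = np.zeros(T)
    for t in range(1, T):
        y[t] = rho * y[t - 1] + rng.standard_normal()
    return np.sum(y[:-1] * y[1:]) / np.sum(y[:-1] ** 2)

# Mean absolute deviation of the estimate from rho = 0.6 at several sample sizes
errs = {T: abs(float(np.mean([ar1_ols(0.6, T, s) for s in range(200)])) - 0.6)
        for T in (50, 500, 5000)}
print(errs)  # the deviation shrinks as T grows
```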


  • Asymptotic Distribution of OLS Estimator

    Let u_t = x_t e_t. This is an MDS because

    E(u_t | F_{t-1}) = E(x_t e_t | F_{t-1}) = x_t E(e_t | F_{t-1}) = 0   (13)

    Applying the MDS version of the CLT,

    \frac{1}{\sqrt{T}} \sum_{t=1}^{T} x_t e_t \to_d N(0, \Omega)   (14)

    where

    \Omega = E(x_t x_t' e_t^2)   (15)

    This means that if y_t is an AR(k) process that is strictly stationary and ergodic and E y_t^4 < \infty, then

    \sqrt{T} (\hat{\beta} - \beta) \to_d N(0, Q^{-1} \Omega Q^{-1})   (16)

    The condition E y_t^4 < \infty is required for the covariance matrix \Omega to be finite.


  • Example AR(1) Regression

    Consider using OLS to estimate the coefficient \rho for the series

    y_t = \rho y_{t-1} + \epsilon_t   (17)

    The previous results tell us that

    \sqrt{T} (\hat{\rho} - \rho) \to_d N(0, V)   (18)

    where

    V = \frac{E(y_{t-1}^2 \epsilon_t^2)}{(E y_{t-1}^2)^2}   (19)

    Letting Var(\epsilon_t) = \sigma^2, we can calculate the asymptotic variance of y_t from

    Var(y_t) = \rho^2 Var(y_{t-1}) + \sigma^2  \Rightarrow  Var(y_t) = \frac{\sigma^2}{1 - \rho^2} = \sigma_y^2   (20)

    Because \epsilon_t is independent of y_{t-1},

    V = \frac{\sigma^2 \sigma_y^2}{(\sigma_y^2)^2} = \frac{\sigma^2}{\sigma_y^2} = 1 - \rho^2   (21)

    so

    \sqrt{T} (\hat{\rho} - \rho) \to_d N(0, 1 - \rho^2)


  • Part II

    Unit Roots


  • What Happens When = 1?

    Consider the process

    y_t = y_{t-1} + \epsilon_t   (22)

    where Var(\epsilon_t) = \sigma^2 for all t and which began with the observation y_0.

    We can apply repeated substitution to this series to get

    y_t = \epsilon_t + \epsilon_{t-1} + \epsilon_{t-2} + ... + \epsilon_1 + y_0   (23)

    This implies that

    Var(y_t) = \sigma^2 t   (24)

    Cov(y_t, y_{t-k}) = Cov(\epsilon_t + \epsilon_{t-1} + ..., \epsilon_{t-k} + \epsilon_{t-k-1} + ...)   (25)

        = \sigma^2 (t - k)   (26)

    This series is not covariance stationary (the covariances depend on t) and it is not ergodic (covariances with far-past observations don't go to zero).

    If we consider a process of the form y_t = \alpha + y_{t-1} + \epsilon_t, then it is not covariance stationary, not ergodic, and E y_t \to \infty as t \to \infty.
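The linear growth of Var(y_t) in (24) is easy to confirm by simulating many random-walk paths (a sketch assuming Python with numpy; the number of paths and dates are illustrative):

```python
# Sketch: for a driftless random walk with sigma = 1, Var(y_t) = t, so the
# cross-section variance of simulated paths grows linearly with t.
import numpy as np

rng = np.random.default_rng(3)
reps, T = 4000, 400
paths = np.cumsum(rng.standard_normal((reps, T)), axis=1)  # y_t = sum of shocks
v100, v400 = paths[:, 99].var(), paths[:, 399].var()
print(v100, v400)  # roughly 100 and 400
```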


  • Asymptotics with Unit Root Processes?

    We have derived that the OLS estimator applied to the AR(1) process y_t = \rho y_{t-1} + \epsilon_t has the property that

    \sqrt{T} (\hat{\rho} - \rho) \to_d N(0, 1 - \rho^2)

    You might be tempted to think that we can use the logic underlying this argument to prove that the variance of this asymptotic distribution goes to zero when \rho = 1, i.e. that the distribution collapses on the true value \rho. It turns out that this is indeed the case. However, you cannot use any of the previous arguments to prove this.

    Indeed, none of the previous arguments proving asymptotic normality hold, because the assumptions underlying the Ergodic Theorem or the MDS version of the CLT do not hold in this case. The y_t series

    I Is not covariance stationary (variances increase as t gets larger).

    I Is not ergodic (the covariance with long-past observations does not go to zero).

    I Does not tend towards a finite variance.

    Similarly, the previous arguments do not apply to any AR(k) series in which one is a root of the polynomial 1 - \rho_1 L - ... - \rho_k L^k. Series of this sort are referred to as unit root processes.


  • Non-Normal Distributions When = 1 With No Drift

    For the process for which y_t = y_{t-1} + \epsilon_t (i.e. a unit root with no intercept or drift), the asymptotic distribution of the OLS estimator depends upon the regression specification:

    1 If no intercept is included and y_t is regressed on y_{t-1}, the distribution of \hat{\rho} is non-Normal and skewed, with most of the estimates below one (see next page), and the distribution of the t statistic testing H_0: \rho = 1 is nonstandard.

    2 If an intercept is included in the regression, then the skewness and downward bias are far more serious (see the page after next). The critical value for rejecting the null of \rho = 1 at the 5% level changes from -1.95 with no intercept to -2.86 in large samples.

    3 The usual test procedures for these cases involve using the critical values derived by Dickey and Fuller (1976). While an analytical asymptotic distribution exists, people usually use values from finite-sample distributions obtained by Monte Carlo (computer simulation) methods.
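Critical values of this kind can be generated by exactly the Monte Carlo procedure described above. A sketch assuming Python with numpy, for the no-intercept case; the sample length and replication count are illustrative:

```python
# Sketch: Monte Carlo of the t statistic for H0: rho = 1 in a no-intercept
# regression on a pure random walk; the distribution sits to the left of N(0,1).
import numpy as np

def df_tstat(T, rng):
    """t statistic for H0: rho = 1 from regressing y_t on y_{t-1}, no intercept."""
    y = np.cumsum(rng.standard_normal(T))            # random walk, y_0 = 0
    x = y[:-1]
    rho_hat = np.sum(x * y[1:]) / np.sum(x ** 2)
    e = y[1:] - rho_hat * x
    s2 = np.sum(e ** 2) / (len(e) - 1)               # residual variance
    se = np.sqrt(s2 / np.sum(x ** 2))                # standard error of rho_hat
    return (rho_hat - 1.0) / se

rng = np.random.default_rng(11)
stats = np.array([df_tstat(500, rng) for _ in range(3000)])
print(np.mean(stats), np.quantile(stats, 0.05))  # negative mean; left 5% tail
```

The simulated 5% quantile is in the neighbourhood of the -1.95 critical value quoted above.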


  • Distribution of \hat{\rho} Under Unit Root (No Drift): No Constant in Regression (T = 1000)

    [Histogram of simulated \hat{\rho} values: Mean 0.99824, Std Error 0.00319, Skewness -2.21045, Excess Kurtosis 7.72654]


  • Distribution of t test of H_0: \rho = 1 Under Unit Root (No Drift): No Constant in Regression (T = 1000)

    [Histogram of simulated t statistics; horizontal axis runs from -5.0 to 5.0]