Time Series Analysis Exercises

Universität Potsdam
Hans Gerhard Strohe
Potsdam 2005

I Typical exercises and solutions

1 For the theoretical modelling of national economic development scenarios, the following two models for the GDP increment are analysed:

a. Yt = Yt-1 + at

b. Yt = 1.097 Yt-1 - 0.97 Yt-2 + at,

where Y - GDP increment

a - white noise with zero mean and constant variance σ² = 100

t - Time (quarters starting with Q1, 1993)

Check by an algebraic criterion which one is stationary.


Answer:

a) The process can be written

φ(L)Yt = at with the lag polynomial

φ(L) = 1 - L and L being the lag or backshift operator, LYt = Yt-1.

From this follows the characteristic equation

φ(z) = 1-z = 0

The root, i.e. the solution of it, is z = 1. Its modulus is not greater than 1, i.e. it does not lie outside the unit circle. Hence the process is not stationary. It is a “unit root” process or, more specifically, a random walk.

b) The process can be written

φ(L)Yt = at with the lag polynom

φ(L) = 1 - 1.097L + 0.97L².

From this follows the characteristic equation

φ(z) = 1 - 1.097z + 0.97z² = 0

The roots (complex numbers) of it are

z1 = 0.57 + 0.84i
z2 = 0.57 - 0.84i

with |z1,2| = √(0.57² + 0.84²) = 1.015 > 1.

That means that both roots lie outside the unit circle, which is a sufficient condition for the stationarity of the process.
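The algebraic criterion can also be checked numerically. The following is a minimal sketch in Python with NumPy (not part of the original solution):

```python
import numpy as np

# Model (b): characteristic polynomial phi(z) = 1 - 1.097 z + 0.97 z^2.
# np.roots expects the coefficients in order of decreasing powers of z.
roots = np.roots([0.97, -1.097, 1.0])
print(roots)          # approx. 0.57 +/- 0.84i
print(np.abs(roots))  # approx. 1.015 > 1: both roots outside the unit circle

# Model (a): phi(z) = 1 - z has the root z = 1, a unit root (random walk).
print(np.roots([-1.0, 1.0]))  # [1.]
```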


2 The variable l (= labour) in the file employees.dat denotes the quantity of labour force, i.e. the number of employees, in a big German company from January 1995 till December 2004.

i. Display the graph of the time series lt.

ii. What are the characteristics of a stationary time series? Is the time series lt likely to be stationary? Check it first by the naked eye.

iii. Test the stationarity of lt by a suitable procedure. Determine the degree of integration.

iv. Estimate the correlation and the partial correlation function of the process. Give a description and an interpretation.

v. What type of basic model could fit the time series lt? Why?

vi. Estimate the parameters of the model assumed in question v.

vii. Estimate alternative models and compare them by suitable indicators.

viii. Forecast the time series for 2005.


ix. A particular analysis method of time series lt results in the following graph.

Fig. 1: A special diagnostic function (three estimates, labelled Bartlett, Tukey and Parzen, plotted against the circular frequency)

What is the name of this particular analysis method? What can you derive from the curve? What should be the typical shape of the function estimated for a process of the type assumed in question v?


Answers:

i. The graph of the time series lt.

Fig. 2: Graph of the employees time series l (monthly values, 1995–2004)

ii. The main characteristics of a stationary time series are that the mean µt and the variance σt² of the stochastic process generating this special time series are independent of time t,

µt = µ = const,  σt² = σ² = const,

and that the autocovariances γt1,t2 depend only on the time difference τ:

γt1,t2 = γt1-t2 = γτ with τ = t1 - t2 (lag).

A check of the graph by the naked eye gives no reason to assume the time series not to be stationary. At first glance there does not appear any trend or relevant development of the variance.


iii. Test of stationarity by the DF test.

The following Dickey-Fuller regression of ∆lt on lt-1 produces a t-value of -3.63. Application of an augmented Dickey-Fuller regression is not necessary, because the augmented model would have higher values of the Schwarz criterion (SIC, version on the base of the error variance).

Table 1:
Dickey-Fuller Test Equation
Dependent Variable: ∆lt
Method: Least Squares
Sample (adjusted): 1995M02 2004M12; observations: 119

Variable   Coefficient   Std. Error   t-Statistic   Prob.
lt-1       -0.200468     0.055277     -3.626566     0.0004
C          1096.460      301.8879     3.632011      0.0004

R-squared 0.101051             Mean dependent var 2.038202
Adjusted R-squared 0.093368    S.D. dependent var 92.83931
S.E. of regression 88.39903    Akaike info criterion(σ) 11.81826
Sum squared resid 914283.5     Schwarz criterion(σ) 11.86497

The critical value of the t-statistic for a model with intercept c is -2.89 on the 5% level:

Table 2:
Null Hypothesis: l has a unit root
Exogenous: Constant
Lag Length: 0 (Automatic based on SIC, MAXLAG=12)

Test critical values:   1% level   -3.486064
                        5% level   -2.885863

The measured t-value exceeds the critical value downwards. That means that the null hypothesis of nonstationarity, i.e. of a unit root, can be rejected on the 5% level; the time series proves stationary on this confidence level. As lt is stationary, its degree of integration is 0, i.e. lt ~ I(0).
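The same test can be reproduced outside EViews. A minimal sketch in Python with statsmodels, assuming employees.dat is a whitespace-separated text file with a column named l:

```python
import pandas as pd
from statsmodels.tsa.stattools import adfuller

l = pd.read_csv("employees.dat", sep=r"\s+")["l"]  # file layout assumed
stat, pvalue, usedlag, nobs, crit, icbest = adfuller(l, regression="c", autolag="BIC")
print(f"ADF statistic {stat:.3f}, p-value {pvalue:.4f}, 5% critical {crit['5%']:.3f}")
# The unit root null is rejected if the statistic lies below the critical value.
```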


iv. Sample correlation and partial correlation functions of the process.

The sample autocorrelation function (AC) for a short series can be calculated using the formula for the sample autocovariance (9.7)

cτ = (1/T) Σt=1…T-τ (xt - x̄)(xt+τ - x̄)

and the sample autocorrelation (9.8)

rτ = cτ / sx²

with x̄ and sx being the average and the standard deviation of the time series, respectively. The partial sample autocorrelation function (PAC) can be obtained by linear OLS regression of xt on xt-1, xt-2, …, xt-τ corresponding to formula (9.9):

xt = φτ0 + φτ1 xt-1 + … + φττ xt-τ

The partial correlation coefficient ρpart(τ) is the coefficient φττ of the highest order lagged variable xt-τ. By calculating the first 5 regressions this way we obtain the following highest order coefficients, respectively: φ11 = 0.799532; φ22 = -0.037588; φ33 = 0.075722; φ44 = 0.008910; φ55 = 0.115970. We can compare these “manually” calculated coefficients with the partial autocorrelation function displayed in the following table and figure, together with the sample autocorrelation function, delivered as a whole in one step by EViews. The differences between the regression results and the EViews output are probably caused by different estimation methods.
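Both functions can also be estimated in one step, e.g. with statsmodels. A short sketch, reusing the series l loaded in the sketch above:

```python
from statsmodels.tsa.stattools import acf, pacf

ac = acf(l, nlags=24)                  # sample autocorrelation function
pac = pacf(l, nlags=24, method="ols")  # PAC via OLS regressions, as in (9.9)
print(pac[1:6])  # compare with the manually estimated phi_11 ... phi_55
```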


Table 3:

τ AC PAC τ AC PAC τ AC PAC

1 0.794 0.794 9 0.039 -0.008 17 -0.157 -0.066

2 0.616 -0.042 10 0.014 0.049 18 -0.154 0.038

3 0.495 0.052 11 -0.004 -0.027 19 -0.125 0.042

4 0.390 -0.028 12 -0.050 -0.092 20 -0.096 0.038

5 0.348 0.118 13 -0.061 0.069 21 -0.098 -0.083

6 0.298 -0.038 14 -0.100 -0.082 22 -0.142 -0.127

7 0.205 -0.115 15 -0.139 -0.042 23 -0.150 0.046

8 0.109 -0.083 16 -0.137 0.013 24 -0.183 -0.134

Fig. 3: Correlogram (sample autocorrelation AC and partial autocorrelation PAC, plotted against the lag τ)

The bold curve shows the sample autocorrelation function; the thin one displays the partial autocorrelation, both depending on the lag τ. While the smooth autocorrelation continuously decreases from 1 towards zero and below, to a limit of about -0.1, the partial autocorrelation starts as well at the value 1 (for τ = 0) and has the same value as the autocorrelation for τ = 1, but then drops down to zero and remains there, oscillating between -0.1 and 0.1.


v. Possible type of basic model fitting the time series l.

The basic model could be a first order autoregressive process, because the AC is exponentially decreasing and the PAC drops off after τ = 1. The shape of the AC is typical for the autocorrelation function of an AR process with positive coefficients. The partial autocorrelation drops down to and remains close to zero after the lag τ = 1, which indicates an AR(1) process.

vi. Estimated parameters of the model assumed in answer v.

Table 4:
Dependent Variable: lt
Method: Least Squares
Sample: 1995M02 2004M12; observations: 119

Variable   Coefficient   Std. Error   t-Statistic   Prob.
CONST      1096.460      301.8879     3.632011      0.0004
lt-1       0.799532      0.055277     14.46398      0.0000

R-squared 0.641332             Mean dependent var 5461.386
Adjusted R-squared 0.638266    S.D. dependent var 146.9782
S.E. of regression 88.39903    Akaike info criterion 11.81826
Sum squared resid 914283.5     Schwarz criterion 11.86497

Thus we get the empirical model:

lt = 1096.5 + 0.7995 lt-1 + at

Because the t-statistics of both coefficients exceed the 5% critical value 1.96, both coefficients are significant at least on this level. The prob. values behind them indicate that they are significant even on the 1% level.


Another way of estimation is to take advantage of the special ARMA estimation procedure of EViews. Here, first the mean of the variable is estimated and then the AR(1) coefficient of an AR model of the deviations from the mean:

Table 5:
Dependent Variable: l
Method: Least Squares
Sample: 1995M02 2004M12; observations: 119
Convergence achieved after 3 iterations

Variable   Coefficient   Std. Error   t-Statistic   Prob.
CONST      5469.515      40.52025     134.9823      0.0000
AR(1)      0.799532      0.055277     14.46398      0.0000

R-squared 0.641332             Mean dependent var 5461.386
Adjusted R-squared 0.638266    S.D. dependent var 146.9782
S.E. of regression 88.39906    Akaike info criterion(σ) 11.81826
Sum squared resid 914284.2     Schwarz criterion(σ) 11.86497

The equivalent model obtained this way can be written:

(lt - 5469.5) = 0.7995 (lt-1 - 5469.5) + at
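The mean-form estimation of Table 5 can be mimicked, e.g., with the ARIMA class of statsmodels. A sketch, again reusing the series l:

```python
from statsmodels.tsa.arima.model import ARIMA

ar1 = ARIMA(l, order=(1, 0, 0), trend="c").fit()
print(ar1.params)        # "const" is the estimated mean (about 5469.5)
print(ar1.aic, ar1.bic)  # likelihood-based criteria, to be minimised
```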


vii. As an alternative, an ARMA(2,1) model would fit the data. It can be found by trial and error, estimating several different models with the aim of obtaining significant coefficients, and comparing them by the Schwarz criterion.

As the model contains an MA term, ordinary least squares is not practicable for estimating the coefficients. Therefore again the iterative nonlinear least squares procedure of EViews is used:

Table 6:
Dependent Variable: l
Method: Least Squares
Sample: 1995M03 2004M12; observations: 118
Convergence achieved after 7 iterations
Backcast: 1994M12

Variable   Coefficient   Std. Error   t-Statistic   Prob.
CONST      5469.200      37.73310     144.9444      0.0000
AR(2)      0.600156      0.093632     6.409735      0.0000
MA(1)      0.856271      0.060133     14.23960      0.0000

R-squared 0.646928             Mean dependent var 5462.352
Adjusted R-squared 0.640787    S.D. dependent var 147.2253
S.E. of regression 88.23855    Akaike info criterion(σ) 11.82306
Sum squared resid 895394.8     Schwarz criterion(σ) 11.89350

Again it is to be considered that the “constant” is the mean, and the coefficients belong to a model of deviations from it.

Thus the ARMA(2,1) model is to be written:

(lt - 5469.2) = 0.6002 (lt-2 - 5469.2) + 0.8563 at-1 + at

In order to have the model in the explicit form, the mean term is multiplied out:

lt = 5469.2 + 0.6002 (lt-2 - 5469.2) + 0.8563 at-1 + at
   = 5469.2 - 5469.2 × 0.6002 + 0.6002 lt-2 + 0.8563 at-1 + at

and at last

lt = 2186.6 + 0.6002 lt-2 + 0.8563 at-1 + at

Compared by the Akaike and Schwarz criteria, this model shows no improvement over the AR(1) estimated earlier; it has slightly higher values. EViews provides these criteria on the base of the error variance, which is to be minimised (in contrast to likelihood-based versions, such as in Microfit, that are to be maximised).
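A sketch of this alternative specification (only the AR(2) and MA(1) terms), continuing the previous sketch; statsmodels accepts lists of individual lags in the order argument:

```python
arma21 = ARIMA(l, order=([2], 0, [1]), trend="c").fit()
print(arma21.params)
print("AR(1):    ", ar1.aic, ar1.bic)        # from the previous sketch
print("ARMA(2,1):", arma21.aic, arma21.bic)  # expected to be slightly worse
```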


viii. Forecast of the number of employees for 2005:

The somewhat unusual mean-deviation form of both models,

(lt - 5469.5) = 0.7995 (lt-1 - 5469.5) + at

and

(lt - 5469.2) = 0.6002 (lt-2 - 5469.2) + 0.8563 at-1 + at,

gives us the advantage of easily forecasting the process in the long run, because the stochastic limit for time tending to infinity is given by what we called the means, 5469.5 and 5469.2, respectively:

plim (lt - 5469.5) = plim (0.7995 (lt-1 - 5469.5) + at)
                   = 0.7995 plim (lt-1 - 5469.5) + plim at
                   = 0 + 0.

That means plim lt = 5469.5, and we can take this constant as a suitable forecast for the farther future.

In the same way we obtain from the second model the forecast 5469.2, which does not differ very much.

The following graph displays the forecast for the years 2004 and 2005. In 2004 we obtain an irregular curve, because the one-month-ahead forecast for every month can be calculated on the base of the varying values of the previous month. But in 2005 there is a smooth exponential curve, because the forecasts can be calculated only on the base of the previous forecasts instead of the real data.
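A sketch of such a 2005 forecast from the AR(1) fit above; the out-of-sample predictions approach the mean geometrically:

```python
forecast = ar1.get_forecast(steps=12)  # the twelve months of 2005
print(forecast.predicted_mean)         # smooth approach towards about 5469.5
print(forecast.conf_int())             # forecast uncertainty bands
```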


Fig. 4: Forecast of the number of employees, 2004M01–2005M12

ix. Spectral analysis is the name of this method.

Basically, the main peaks of the spectral density occur at the circular frequencies

ω1 = 0.71 and ω2 = 1.86.

These correspond to the frequencies

f1 = 0.114 and f2 = 0.295

and to periods of average length

p1 = 8.8 and p2 = 3.4 months, respectively.

But these periodicities are of no practical importance. They superimpose to the point of being unrecognisable: they are not visible in the original time series or in the correlogram. Taking the standard errors of the spectral estimates into consideration, the peaks and troughs of the spectral density curve do not differ significantly.

The typical shape of the spectral density function for an AR(1) process is similar to the following:


Fig. 5: Estimates of the spectral density of an AR(1) process (Bartlett, Tukey and Parzen windows, plotted against the circular frequency)

In case of an additional seasonal component, the spectral density would show a peak at the frequency f = 1/12, i.e. at the circular frequency ω = 2π/12 = 0.52.
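For illustration, a smoothed periodogram of l can be estimated with SciPy. This sketch uses an ad-hoc segment length, not the Bartlett, Tukey or Parzen windows of the figure:

```python
import numpy as np
from scipy.signal import welch

freqs, psd = welch(l - l.mean(), fs=1.0, nperseg=60)  # freqs in cycles per month
omega = 2 * np.pi * freqs                             # circular frequency
print(omega[np.argmax(psd)])  # an AR(1) with positive coefficient peaks near 0
```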


3 The file dax_j95_a04.txt contains the daily closing data of the main German stock price index DAX from January 1995 to August 2004, i.e. 2498 values.

i. Display the graph of the time series dax.

ii. Determine the order of integration of dax.

iii. Generate the series of the growth rate or rate of return r in the direct way and in the logarithmic way. Display both graphs of r.

iv. Compare the r data. Characterise the general patterns of both graphs of r.

v. Check whether or not r is stationary.

vi. Estimate the correlation function and the partial correlation function up to a lag of 20. Characterise generally the extent of correlation in this series.

vii. Estimate an AR(8) model for r that contains only coefficients significant at least on the five percent level.

viii. Estimate an ARIMA(8,d,8) model for r that contains only coefficients significant at least on the one percent level.

ix. Test the residuals of this model for autoregressive conditional heteroscedasticity by a rough elementary procedure.

x. Make conditional heteroscedasticity visible by estimating a 5-day moving variance of the residuals.

xi. Estimate an ARCH(1) model on the base of the ARIMA(8,d,8) model estimated in viii. You can change the ARMA part in order to keep only coefficients significant on the one percent level.

xii. Estimate a GARCH(1,1) model on the base of the simple model r = const.


Answers:

i. The graph of dax.

Fig. 6: The graph of the German share price index DAX (daily closing values)

This graph indicates a non-stationary process, perhaps a random walk. It is characterized by changing stochastic trends and increasing variance.


ii. The order of integration of dax.

For testing whether or not dax is stationary, first the original data are tested by the Dickey-Fuller test of the levels. The following table shows the estimation of the coefficients of the Dickey-Fuller test regression of ∆daxt on daxt-1:

Table 7:
Dickey-Fuller Test Equation
Dependent Variable: ∆daxt
Method: Least Squares
Sample (adjusted): 2 2498; observations: 2497

Variable   Coefficient   Std. Error   t-Statistic   Prob.
daxt-1     -0.001453     0.000919     -1.580936     0.1140
C          6.970114      4.213823     1.654107      0.0982

R-squared 0.001001             Mean dependent var 0.701418
Adjusted R-squared 0.000600    S.D. dependent var 71.28183
S.E. of regression 71.26043    Akaike info criterion(σ) 11.37136
Sum squared resid 12669731     Schwarz criterion(σ) 11.37602

The t-value of the slope coefficient is -1.580936. As the following table demonstrates, it is not less than the 5% critical t-value for a model with a constant, i.e. -2.862497.

Table 8:
Null Hypothesis: DAX has a unit root
Exogenous: Constant
Lag Length: 0 (Automatic based on SIC, MAXLAG=26)

                                         t-Statistic   Prob.*
Augmented Dickey-Fuller test statistic   -1.580936     0.4922
Test critical values:   1% level         -3.432775
                        5% level         -2.862497

That means that dax is not stationary on the 5% level. The next step is to check the first differences of dax in the same way.


The Dickey-Fuller test equation of the first differences is a regression of the second differences ∆²daxt on the lagged first differences ∆daxt-1.

Table 9:
Augmented Dickey-Fuller Test Equation
Dependent Variable: ∆²daxt
Method: Least Squares
Sample: 3 2498; observations: 2496

Variable   Coefficient   Std. Error   t-Statistic   Prob.
∆daxt-1    -0.996234     0.020026     -49.74669     0.0000
C          0.702902      1.427408     0.492432      0.6225

R-squared 0.498061             Mean dependent var -0.017540
Adjusted R-squared 0.497860    S.D. dependent var 100.6319
S.E. of regression 71.30959    Akaike info criterion(σ) 11.37274
Sum squared resid 12682133     Schwarz criterion(σ) 11.37741

The t-value -49.74669 obviously exceeds every thinkable critical value downwards.

Thus the first differences are stationary, and dax itself is integrated of first order, I(1).

iii. The growth rate or rate of return r can be generated as the ratio

rt = (daxt - daxt-1) / daxt-1,

which is displayed in the following graph:


Fig. 7: Graph of the growth rate of DAX

Another way of calculating the rate of return, mostly used in financial market analysis, is the logarithmic one:

rt = ln daxt - ln daxt-1

The next figure shows the graph of this logarithmically generated rate of return r.

Fig. 8: Graph of the logarithmic return rate of DAX



iv. Both curves are very similar to each other and can hardly be distinguished by the naked eye. An enlarged display of the differences, as shown in the next figure, demonstrates that the ratio always slightly exceeds the logarithmic approximation. In the following analysis the latter will be used.


Fig. 9: Graph of the differences between both ways of calculating return rates
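Both ways of generating r take only a few lines. A sketch, assuming dax_j95_a04.txt holds a single column of closing values:

```python
import numpy as np
import pandas as pd

dax = pd.read_csv("dax_j95_a04.txt", header=None).squeeze("columns")
r_ratio = dax.pct_change()   # direct growth rate (dax_t - dax_{t-1}) / dax_{t-1}
r = np.log(dax).diff()       # logarithmic return, used from here on
print((r_ratio - r).dropna().max())  # the ratio always slightly exceeds the log form
```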

Both graphs of r show certain common general patterns: significant clusters of high variability or volatility, separated by quieter periods. This changing behaviour of the variance is typical for ARCH or GARCH processes.

v. Stationarity test for r.

Particularly for later fitting an ARCH or GARCH model, the time series should be stationary. The following Dickey-Fuller regression of ∆rt on rt-1 produces a t-value of -50.377 that lies below all possible critical values: the return rate r proves stationary.


Table 10:
Dickey-Fuller Test Equation
Dependent Variable: ∆rt
Method: Least Squares
Sample (adjusted): 3 2498; observations: 2496

Variable   Coefficient   Std. Error   t-Statistic   Prob.
rt-1       -1.008850     0.020026     -50.37696     0.0000
C          0.000250      0.000320     0.780708      0.4350

R-squared 0.504356             Mean dependent var -3.68E-06
Adjusted R-squared 0.504157    S.D. dependent var 0.022675
S.E. of regression 0.015967    Akaike info criterion(σ) -5.435790
Sum squared resid 0.635829     Schwarz criterion(σ) -5.431125

vi. Sample correlation function and partial correlation functions of r


Fig. 10: Graph of sample correlation and partial correlation function


Table 11:

τ    AC      PAC      τ    AC      PAC      τ    AC      PAC
1   -0.009  -0.009   13  -0.017  -0.015   25   0.005   0.004
2   -0.005  -0.005   14   0.071   0.076   26  -0.048  -0.048
3   -0.032  -0.032   15   0.027   0.028   27  -0.046  -0.036
4    0.027   0.026   16  -0.024  -0.026   28   0.021   0.014
5   -0.041  -0.040   17  -0.038  -0.030   29   0.056   0.048
6   -0.042  -0.043   18   0.001  -0.001   30   0.025   0.033
7   -0.006  -0.006   19  -0.048  -0.048   31   0.020   0.030
8    0.042   0.038   20   0.023   0.029   32   0.010   0.001
9   -0.005  -0.005   21   0.025   0.031   33  -0.001  -0.001
10  -0.007  -0.007   22   0.005  -0.008   34  -0.014  -0.006
11   0.012   0.011   23  -0.002  -0.004   35  -0.021  -0.022
12   0.014   0.010   24   0.033   0.033   36   0.019   0.017

Let us try a general characterisation of the extent of correlation in this series: the sample correlation and partial correlation functions do not differ very much, and their graphs nearly coincide. There is no significant correlation value at all; these could be the functions of a white noise. Anyway, because of the little peaks at lags 5, 6 and 8, it could be sensible to try to fit an AR(8) model.

vii. Example: an AR(8) model with coefficients significant on the five percent level.

Several trials with AR coefficients up to the order 8 finally resulted in three significant coefficients and a missing constant. The criterion for rejecting other variants was non-significance of coefficients.

Table 12:
Dependent Variable: r
Method: Least Squares
Sample (adjusted): 10 2498; observations: 2489

Variable   Coefficient   Std. Error   t-Statistic   Prob.
rt-5       -0.039842     0.020023     -1.989799     0.0467
rt-6       -0.041799     0.020013     -2.088571     0.0368
rt-8       0.039960      0.020028     1.995155      0.0461

Mean dependent var 0.000244    S.D. dependent var 0.015982
S.E. of regression 0.015951    Akaike info criterion -5.437444
Sum squared resid 0.632487     Schwarz criterion -5.430429


Thus we obtain by OLS regression the AR(8) model

rt = -0.039842 rt-5 - 0.041799 rt-6 + 0.039960 rt-8 + et,

where et is the error term of the estimated process. We get exactly the same results if we consider this AR model as a special case of an ARMA model estimated by iterative nonlinear least squares:

Table 13:
Dependent Variable: r
Method: Least Squares
Sample (adjusted): 10 2498; observations: 2489
Convergence achieved after 3 iterations

Variable   Coefficient   Std. Error   t-Statistic   Prob.
AR(5)      -0.039842     0.020023     -1.989799     0.0467
AR(6)      -0.041799     0.020013     -2.088571     0.0368
AR(8)      0.039960      0.020028     1.995155      0.0461

Mean dependent var 0.000244    S.D. dependent var 0.015982
S.E. of regression 0.015951    Akaike info criterion(σ) -5.437444
Sum squared resid 0.632487     Schwarz criterion(σ) -5.430429

viii. Example: an ARIMA(8,d,8) model for r that contains only coefficients significant at least on the one percent level.

Now we try to improve the model by including moving average terms. In this case we must use the iterative nonlinear least squares method, and the aim now is to obtain coefficients significant on the 1% level. Again, after a series of trials with varying ARMA(8,8) models, we finally got the following results by omitting non-significant coefficients:


Table 14:
Dependent Variable: r
Method: Least Squares
Sample (adjusted): 10 2498; observations: 2489
Convergence achieved after 15 iterations

Variable   Coefficient   Std. Error   t-Statistic   Prob.
AR(3)      -0.339258     0.025338     -13.38956     0.0000
AR(8)      -0.613633     0.025849     -23.73905     0.0000
MA(3)      0.296335      0.027198     10.89544      0.0000
MA(6)      -0.075689     0.013945     -5.427549     0.0000
MA(8)      0.654681      0.020088     32.59034      0.0000

Mean dependent var 0.000244    S.D. dependent var 0.015982
S.E. of regression 0.015874    Akaike info criterion(σ) -5.446226
Sum squared resid 0.625950     Schwarz criterion(σ) -5.434536

The model resulting from this estimation is the following:

rt = - 0.339 rt-3 - 0.614 rt-8 + et + 0.296 et-3 - 0.076 et-6 + 0.655 et-8

where et is the error term of the estimated ARMA process.

Now the goodness of fit of both models should be compared. Besides considering the significance level of the coefficients in both models, a powerful means of comparison are the Akaike and Schwarz criteria, which are to be minimised.

Here both criteria are very close to each other. But the slightly smaller (negative!) values of both criteria in the second case give a certain preference to the ARMA model.

Because we know from v. that r is stationary, i.e. integrated of order 0, this model is at the same time an ARIMA(8,0,8) for r.
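The corresponding subset ARMA estimation (AR lags 3 and 8, MA lags 3, 6 and 8), continuing the sketch above:

```python
arma88 = ARIMA(r.dropna(), order=([3, 8], 0, [3, 6, 8]), trend="n").fit()
print(arma88.aic, arma88.bic)  # compare with the AR(8) fit above
```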


ix. Rough and preliminary test of the residuals for autoregressive conditional heteroscedasticity by OLS regression.

A rough check for first order autoregressive conditional heteroscedasticity (ARCH(1)) is the estimation of a regression of the squared residuals et² on et-1²:

Table 15:
Dependent Variable: et²
Method: Least Squares
Sample (adjusted): 11 2498; observations: 2488

Variable   Coefficient   Std. Error   t-Statistic   Prob.
C          0.000208      1.18E-05     17.59238      0.0000
et-1²      0.175073      0.019746     8.866325      0.0000

R-squared 0.030652             Mean dependent var 0.000252
Adjusted R-squared 0.030263    S.D. dependent var 0.000543
S.E. of regression 0.000535    Akaike info criterion(σ) -12.22961
Sum squared resid 0.000710     Schwarz criterion(σ) -12.22493

For testing this dependence we cannot simply use the usual t-test, because the estimated t-values do not follow an exact Student distribution. Anyway, because the t-values here immensely exceed the 5-percent and 1-percent critical values, we can assume with great practical confidence that there exists a highly significant relationship between et² and et-1², i.e. strong conditional heteroscedasticity.
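The rough check itself is a one-lag OLS regression of the squared residuals. A sketch, reusing the ARMA residuals from above:

```python
import pandas as pd
import statsmodels.api as sm

e2 = pd.Series(arma88.resid ** 2).reset_index(drop=True)
X = sm.add_constant(e2.shift(1)).iloc[1:]  # lagged squared residuals
ols = sm.OLS(e2.iloc[1:], X).fit()
print(ols.tvalues)  # a large t-value on the lag hints at ARCH effects
```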

x. Visualisation of conditional heteroscedasticity by a moving 5-day residual variance.

The existence of conditional heteroscedasticity can be visualised by smoothing the series of squared residuals, i.e. by moving averages of the et², or moving variances, as sketched below.
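Such a moving variance is a rolling-window computation, e.g. (pandas assumed; plotting needs matplotlib):

```python
import pandas as pd

moving_var = pd.Series(arma88.resid).rolling(window=5).var()
moving_var.plot(title="5-day moving residual variance")  # akin to Fig. 11
```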


Fig. 11: 5-day moving residual variance

In the figure you can see typical clusters of higher variance, i.e. volatility, alternating with intervals of lower variance. The similarity of neighbouring variances is one more indicator of conditional heteroscedasticity: on the base of knowing the variance at time t (i.e. conditionally), you can forecast the variance at time t+1.

xi. ARCH(1) model on the base of the AR(8) model.

Because of the latter analytical results, it would be worthwhile to estimate an ARCH(1) model. There the conditional variance of the error et is

ht² = var(et | et-1) = E(et² | et-1) = λ0 + λ1 et-1²,

where in practice ht² is estimated by et².

In the software used, this is a special case of the more general GARCH model; an ARCH(1) corresponds there to a GARCH(0,1) model. We assume here that the time series follows an AR(8) process as estimated earlier without ARCH:


Table 16:
Dependent Variable: r
Method: ML – ARCH
Sample (adjusted): 10 2498; observations: 2489
GARCH = C(4) + C(5)*RESID(-1)^2

Variable      Coefficient   Std. Error   z-Statistic   Prob.
AR(5)         -0.045702     0.013304     -3.435162     0.0006
AR(6)         -0.046538     0.014090     -3.303005     0.0010
AR(8)         0.044716      0.013812     3.237367      0.0012

Variance Equation
C             0.000194      4.67E-06     41.55146      0.0000
RESID(-1)^2   0.243443      0.026156     9.307385      0.0000

R-squared 0.004705             Mean dependent var 0.000244
Adjusted R-squared 0.003102    S.D. dependent var 0.015982
S.E. of regression 0.015958    Akaike info criterion(σ) -5.493524
Sum squared resid 0.632539     Schwarz criterion(σ) -5.481833

The coefficients differ from the earlier estimated ones because here they are estimated simultaneously with the ARCH term. The newly estimated model is:

rt = -0.0457 rt-5 - 0.0465 rt-6 + 0.0447 rt-8 + et,

with the error process

et² = 0.000194 + 0.2434 et-1² + at,

where at should be a pure random series. The AR representation of the error process gives the opportunity to forecast the volatility.
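Outside EViews, such a model can be fitted, e.g., with the Python arch package (an assumption; the returns are scaled by 100, a common practice with this package):

```python
from arch import arch_model

am = arch_model(100 * r.dropna(), mean="AR", lags=[5, 6, 8], vol="ARCH", p=1)
print(am.fit(disp="off").summary())  # AR mean terms plus the ARCH(1) variance
```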


xii. GARCH(1,1) model on the base of the simple model r = const.

Here the return rate should be modelled as a real GARCH process. The generalized autoregressive conditional heteroscedasticity model (GARCH(p,q)) describes a process where the conditional error variance on all information Ωt-1 available at time t,

ht² = var(ut | Ωt-1),

is assumed to obey an ARMA(p,q)-type model:

ht² = α0 + α1 ht-1² + … + αp ht-p² + β1 ut-1² + β2 ut-2² + … + βq ut-q²

with ut being the error process.

Table 17:
Dependent Variable: r
Method: ML – ARCH (Marquardt) – Normal distribution
Sample (adjusted): 10 2498; observations: 2489
Convergence achieved after 12 iterations
GARCH = C(5) + C(6)*RESID(-1)^2 + C(7)*GARCH(-1)

Variable      Coefficient   Std. Error   z-Statistic   Prob.
C             0.000710      0.000246     2.882234      0.0039
rt-5          -0.029571     0.020063     -1.473904     0.1405
rt-6          -0.036495     0.020360     -1.792478     0.0731
rt-8          0.012575      0.020570     0.611309      0.5410

Variance Equation
C             2.01E-06      4.03E-07     4.980196      0.0000
RESID(-1)^2   0.082160      0.009593     8.564982      0.0000
GARCH(-1)     0.911339      0.009476     96.17172      0.0000

R-squared 0.003340             Mean dependent var 0.000244
Adjusted R-squared 0.000930    S.D. dependent var 0.015982
S.E. of regression 0.015975    Akaike info criterion(σ) -5.752055
Sum squared resid 0.633406     Schwarz criterion(σ) -5.735688

While in the original AR(8) model the constant could be omitted, here the constant is the only significant term in the regression part of the model. Therefore all lagged r terms can now be omitted:


Table 18:
Dependent Variable: r
Method: ML – ARCH
Sample (adjusted): 2 2498; observations: 2497
Convergence achieved after 12 iterations
GARCH = C(2) + C(3)*RESID(-1)^2 + C(4)*GARCH(-1)

Variable      Coefficient   Std. Error   z-Statistic   Prob.
C             0.000662      0.000240     2.753134      0.0059

Variance Equation
C             2.08E-06      4.06E-07     5.116231      0.0000
RESID(-1)^2   0.083192      0.009602     8.664398      0.0000
GARCH(-1)     0.910103      0.009445     96.35969      0.0000

R-squared -0.000681            Mean dependent var 0.000245
Adjusted R-squared -0.001886   S.D. dependent var 0.015961
S.E. of regression 0.015977    Akaike info criterion(σ) -5.756824
Sum squared resid 0.636337     Schwarz criterion(σ) -5.747496

The model proves very simple: the DAX return equals the constant 0.00066 (i.e. 0.06% per day on average), with a sort of ARMA(1,1) conditional variance that describes the development of volatility or risk:

ht² = 2.08·10⁻⁶ + 0.9101 ht-1² + 0.08319 et-1²,

with ht² being the conditional variance of rt on the base of the information by time t, and et being the deviation of r from its mean in this model.

In the sense of the Akaike and Schwarz criteria, this model fits better than all those considered before: the values of these criteria are the lowest, although the model is extremely simple. The model shows that the conditional variance, as a measure of volatility and investment risk, is highly determined by the variance of the last day, i.e. rather by the more theoretical conditional variance ht-1² than by the directly measurable deviation et-1.
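A corresponding sketch of the final GARCH(1,1) model with a constant mean, again with the arch package as an assumption:

```python
from arch import arch_model

garch = arch_model(100 * r.dropna(), mean="Constant", vol="GARCH", p=1, q=1)
print(garch.fit(disp="off").summary())  # expect alpha[1] near 0.08, beta[1] near 0.91
```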