Stochastic Volatility Models: Bayesian Framework

Transcript of Stochastic Volatility Models: Bayesian Framework

  • Stochastic Volatility Models: Bayesian Framework
    Haolan Cai

  • Introduction
    Idea: model returns through their (latent) volatility.

    Important: the model must capture the persistence of the volatilities (i.e., volatility clustering) along with other characteristics.

    Use: a class of Hidden Markov Models (HMM) known as Stochastic Volatility Models (SV models)

  • Basic Model
    Here θ (which includes the AR(1) innovation scale v) is the parameter space for the autoregressive process of order 1 (i.e., linear) driving the log-volatility, and φ is the persistence of the model.
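A common way to write this kind of model (the exact equations on the original slide are not reproduced here; this follows the standard parameterization of Kim, Shephard and Chib, 1998, with returns r_t and latent log-volatilities x_t) is:

```latex
r_t = \exp(x_t / 2)\,\epsilon_t, \qquad \epsilon_t \sim N(0, 1)
x_t = \mu + \phi\,(x_{t-1} - \mu) + v\,\eta_t, \qquad \eta_t \sim N(0, 1)
```

so that θ = (μ, φ, v) collects the level, persistence, and innovation scale of the AR(1) log-volatility process.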

  • Transformation of Model
    The previous model is non-linear, which creates complications. Taking the log of the squared returns gives a nice linear form in which the error term is the log of a squared standard normal, i.e. it follows a log chi-squared distribution (see the sketch below).
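Concretely, the usual log-squared-returns linearization consistent with the model above is:

```latex
y_t = \log r_t^2 = x_t + \xi_t, \qquad \xi_t = \log \epsilon_t^2
```

where ξ_t follows a log chi-squared distribution with one degree of freedom (mean ≈ −1.27), which is exactly the distribution the next slide approximates.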

  • The Problem Child
    The log chi-squared error term does not have a closed form from which it is easy to sample. However, it can be accurately approximated with a discrete mixture of normals; in this case the optimal number of components J is equal to 7 (Kim, Shephard and Chib, 1998).
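Schematically, the Kim, Shephard and Chib (1998) approximation replaces the log chi-squared density with a fixed mixture (their paper tabulates the component weights, means, and variances, which are not reproduced here):

```latex
p(\xi_t) \approx \sum_{j=1}^{J} q_j \, N(\xi_t \mid m_j, v_j^2), \qquad J = 7
```

Conditional on the component indicator s_t = j, the observation equation is Gaussian, which is what makes the Kalman-filter machinery below applicable.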

  • Bayesian Framework
    Now all the parameters have nice distributions from which they can be sampled using a Gibbs sampling algorithm. Semi-informative priors are used, with hyperparameters loosely developed from the data; this imposes some, but little, structure on the sampling. The algorithm was run for 500 iterations with a burn-in period of 50.
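One plausible reading of the sampler described here is a three-block Gibbs cycle (the exact blocking used in the original work is not shown in the transcript):

```latex
s_{1:n} \sim p(s_{1:n} \mid y_{1:n}, x_{1:n}, \theta) \quad \text{(mixture indicators)}
x_{1:n} \sim p(x_{1:n} \mid y_{1:n}, s_{1:n}, \theta) \quad \text{(latent log-volatilities, via FFBS)}
\theta \sim p(\theta \mid x_{1:n}) \quad \text{(AR(1) parameters, given their priors)}
```

Repeating this cycle 500 times and discarding the first 50 draws gives the posterior sample described above.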

  • The Problem Child (again)
    In order to sample the latent log-volatilities, we draw from the mixture of normals. This is done with a Forward Filtering, Backward Sampling (FFBS) algorithm: a Kalman filter is applied from t = 0 to t = n, and then the states (x_n, x_{n-1}, ..., x_0) are simulated in backwards order.

    The reason for this more complicated sampling scheme is the high AR dependence of this type of data: φ is close to 1, so drawing the whole state path jointly (as in the sketch below) mixes much better than updating one state at a time.
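A minimal sketch of the FFBS step, assuming the linearized model above with the per-observation mixture means and variances already selected (function and argument names are illustrative, not from the slides):

```python
import numpy as np

def ffbs(y, obs_mean, obs_var, mu, phi, v, rng):
    """Forward Filtering, Backward Sampling for the linearized SV model.

    State:       x_t = mu + phi * (x_{t-1} - mu) + v * eta_t,  eta_t ~ N(0, 1)
    Observation: y_t = x_t + obs_mean[t] + e_t,  e_t ~ N(0, obs_var[t])

    obs_mean / obs_var are the mean and variance of the mixture component
    currently assigned to each observation (illustrative names).
    """
    n = len(y)
    m = np.empty(n)   # filtered state means
    C = np.empty(n)   # filtered state variances

    # Forward pass: standard Kalman filter recursions.
    for t in range(n):
        if t == 0:
            a, P = mu, v**2 / (1.0 - phi**2)        # stationary prior for x_0
        else:
            a = mu + phi * (m[t - 1] - mu)           # predicted state mean
            P = phi**2 * C[t - 1] + v**2             # predicted state variance
        Q = P + obs_var[t]                           # predictive variance of y_t
        K = P / Q                                    # Kalman gain
        m[t] = a + K * (y[t] - (a + obs_mean[t]))    # filtered mean
        C[t] = (1.0 - K) * P                         # filtered variance

    # Backward pass: sample x_n, then x_{n-1}, ..., x_0 conditionally.
    x = np.empty(n)
    x[-1] = rng.normal(m[-1], np.sqrt(C[-1]))
    for t in range(n - 2, -1, -1):
        B = phi * C[t] / (phi**2 * C[t] + v**2)      # backward gain
        h = m[t] + B * (x[t + 1] - (mu + phi * (m[t] - mu)))
        H = C[t] - B * phi * C[t]
        x[t] = rng.normal(h, np.sqrt(H))
    return x
```

The forward pass stores only the filtered means and variances, so the backward pass can draw the entire path jointly rather than one state at a time, which is what the high persistence calls for.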

  • Initial Conditions
    For the mixture of normals, 7 normals are chosen to fit the log chi-squared distribution. For the other parameters, initial values were chosen to cover the parameter space sufficiently, so as to be semi-informative but not restrictive. For example, one prior is parameterized by g and G, where g is the mean and G the standard deviation; here they are chosen to be 0 and 9, respectively.
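Read literally, g and G describe a wide normal prior (which model parameter it attaches to is not identified in the transcript), i.e.

```latex
\theta_i \sim N(g, G^2), \qquad g = 0, \; G = 9
```

a normal prior with variance 81: proper, but weak enough to be only semi-informative.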

  • Data
    1-minute prices from General Electric and Intel Corporation.
    GE: April 9, 2007, 9:35 am to Jan 24, 2008, 3:59 pm.
    Daily returns were used for the SV model.
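As an illustration of this preparation step, one way to go from 1-minute prices to daily returns (the file and column names below are hypothetical placeholders, not from the original slides):

```python
import numpy as np
import pandas as pd

# Hypothetical layout: a CSV of 1-minute prices with a timestamp column
# and a single "price" column.
prices = pd.read_csv("ge_1min.csv", parse_dates=["timestamp"], index_col="timestamp")

daily_close = prices["price"].resample("1D").last().dropna()  # last trade each day
returns = np.log(daily_close).diff().dropna()                 # daily log returns r_t
```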

  • Checking Autocorrelation Structure
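The original plots are not reproduced here; a typical way to produce this kind of check is to compare the autocorrelation of the returns with that of the squared returns (volatility clustering shows up in the latter), e.g.:

```python
import matplotlib.pyplot as plt
from statsmodels.graphics.tsaplots import plot_acf

# "returns" is the daily-return series from the previous sketch.
fig, axes = plt.subplots(2, 1, figsize=(8, 6))
plot_acf(returns, ax=axes[0], lags=40, title="ACF of returns")
plot_acf(returns ** 2, ax=axes[1], lags=40, title="ACF of squared returns")
plt.tight_layout()
plt.show()
```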

  • Results: φ is steady around .956

  • Results: a second model parameter is estimated at .0037

  • Results: a third model parameter is estimated at .4150

  • Results:

  • Further Analysis
    Try building in an autoregressive process of higher order.

    Allow J, the number of normals used to fit the error term, to vary.

    What kind of predictive value does this model produce for stock returns?

    Does using higher-frequency data improve predictive performance and/or model fit?