
  • Lecture on Parameter Estimation for Stochastic Differential Equations

    Erik Lindström

    FMS161/MASM18 Financial Statistics

    Erik Lindström Lecture on Parameter Estimation for Stochastic Differential Equations

  • Recap

    - We are interested in the parameters θ in the stochastic integral equation

      X(t) = X(0) + ∫₀ᵗ µ_θ(s, X(s)) ds + ∫₀ᵗ σ_θ(s, X(s)) dW(s)   (1)

    Why?
    - Model validation
    - Risk management
    - Advanced hedging (Greeks 9.2.2 and quadratic hedging 9.2.2.1 (P/Q))


  • Some asymptotics

    Consider the arithmetic Brownian motion

      dX(t) = µ dt + σ dW(t)   (2)

    The drift is estimated by computing the mean and compensating for the
    sampling interval δ = t_{n+1} − t_n:

      µ̂ = (1/(δN)) ∑_{n=0}^{N−1} (X(t_{n+1}) − X(t_n)).   (3)

    Expanding this expression reveals that the MLE is given by

      µ̂ = (X(t_N) − X(t_0)) / (t_N − t_0) = µ + σ (W(t_N) − W(t_0)) / (t_N − t_0).   (4)

    The MLE for the diffusion (σ) parameter is given by

      σ̂² = (1/(δ(N−1))) ∑_{n=0}^{N−1} (X(t_{n+1}) − X(t_n) − µ̂δ)²  →ᵈ  σ² χ²(N−1)/(N−1)   (5)
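The estimators (3)–(5) are easy to check numerically. A minimal sketch in Python (NumPy), assuming a simulated arithmetic Brownian motion with hypothetical parameters µ = 0.5 and σ = 2 (these values are illustrative, not from the lecture):

```python
import numpy as np

rng = np.random.default_rng(0)

# True parameters of the arithmetic Brownian motion dX = mu dt + sigma dW
mu, sigma = 0.5, 2.0
delta, N = 0.01, 10_000          # sampling interval and number of increments

# Simulate one discretely observed path: increments are N(mu*delta, sigma^2*delta)
dX = mu * delta + sigma * np.sqrt(delta) * rng.standard_normal(N)
X = np.concatenate(([0.0], np.cumsum(dX)))

# Drift MLE, eqs. (3)-(4): the sum telescopes to (X(t_N) - X(t_0)) / (t_N - t_0)
mu_hat = (X[-1] - X[0]) / (delta * N)

# Diffusion MLE, eq. (5)
sigma2_hat = np.sum((np.diff(X) - mu_hat * delta) ** 2) / (delta * (N - 1))

print(mu_hat, sigma2_hat)
```

Note that σ̂² concentrates tightly around σ² as N grows, while the spread of µ̂ depends only on the total observation horizon t_N − t_0, not on N — which is exactly what eq. (4) says.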


  • A simple method

    Many data sets are sampled at high frequency, making the bias due to
    discretization of the SDE by some of the schemes in Chapter 12
    acceptable. The simplest discretization, the explicit Euler method,
    would for the stochastic differential equation

      dX(t) = µ(t, X(t)) dt + σ(t, X(t)) dW(t)   (6)

    correspond to the Discretized Maximum Likelihood (DML) estimator given by

      θ̂_DML = argmax_{θ∈Θ} ∑_{n=1}^{N−1} log φ(X(t_{n+1}), X(t_n) + µ(t_n, X(t_n))∆, Σ(t_n, X(t_n))∆)   (7)

    where φ(x, m, P) is the density of a multivariate normal distribution
    with argument x, mean m and covariance P, and

      Σ(t, X(t)) = σ(t, X(t)) σ(t, X(t))ᵀ.   (8)
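As a sketch of how eq. (7) can be implemented, the following Python code fits a hypothetical Ornstein–Uhlenbeck model dX = −aX dt + s dW by maximizing the Euler pseudo-likelihood. The model, parameter values, and optimizer choice are illustrative assumptions, not taken from the lecture:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)

# Simulate a hypothetical Ornstein-Uhlenbeck path dX = -a*X dt + s dW
a_true, s_true, dt, N = 1.0, 0.5, 0.01, 20_000
X = np.empty(N + 1)
X[0] = 0.0
for n in range(N):
    X[n + 1] = X[n] - a_true * X[n] * dt + s_true * np.sqrt(dt) * rng.standard_normal()

def neg_dml_loglik(theta):
    """Negative Euler (DML) pseudo log-likelihood, cf. eq. (7), theta = (a, log s)."""
    a, s = theta[0], np.exp(theta[1])      # log-parametrization keeps s > 0
    m = X[:-1] - a * X[:-1] * dt           # Euler mean: X(t_n) + mu(t_n, X(t_n)) * dt
    v = s ** 2 * dt                        # Euler variance: Sigma(t_n, X(t_n)) * dt
    resid = X[1:] - m
    return 0.5 * np.sum(np.log(2 * np.pi * v) + resid ** 2 / v)

res = minimize(neg_dml_loglik, x0=np.array([0.5, 0.0]), method="Nelder-Mead")
a_hat, s_hat = res.x[0], np.exp(res.x[1])
print(a_hat, s_hat)
```

With δ this small the Euler transition density is close to the true Gaussian transition density of the OU process, so the DML estimate is nearly unbiased here; for coarser sampling the discretization bias discussed on the next slide becomes visible.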


  • Consistency

    - The DMLE is generally NOT consistent.
    - Approximate ML estimators (13.5) are, provided enough computational
      resources are allocated:
      - Simulation based estimators
      - Fokker-Planck based estimators
      - Series expansions.
    - GMM-type estimators (13.6) are consistent if the moments are
      correctly specified (which is a non-trivial problem!)


  • Simulation based estimators

    - Discretely observed SDEs are Markov processes.
    - Then it follows that

      p_θ(x_t | x_s) = E_θ[ p_θ(x_t | x_τ) | F(s) ],   t > τ > s   (9)

      This is the Pedersen algorithm.
    - Improved by Durham-Gallant (2002) and Lindström (2012)
    - Works very well for multivariate models!
    - and is easily (...) extended to Lévy driven SDEs.
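A minimal sketch of the Pedersen idea in eq. (9), assuming scalar dynamics and an Euler scheme for the sub-steps: simulate paths from x_s up to the intermediate time τ, then average the closed-form Euler Gaussian density of the final sub-step. The function name, sub-step count K, and sample size M are illustrative choices; the estimate is checked against the known transition density of an arithmetic Brownian motion:

```python
import numpy as np

rng = np.random.default_rng(2)

def pedersen_density(x_s, x_t, s, t, mu, sigma, K=8, M=2000):
    """Monte Carlo estimate of p(x_t | x_s), cf. eq. (9): simulate K-1 Euler
    sub-steps from x_s to time tau, then average the one-step Euler Gaussian
    transition density from x_tau to x_t over the M simulated paths."""
    dt = (t - s) / K
    x = np.full(M, x_s, dtype=float)
    tt = s
    for _ in range(K - 1):                       # paths up to tau = t - dt
        x += mu(tt, x) * dt + sigma(tt, x) * np.sqrt(dt) * rng.standard_normal(M)
        tt += dt
    m = x + mu(tt, x) * dt                       # Euler mean of the last sub-step
    v = sigma(tt, x) ** 2 * dt                   # Euler variance of the last sub-step
    dens = np.exp(-(x_t - m) ** 2 / (2 * v)) / np.sqrt(2 * np.pi * v)
    return dens.mean()                           # estimates E_theta[p(x_t | x_tau) | F(s)]

# Check against the exact density of arithmetic Brownian motion dX = 0.1 dt + 0.3 dW,
# whose transition over [0, 1] is N(x_s + 0.1, 0.09)
mu_f = lambda t, x: 0.1 + 0.0 * x
sig_f = lambda t, x: 0.3 + 0.0 * x
p_hat = pedersen_density(0.0, 0.05, 0.0, 1.0, mu_f, sig_f)
p_true = np.exp(-(0.05 - 0.1) ** 2 / (2 * 0.09)) / np.sqrt(2 * np.pi * 0.09)
print(p_hat, p_true)
```

In a likelihood this estimate is computed for every observation pair (x_{t_n}, x_{t_{n+1}}); the Durham–Gallant refinement replaces the unconditional paths above with importance-sampled bridges that are pulled toward x_t, which reduces the Monte Carlo variance dramatically.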
