Remarks on American Options for Jump Diffusions

Hao Xing
University of Michigan
joint work with Erhan Bayraktar, University of Michigan

Department of Systems Engineering & Engineering Management, Chinese University of Hong Kong, Feb. 5, 2009
Why jump models?
Implied volatilities are backed out from market prices via

P_market(K, T) = P_BS(S_0, K, T, σ_imp).

Figure: Implied volatility (%) of S&P 500 index put options against moneyness (strike / spot index), on Jan. 16, 09 and Dec. 30, 05.
Crash Fear!
D. S. Bates [00] studied the index option data for the 87 crash:
After the crash, "out of the money (OTM) put options that provide explicit portfolio insurance against substantial downward movements in the market have been trading at high prices (as measured by implicit volatilities) . . . even more overpriced relative to OTM calls that will pay off only if the market rises substantially." —— Crash Fear
Compared to models with stochastic volatility, models with jumps can better explain this phenomenon.
Jump diffusions
Assume the stock price S follows a jump diffusion (under a risk-neutral measure calibrated from the market)

dS_t = b S_{t−} dt + σ(S_{t−}, t) S_{t−} dW_t + S_{t−} d( ∑_{i=1}^{N_t} (e^{Y_i} − 1) ),

in which
◮ N_t is a Poisson process with rate λ independent of W,
◮ Y_i are i.i.d. random variables with distribution ν(dy),
◮ b ≜ r − λ(ξ − 1); we assume ξ ≜ E[e^{Y_i}] < +∞ so that {e^{−rt} S_t}_{t≥0} is a martingale,
◮ we assume σ(S, t) ≥ ε > 0 for some positive constant ε.
Examples:
◮ Merton’s model: ν(dy) is the dist. of normal r.v.
◮ Kou’s model: ν(dy) is the dist. of double exponential r.v.
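As an aside (not part of the talk), the dynamics above can be simulated directly. A minimal Monte Carlo sketch with Merton-type normal jumps follows; the parameter values and the function name are illustrative assumptions, and the drift uses the compensation b = r − λ(ξ − 1) from the previous slide:

```python
import numpy as np

def simulate_jump_diffusion(S0=100.0, r=0.05, sigma=0.2, lam=3.0,
                            mu_j=-0.1, sig_j=0.3, T=0.25,
                            n_steps=1000, seed=0):
    """Euler scheme for dS = b S dt + sigma S dW + S d(sum_i (e^{Y_i} - 1)),
    with Merton jumps Y_i ~ N(mu_j, sig_j^2) and compensated drift
    b = r - lam*(xi - 1), xi = E[e^{Y_i}]."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    xi = np.exp(mu_j + 0.5 * sig_j ** 2)   # E[e^Y] for normal Y
    b = r - lam * (xi - 1.0)
    S = np.empty(n_steps + 1)
    S[0] = S0
    for k in range(n_steps):
        dW = rng.normal(0.0, np.sqrt(dt))
        n_jumps = rng.poisson(lam * dt)
        # multiplicative jump factor over this step (1.0 if no jump occurs)
        jump = np.exp(rng.normal(mu_j, sig_j, n_jumps)).prod()
        S[k + 1] = S[k] * (1.0 + b * dt + sigma * dW) * jump
    return S
```

For Kou's model one would instead draw the Y_i from the double-exponential distribution; only the sampling line changes.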
The American put option

The value function of the American put option is given by

V(S, t) ≜ sup_{τ∈T_{t,T}} E[ e^{−r(τ−t)} g(S_τ) | S_t = S ],

where g(S) = (K − S)^+.
The domain R_+ × [0, T) can be divided into the continuation region and the stopping region:

C ≜ {(S, t) ∈ R_+ × [0, T) : V(S, t) > g(S)} and
D ≜ {(S, t) ∈ R_+ × [0, T) : V(S, t) = g(S)}.
There exists an optimal exercise boundary s(t) such that C = {S > s(t)} and D = {S ≤ s(t)}.
One of the optimal exercise times, τ_D, is the hitting time of the stopping region D:

e^{−rt} V(S_t, t) is a super-martingale,
e^{−r(t∧τ_D)} V(S_{t∧τ_D}, t ∧ τ_D) is a martingale.
Parabolic integro-differential equations (PIDE)

Proposition ([Pham 95], [Yang et al. 06], [Bayraktar 08])
The value function V(S, t) is the unique classical solution of

(∂_t + L_D − (r + λ)) V + λ ∫_R V(S e^y, t) ν(dy) = 0,  S > s(t),
V(S, t) = K − S, S ≤ s(t),  V(S, T) = (K − S)^+,

in which L_D ≜ (1/2) σ² S² ∂²_{SS} + b S ∂_S. Moreover,

∂_S V(s(t), t) = −1, and
(∂_t + L_D − (r + λ)) V + λ ∫_R V(S e^y, t) ν(dy) ≤ 0.
Remark
Because of the nonlocal integral term, the classical finite difference method cannot be applied directly to solve the PIDE numerically.
Numerical algorithms
Various numerical algorithms were studied in the literature:
◮ [Zhang 97]
◮ [Andersen & Andreasen 00]
◮ [Kou & Wang 04]
◮ [Hirsa & Madan 04]
◮ [Cont & Voltchkova 05]
◮ [d’ Halluin et al. 05]
· · ·
We propose yet another numerical algorithm, which
1. is a slight modification of classical finite difference schemes,
2. shows the connection between jump diffusion problems and diffusion problems.
The functional operator J

Instead of the jump diffusion, we consider the GBM:

dS⁰_t = b S⁰_t dt + σ S⁰_t dW_t.
We introduce a functional operator J: for any f : R_+ × [0, T] → R_+,

J f(S, t) ≜ sup_{τ∈T_{t,T}} E[ ∫_t^τ e^{−(r+λ)(u−t)} λ · P f(S⁰_u, u) du + e^{−(r+λ)(τ−t)} (K − S⁰_τ)^+ | S⁰_t = S ],

in which

P f(S, u) ≜ ∫_R f(e^y S, u) ν(dy).
Remark
J f is the value function of an optimal stopping problem for the diffusion S⁰.
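As an illustration, the nonlocal part P f(S, u) = ∫_R f(e^y S, u) ν(dy) can be evaluated by simple quadrature; a sketch for Kou's double-exponential density follows (the parameter values η1 = η2 = 25, p = 0.6 are taken from the numerical experiments later in the talk, the truncation of the integration range is an assumption justified by the fast exponential decay):

```python
import numpy as np

def kou_density(y, p=0.6, eta1=25.0, eta2=25.0):
    """Kou's double-exponential jump density:
    p*eta1*e^{-eta1*y} for y >= 0, (1-p)*eta2*e^{eta2*y} for y < 0."""
    return np.where(y >= 0.0,
                    p * eta1 * np.exp(-eta1 * np.abs(y)),
                    (1.0 - p) * eta2 * np.exp(-eta2 * np.abs(y)))

def P(f, S, u, y_min=-2.0, y_max=2.0, n=40001):
    """P f(S, u) = int_R f(e^y S, u) nu(dy), by a simple Riemann sum;
    the tails beyond [y_min, y_max] are negligible for eta1 = eta2 = 25."""
    y = np.linspace(y_min, y_max, n)
    dy = y[1] - y[0]
    return float(np.sum(f(np.exp(y) * S, u) * kou_density(y)) * dy)
```

A sanity check: applying P to the constant function 1 should return the total mass ∫ ν(dy) = 1.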
The approximating sequence

Let us iteratively define

v_0(S, t) = (K − S)^+,
v_{n+1}(S, t) = J v_n(S, t), n ≥ 0, for (S, t) ∈ R_+ × [0, T].
Theorem
For each n ≥ 1, v_n is the unique classical solution of the following free boundary PDE:

(∂_t + L_D − (r + λ)) v_n(S, t) = −λ · (P v_{n−1})(S, t),  S > s_n(t),
v_n(S, t) = K − S, S ≤ s_n(t),  v_n(S, t) > (K − S)^+, S > s_n(t),
∂_S v_n(s_n(t), t) = −1,  v_n(S, T) = (K − S)^+.
Remark

v_n(S, t) = sup_{τ∈T_{t,T}} E[ e^{−r(τ∧σ_n−t)} (K − S_{τ∧σ_n})^+ | S_t = S ],

where σ_n is the n-th jump time of the Poisson process N_t.
The convergence of the sequence
Proposition
{v_n}_{n≥0} is a monotone increasing sequence; moreover, v_n ≤ K.
Proof.
◮ J v_0 = v_1 ≥ (K − S)^+ = v_0,
◮ J maps positive functions to positive functions, and J is monotone: f ≤ g implies J f ≤ J g.
The statement follows by induction.
There exists a limit v_∞(S, t) ≜ lim_{n→+∞} v_n(S, t).
Moreover, v_∞ is a fixed point of J ⟹ v_∞ is the unique classical solution of the PIDE.
A verification argument ⟹ v_∞ = V.
Proposition
v_n(S, t) ≤ V(S, t) ≤ v_n(S, t) + K ( 1 − e^{−(r+λ)(T−t)} )^n ( λ/(λ + r) )^n.
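This geometric bound tells us in advance how many iterations are needed for a given accuracy. A small sketch (the function name is ours; the parameters r, λ, T are taken from the Kou table later in the talk, and a penny tolerance matches the stated accuracy there):

```python
import math

def iterations_for_tolerance(K=100.0, r=0.05, lam=3.0, T=0.25, tol=0.01):
    """Smallest n such that the a priori gap at t = 0,
    K * ((1 - e^{-(r+lam)T}) * lam/(lam+r))^n, is below tol."""
    q = (1.0 - math.exp(-(r + lam) * T)) * lam / (lam + r)  # contraction factor
    n = math.ceil(math.log(tol / K) / math.log(q))
    return n, K * q ** n
```

With these parameters q ≈ 0.52, so roughly 15 iterations already guarantee penny accuracy; in practice far fewer suffice, as the plots later show the v_n visually converged by n = 4.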
Numerical scheme

Defining x = log S and u_n(x, t) = v_n(S, t) for n ≥ 1, we solve the PDE satisfied by u_n inductively.

◮ We discretize the equation by the Crank–Nicolson scheme. For fixed ∆t and ∆x such that M∆t = T and L∆x = x_max − x_min, we denote by u_n^{ℓ,m} the solution of the difference equation.
◮ P u_{n−1} = ∫_R u_{n−1}(x + y, t) ρ(y) dy is a convolution integral. We evaluate it by FFT: P u_{n−1} = ( û_{n−1} · ĝ )∨.
◮ We use the Brennan–Schwartz algorithm (LU decomposition) or the PSOR method to solve the free boundary problem.
Complexity: let the number of grid points in x and t be O(N),
◮ iterating Brennan–Schwartz: O(N² log N),
◮ iterating PSOR: O(N^{5/2}).
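The FFT evaluation of the convolution term can be sketched as follows. This is a minimal sketch under an assumed grid convention: the density ρ is sampled at y_m = m·∆x with negative y's wrapped into the upper half of the array (the `np.fft.ifftshift` convention), and wrap-around error is assumed negligible because u and ρ decay near the grid ends:

```python
import numpy as np

def P_fft(u, rho_wrapped, dx):
    """One FFT evaluation of (P u)[l] = sum_m u[l+m] * rho_wrapped[m] * dx,
    the discrete version of int u(x + y) rho(y) dy (a cross-correlation).
    rho_wrapped holds the density sampled at y_m = m*dx, negative y's
    wrapped into the upper half of the array."""
    c = np.fft.ifft(np.fft.fft(u) * np.conj(np.fft.fft(rho_wrapped)))
    return np.real(c) * dx
```

A sanity check: a discrete delta density at y = 0 (value 1/∆x at index 0) reproduces u itself, and a delta at y = ∆x shifts u by one grid point.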
Convergence of the numerical solutions
Proposition
1. u_n converges to u_∞ uniformly; moreover

max_{ℓ,m} ( u_∞^{ℓ,m} − u_n^{ℓ,m} ) ≤ ( 1 − η^M )^n ( λ/(λ + r) )^n K,

where η = 1/(1 + (λ + r)∆t) ∈ (0, 1).

2. | u_∞^{ℓ,m} − u(x_ℓ, m∆t) | → 0 as ∆x, ∆t, ∆y → 0. The convergence is quadratic.

3. The numerical scheme is stable.
Numerical experiments

Table: American options in Kou's model

η1 = η2 = 25, p = 0.6, S0 = 100. The accuracy of the Amin price is up to a penny. The KPW 5EXP price is from [Kou et al. 05]; "B-S" stands for Brennan–Schwartz. The computations are done on the same kind of computer and run times are in seconds.

K    T     σ    λ | Amin Price | KPW 5EXP: Value  Error  Time | Proposed: Value  Error  B-S Time  PSOR Time
90   0.25  0.2  3 |   0.75     |   0.74  -0.01   3.21        |   0.75   0      0.13      0.20
90   0.25  0.3  3 |   1.92     |   1.92   0      2.40        |   1.93   0.01   0.15      0.27
90   0.25  0.2  7 |   1.03     |   1.02  -0.01   3.18        |   1.03   0      0.20      0.28
100  0.25  0.2  3 |   3.78     |   3.77  -0.01   3.08        |   3.78   0      0.15      0.20
100  0.25  0.3  3 |   5.63     |   5.63   0      2.44        |   5.63   0      0.21      0.25
100  0.25  0.2  7 |   4.26     |   4.26   0      3.48        |   4.27   0.01   0.28      0.20
90   1     0.2  3 |   2.91     |   2.90  -0.01   2.43        |   2.92  -0.01   1.05      1.30
90   1     0.3  3 |   5.79     |   5.79   0      2.48        |   5.77  -0.02   1.17      1.57
Numerical experiments cont.

Table: American options in Merton's model

K = 100, T = 0.25, r = 0.05, σ = 0.15, λ = 0.1. The stock price has a lognormal jump distribution with µ = −0.9 and σ = 0.45. dFLV comes from [d'Halluin et al. 05]; "B-S" stands for Brennan–Schwartz.

S(0) | dFLV   | Proposed: Value  Error  LU(B-S) Time  PSOR Time
90   | 10.004 |  10.004  0       0.18          0.24
100  |  3.241 |   3.242  0.001
110  |  1.420 |   1.420  0
Numerical experiments cont.

Figure: Iteration of the price functions: v_n(S, 0) ↑ V(S, 0), S ≥ 0. Option price against stock price S for the payoff (K − S)^+ and the iterates v_1(S, 0), v_2(S, 0), v_3(S, 0), and v_i(S, 0), i = 4, 5, 6.
Numerical experiments cont.

Figure: Iteration of the exercise boundary: s_n(t) ↓ s(t), t ∈ [0, T). Both s_n(t) and s(t) converge to S* < K as t → T; K = 100. The curves s_1(t), s_2(t), s_3(t), and s_i(t), i = 4, 5, 6 (labeled 87.588, 86.189, 85.857, and 85.802) separate the continuation region (above) from the stopping region (below), with S*_approx = 98.223.
The coefficients satisfy r < λ ∫_0^∞ (e^y − 1) ν(dy).
American options in the Black–Scholes model

Let us assume that the stock price is governed by

dS_t = (r − q) S_t dt + σ S_t dW_t,  where q ≥ 0 is the dividend rate.
The value of the finite-horizon American put option is defined by

V(S, t) = sup_{τ∈S_{0,T−t}} E[ e^{−rτ} (K − S_τ)^+ | S_0 = S ].
V(S, t) satisfies the following free boundary problem:

∂_t V + (1/2) σ² S² ∂²_{SS} V + (r − q) S ∂_S V − r V = 0,  S > s(t),
V(S, t) = K − S,  S ≤ s(t),
∂_S V(s(t), t) = −1,
V(S, T) = (K − S)^+.
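This free boundary problem has no closed-form solution, but a standard CRR binomial tree gives a quick benchmark for V(S, 0). The sketch below is a generic textbook method, not the algorithm of the talk; the parameter values in the usage note are illustrative assumptions:

```python
import math

def american_put_crr(S0, K, r, q, sigma, T, n=500):
    """CRR binomial-tree price of the American put with dividend yield q:
    backward induction with an early-exercise check at every node."""
    dt = T / n
    u = math.exp(sigma * math.sqrt(dt))
    d = 1.0 / u
    p = (math.exp((r - q) * dt) - d) / (u - d)   # risk-neutral up-probability
    disc = math.exp(-r * dt)
    # terminal payoffs
    v = [max(K - S0 * u ** j * d ** (n - j), 0.0) for j in range(n + 1)]
    for i in range(n - 1, -1, -1):
        for j in range(i + 1):
            cont = disc * (p * v[j + 1] + (1.0 - p) * v[j])
            v[j] = max(K - S0 * u ** j * d ** (i - j), cont)  # early exercise
    return v[0]
```

For example, with S0 = K = 100, r = 0.05, q = 0, σ = 0.2, T = 1 the American put is worth roughly 6, strictly above the European value of about 5.57.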
Regularity of the value function and the boundary

[Figure: the exercise boundary s(t) separating the continuation region C (above) from the stopping region D (below) in the (t, S) plane.]

It is well known that ([Karatzas & Shreve 98], [Peskir 05])

V(S, t) ∈ C^{2,1}(C),  V(S, t) ∈ C^{2,1}(D),
lim_{S↓s(t)} (1/2) σ² S² ∂²_{SS} V ≥ rK − q s(t) > 0,  t < T,
s(T−) = K if r ≥ q,  s(T−) = (r/q) K if r < q;

moreover, s(t) is continuous on [0, T).
Question: Is s(t) locally Lipschitz or even C¹?
Answer: Yes! Moreover, s(t) is C^∞ [Chen & Chadam 07].
American options for jump diffusions

The dynamics of the stock price S_t follow jump diffusions.

Proposition ([Pham 95], [Yang et al. 06], [Bayraktar 08])
The value function V(S, t) is the unique classical solution of

(∂_t + L_D − (r + λ)) V + λ ∫_R V(S e^y, t) ν(dy) = 0,  S > s(t),
V(S, t) = K − S, S ≤ s(t),  V(S, T) = (K − S)^+,

in which L_D ≜ (1/2) σ² S² ∂²_{SS} + b S ∂_S. Moreover,

∂_S V(s(t), t) = −1, and
(∂_t + L_D − (r + λ)) V + λ ∫_R V(S e^y, t) ν(dy) ≤ 0.
Proposition ([Friedman 75], [Yang et al. 06])

lim_{S↓s(t)} ∂_t V(S, t) = 0,  t ∈ [0, T).
Probabilistic arguments?
Behavior of the free boundary

Continuity of the free boundary: [Pham 95], [Yang et al. 06], [Lamberton & Mikou 08].
Differentiability of the free boundary: [Yang et al. 06] proved s(t) ∈ C¹[0, T) under the condition

r ≥ q + λ ∫_{R_+} (e^z − 1) ν(dz).  (C)
[Levendorskii 04], [Yang et al. 06], [Lamberton & Mikou 08] showed that

s(T−) ≜ lim_{t↑T} s(t) = { K if (C) holds, S* if (C) fails },

where S* is the unique solution of the integral equation

I_0(S) ≜ qS − rK + λ ∫_R (S e^y − K)^+ ν(dy) = 0.
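The integral equation for S* is easy to solve numerically. A sketch for Kou's density with the parameters of the earlier table (η1 = 25, p = 0.6, and assumed r = 0.05, q = 0, λ = 3) follows; the closed form for the jump integral is our own derivation under these assumptions (for 0 < S ≤ K only upward jumps with y > ln(K/S) contribute, and η1 > 1 makes the integral finite):

```python
import math

def I0(S, K=100.0, r=0.05, q=0.0, lam=3.0, p=0.6, eta1=25.0):
    """I0(S) = q*S - r*K + lam * int (S e^y - K)^+ nu(dy) for Kou's
    density, with the jump integral in closed form for 0 < S <= K:
    int_{y > ln(K/S)} (S e^y - K) p eta1 e^{-eta1 y} dy
        = p * S / (eta1 - 1) * (S/K)^(eta1 - 1)."""
    return q * S - r * K + lam * p * S / (eta1 - 1.0) * (S / K) ** (eta1 - 1.0)

def find_S_star(K=100.0, tol=1e-10):
    """Bisection for the unique root of the increasing function I0 on (0, K)."""
    lo, hi = 1e-6, K
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if I0(mid, K=K) < 0.0 else (lo, mid)
    return 0.5 * (lo + hi)
```

With q = 0 the root is available analytically, S* = K (rK(η1 − 1)/(λpK))^{1/η1} = 100 · (2/3)^{1/25} ≈ 98.39, which also confirms S* < K, consistent with the figure.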
Main results
Our goals: extend regularity properties of the free boundary s(t) to the case where (C) fails. Our approach treats both cases simultaneously.
Let us change back to the x variable: x = log S, u(x, t) = V(S, T − t) and b(t) = log s(T − t). We will state our main results in these variables.
Theorem
b(t) ∈ H^{5/8}([ε, T_0]).
Theorem
b(t) ∈ C¹(0, T].
Main results cont.

Theorem
Assume ν has a density, i.e. ν(dz) = ρ(z)dz, and let α ∈ (0, 1/2). If ρ(z) satisfies ∫_{−∞}^u ρ(z)dz ∈ H^{2α}(R_−), then b(t) ∈ H^{3/2+α}([ε, T]). If ρ(z) ∈ H^{ℓ−1+2α}(R_−) for ℓ ≥ 1, then b(t) ∈ H^{3/2+ℓ/2+α}([ε, T]).
Corollary
If ρ(z) ∈ C^∞(R_−) with (d^ℓ/dz^ℓ) ρ(z) bounded on R_− for each ℓ ≥ 1, then b(t) ∈ C^∞(0, T].
Remark
The free boundaries of Merton's and Kou's models are C^∞.
Smooth-fit property

By the smooth-fit property, for any ε > 0, we have

(1/ε) [ ∂_x u(b(t), t) − ∂_x u(b(t − ε), t − ε) + e^{b(t)} − e^{b(t−ε)} ] = 0.

This yields

lim_{ε→0} ( b(t) − b(t − ε) ) / ε = − ∂²_{xt}u(b(t)+, t) / ( ∂²_{xx}u(b(t)+, t) + e^{b(t)} ).
Two critical points to remove the condition (C): for t ∈ [0, T),

1. ∂²_{xx}u(b(t)+, t) ≜ lim_{x→b(t)} ∂²_{xx}u(x, t) > −e^{b(t)} (i.e. lim_{S→s(t)} ∂²_{SS}V(S, t) > 0),
2. ∂²_{xt}u(b(t)+, t) ≜ lim_{x→b(t)} ∂²_{xt}u(x, t) exists in the classical sense and is continuous in t.
Denominator

Let us introduce the auxiliary function on R × [0, T]:

I(x, t) ≜ q e^x − rK + λ ∫_R [ u(x + z, t) + e^{x+z} − K ] ν(dz).

◮ (∂_t − L + (r + λ)) u(x, t) = −I(x, t), for x < b(t), t ∈ (0, T],
◮ x ↦ I(x, t) is strictly increasing,
◮ lim_{x↓−∞} I(x, t) = −rK and lim_{x↑+∞} I(x, t) = +∞.
Let us consider the level curve B(t) ≜ {x : I(x, t) = 0, t ∈ [0, T]}:
◮ B(t) ∈ C¹(0, T] ∩ C[0, T],
◮ B(t) > b(t) for t ∈ (0, T].
Denominator cont.

Corollary
∂²_{xx}u(b(t)+, t) > −e^{b(t)}, for t ∈ (0, T].

Proof.
On the one hand, since B(t) > b(t) and x ↦ I(x, t) is strictly increasing, we have I(b(t), t) < 0 for t ∈ (0, T]. On the other hand,

0 = lim_{x↓b(t)} (∂_t − L + (r + λ)) u(x, t)
  = −(1/2) σ² lim_{x↓b(t)} ∂²_{xx}u(x, t) − (1/2) σ² e^{b(t)} − I(b(t), t).

The statement follows from I(b(t), t) < 0.
Numerator

Lemma
∂²_{xt}u(b(t)+, t) is continuous in t ∈ [0, T).

Proof.
Step 1: Use the maximum principle for the integro-differential operator ∂_t − L + (r + λ) and a test function used in [Friedman & Shen 02] to show b(t) ∈ H^{5/8}([ε, T_0]). (This is the key step to drop the condition (C).)
Step 2: Since 5/8 > 1/2, a generalization of the result of [Cannon et al. 74] gives us that

∂²_{xt}u(b(t)+, t) ≜ lim_{x↓b(t)} ∂²_{xt}u(x, t)

is a continuous function with respect to t.
Higher regularity

Let ξ ≜ x − b(t) and h(ξ, t) ≜ λ ∫_R w(ξ + z, t) ν(dz) + σ ∂_t σ (∂²_{xx}u − ∂_x u). Then w(ξ, t) = ∂_t u(x, t) satisfies

∂_t w − (1/2) σ² ∂²_{ξξ}w − ( µ + b′(t) − (1/2) σ² ) ∂_ξ w + (r + λ) w = h(ξ, t),

b′(t) = −(1/2) σ² ∂_ξ w(0, t) / [ (µ − r − λ) e^{b(t)} + (r + λ) K − λ ∫_R u(b(t) + z, t) ν(dz) ].

Regularity bootstrapping scheme:

w ∈ H^{2α,α} −→ h ∈ H^{2α,α},
h ∈ H^{2α,α} and b′ ∈ H^α ⟹ w ∈ H^{2α+2,α+1} ⟹ ∂_ξ w(0, t) ∈ H^{α+1/2} ⟹ b′(t) ∈ H^{α+1/2};

α gets updated to α + 1/2.

(The step −→ needs a regularity assumption on the jump density ρ, because h is a nonlocal integral and the high-order derivatives of w from the continuation and stopping regions may not agree on the free boundary.)
Conclusion and future research
The iterative method was also applied to
◮ the hedging problem [Kirch & Runggaldier 04],
◮ the optimal investment problem in an illiquid market [Pham & Tankov 08], where illiquidity is introduced when the asset prices are observed only at random times,
◮ inventory control [Bayraktar & Ludkovski 08].
Conclusion and future research cont.
Non-degenerate diffusion processes dominate the pure jump process:
◮ the value function is smooth,
◮ the free boundary is smooth.
When the diffusion component degenerates or vanishes,
◮ the value function is not expected to be a classical solution,
◮ the smooth-fit principle may fail: [Boyarchenko & Levendorskii 02] and [Alili & Kyprianou 05].

Future work: study the regularity of the value function and the optimal exercise boundary for pure jump Lévy processes ([Caffarelli & Silvestre 08]).
Appendix 1: The free boundary is Hölder continuous

Lemma
For any ε > 0, if there exists δ > 0 such that for any t_1, t_2 satisfying ε ≤ t_1 < t_2 ≤ T_0 and t_2 − t_1 ≤ δ,

u(b(t_1), t) − u(b(t_1), t_1) ≤ C_ε (t_2 − t_1)^α,  t_1 ≤ t ≤ t_2,  0 < α ≤ 1,

then there exists δ′ ∈ (0, δ] such that

b(t_1) − b(t_2) ≤ C′_ε (t_2 − t_1)^{α/2},  0 ≤ t_2 − t_1 ≤ δ′.

[Figure: the boundary b(t) on [ε, T_0], with the segment u(b(t_1), t), t ∈ [t_1, t_2], and the windows δ and δ′.]
Applying this lemma twice, one can show that b(t) is Hölder continuous:

1. u(b(t_1), t) − u(b(t_1), t_1) ≤ max_s ∂_t u(b(t_1), s) (t_2 − t_1) ⟹ there exists δ_1 such that b(t_1) − b(t_2) ≤ C′_1 (t_2 − t_1)^{1/2}, 0 ≤ t_2 − t_1 ≤ δ_1.
2. As a result, for 0 ≤ t_2 − t_1 ≤ δ_1, ∂_t u(b(t_1), t) − ∂_t u(b(t), t) ≤ C |b(t_1) − b(t)|^{1/2} ≤ C_2 (t_2 − t_1)^{1/4}. The estimate for ∂_t u(b(t_1), t) gets updated: u(b(t_1), t) − u(b(t_1), t_1) ≤ C_2 (t_2 − t_1)^{5/4}. Then apply the lemma again.
Appendix 1 cont.: Proof of the lemma

Consider the test function

χ(x) = { [ √C_ε (t_2 − t_1)^{α/2} + β (x − b(t_1)) ]^+ }²,  b(t_2) ≤ x ≤ b(t_1),

with β and δ′ chosen sufficiently small. We have χ(x) > 0 when x > b(t_1) − (√C_ε / β)(t_2 − t_1)^{α/2}.

We want to show χ(x) ≥ (u − g)(x, t) for (x, t) ∈ D, for suitably chosen positive constant β and δ′ ∈ (0, δ]. For any (x, t) ∈ D, since (u − g)(x, t) > 0, we have

x > b(t_1) − (√C_ε / β)(t_2 − t_1)^{α/2} ⟹ b(t_1) − b(t_2) ≤ (√C_ε / β)(t_2 − t_1)^{α/2}.

[Figure: the region D between the boundary b(t) and b(t_1) over [t_1, t_2], with χ ≥ u − g on its parabolic boundary (χ ≥ 0 = u − g on b(t)) and Lχ ≥ L(u − g) inside D.]
Appendix 2: B(t) > b(t), t ∈ (0, T]

Step 1: B(t) ≥ b(t). If there is a t_0 ∈ (0, T] such that B(t_0) < b(t_0), since x ↦ I(x, t) is strictly increasing, we have I(x, t_0) > 0 for all x ∈ (B(t_0), b(t_0)). This implies

(∂_t − L_D + (r + λ)) u(x, t_0) = −I(x, t_0) < 0, for any x ∈ (B(t_0), b(t_0)),

which contradicts (∂_t − L_D + (r + λ)) u(x, t) ≥ 0.

Step 2: B(t) > b(t). If there is a t_0 ∈ (0, T] such that B(t_0) = b(t_0), we have (∂_t − L_D + (r + λ))(u − g)(x, t) = I(x, t) > 0 when x > B(t). Since u(b(t_0), t_0) − g(t_0) = 0 while u(x, t) − g(x, t) > 0 in the continuation region, the Hopf lemma gives ∂_x(u − g)(b(t_0), t_0) > 0, which contradicts the smooth-fit property at (b(t_0), t_0).

[Figure: the curves b(t) and B(t) in the (t, x) plane touching at t_0.]