
## Solutions to Examples on Stochastic Differential Equations

December 4, 2012


Q 1. Let $W_1(t)$ and $W_2(t)$ be two Wiener processes with correlated increments $\Delta W_1$ and $\Delta W_2$ such that $E[\Delta W_1 \Delta W_2] = \rho\,\Delta t$. Prove that $E[W_1(t)W_2(t)] = \rho\,t$. What is the value of $E[W_1(t)W_2(s)]$?

Let $0 = t_0 < t_1 < t_2 < \cdots < t_n = t$ be a dissection of the interval $[0, t]$; then

$$W^{(1)}_t = \sum_{k=1}^{n} \Delta W^{(1)}_k\,, \qquad W^{(2)}_t = \sum_{k=1}^{n} \Delta W^{(2)}_k\,,$$

where $\Delta W^{(1)}_k$ and $\Delta W^{(2)}_k$ are zero-mean Gaussian deviates satisfying $E[\Delta W^{(1)}_k \Delta W^{(2)}_k] = \rho(t_k - t_{k-1})$.

Thus

$$E[W^{(1)}_t W^{(2)}_t] = E\left[\left(\sum_{k=1}^{n} \Delta W^{(1)}_k\right)\left(\sum_{j=1}^{n} \Delta W^{(2)}_j\right)\right] = \sum_{j,k=1}^{n} E\big[\Delta W^{(1)}_k \Delta W^{(2)}_j\big]\,.$$

However, $\Delta W^{(1)}_k$ and $\Delta W^{(2)}_j$ are independent Gaussian deviates if $k \ne j$, and so $E[\Delta W^{(1)}_k \Delta W^{(2)}_j] = \rho(t_k - t_{k-1})\,\delta_{kj}$. Thus

$$E[W^{(1)}_t W^{(2)}_t] = \sum_{k=1}^{n} \rho(t_k - t_{k-1}) = \rho(t_n - t_0) = \rho\,t\,.$$

For the second question, suppose without loss of generality that $s \le t$. Then $W_1(t) = W_1(s) + \big(W_1(t) - W_1(s)\big)$, and the increment $W_1(t) - W_1(s)$ is independent of $W_2(s)$, so the same argument applied on $[0, s]$ gives $E[W_1(t)W_2(s)] = \rho\,\min(s, t)$.
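The identity can be checked numerically. Below is a minimal Monte Carlo sketch (the values of $\rho$, $t$, the step count and the sample size are illustrative choices, not taken from the text): correlated increments are built from independent normals via $\Delta W_2 = \rho\,\Delta W_1 + \sqrt{1-\rho^2}\,\Delta Z$.

```python
import numpy as np

rng = np.random.default_rng(0)
rho, t, n_steps, n_paths = 0.6, 2.0, 100, 50_000   # illustrative values
dt = t / n_steps

# Correlated increments: dW2 = rho*dW1 + sqrt(1 - rho^2)*dZ gives
# E[dW1 dW2] = rho*dt, with dW1 and dZ independent N(0, dt) deviates.
dW1 = rng.standard_normal((n_paths, n_steps)) * np.sqrt(dt)
dZ = rng.standard_normal((n_paths, n_steps)) * np.sqrt(dt)
dW2 = rho * dW1 + np.sqrt(1.0 - rho**2) * dZ

W1 = dW1.sum(axis=1)   # W1(t) as the sum of its increments
W2 = dW2.sum(axis=1)

print(np.mean(W1 * W2))   # close to rho*t = 1.2
```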

Q 2. Let $W(t)$ be a Wiener process and let $\lambda$ be a positive constant. Show that $\lambda^{-1}W(\lambda^2 t)$ and $t\,W(1/t)$ are each Wiener processes.

This question concerns the properties of a random variable under changes of variable. We need to show that each random variable is Gaussian distributed with mean value zero and variance $t$.

(a) Here $t$ is a parameter and $W(\lambda^2 t)$ is a Gaussian random variable with mean value zero and variance $\lambda^2 t$. Let $Y = \lambda^{-1}W(\lambda^2 t)$; then

$$f_Y = f_W\,\frac{dW}{dY} = \lambda f_W = \lambda\,\frac{1}{\sqrt{2\pi\lambda^2 t}}\exp\left(-\frac{W^2}{2\lambda^2 t}\right) = \frac{1}{\sqrt{2\pi t}}\exp\left(-\frac{\lambda^2 Y^2}{2\lambda^2 t}\right) = \frac{1}{\sqrt{2\pi t}}\exp\left(-\frac{Y^2}{2t}\right).$$

Thus $Y$ is a Gaussian deviate with mean value zero and variance $t$.

(b) Here $t$ is again a parameter and $W(1/t)$ is a Gaussian random variable with mean value zero and variance $1/t$. Let $Y = t\,W(1/t)$; then

$$f_Y = f_W\,\frac{dW}{dY} = \frac{1}{t}\,f_W = \frac{1}{t}\,\frac{1}{\sqrt{2\pi(1/t)}}\exp\left(-\frac{W^2}{2(1/t)}\right) = \frac{1}{\sqrt{2\pi t}}\exp\left(-\frac{(Y/t)^2}{2(1/t)}\right) = \frac{1}{\sqrt{2\pi t}}\exp\left(-\frac{Y^2}{2t}\right).$$

Thus $Y$ is a Gaussian deviate with mean value zero and variance $t$.
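A quick numerical sanity check is sketched below (`lam` and `t` are arbitrary illustrative values): since $W(\lambda^2 t) \sim N(0, \lambda^2 t)$ and $W(1/t) \sim N(0, 1/t)$, both variables can be sampled directly and their sample variances compared with $t$.

```python
import numpy as np

rng = np.random.default_rng(1)
lam, t, n = 3.0, 1.7, 500_000   # illustrative values

# W(lam^2 t) ~ N(0, lam^2 t): draw it directly, then rescale by 1/lam
Y1 = rng.standard_normal(n) * np.sqrt(lam**2 * t) / lam

# W(1/t) ~ N(0, 1/t): draw it directly, then rescale by t
Y2 = t * rng.standard_normal(n) * np.sqrt(1.0 / t)

print(Y1.var(), Y2.var())   # both close to t = 1.7
```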


Q 3. Suppose that $(\varepsilon_1, \varepsilon_2)$ is a pair of uncorrelated $N(0,1)$ deviates.

(a) By recognising that $\xi_x = \sigma_x\varepsilon_1$ has mean value zero and variance $\sigma_x^2$, construct a second deviate $\xi_y$ with mean value zero such that the Gaussian deviate $X = [\,\xi_x, \xi_y\,]^T$ has mean value zero and correlation tensor

$$\Omega = \begin{pmatrix} \sigma_x^2 & \rho\,\sigma_x\sigma_y \\ \rho\,\sigma_x\sigma_y & \sigma_y^2 \end{pmatrix}$$

where $\sigma_x > 0$, $\sigma_y > 0$ and $|\rho| < 1$.

(b) Another possible way to approach this problem is to recognise that every correlation tensor is similar to a diagonal matrix with positive entries. Let

$$\alpha = \frac{\sigma_x^2 - \sigma_y^2}{2}\,, \qquad \beta = \frac{1}{2}\sqrt{(\sigma_x^2 + \sigma_y^2)^2 - 4(1-\rho^2)\sigma_x^2\sigma_y^2} = \sqrt{\alpha^2 + \rho^2\sigma_x^2\sigma_y^2}\,.$$

Show that

$$Q = \frac{1}{\sqrt{2\beta}}\begin{pmatrix} \sqrt{\beta+\alpha} & -\sqrt{\beta-\alpha} \\ \sqrt{\beta-\alpha} & \sqrt{\beta+\alpha} \end{pmatrix}$$

is an orthogonal matrix which diagonalises $\Omega$, and hence show how this idea may be used to find $X = [\,\xi_x, \xi_y\,]^T$ with the correlation tensor $\Omega$.

(c) Suppose that $(\varepsilon_1, \ldots, \varepsilon_n)$ is a vector of $n$ uncorrelated Gaussian deviates drawn from the distribution $N(0,1)$. Use the previous idea to construct an $n$-dimensional random column vector $X$ with correlation structure $\Omega$, where $\Omega$ is a positive definite $n \times n$ array.

(a) Let $X = [\,\xi_x, \xi_y\,]^T$; then $\xi_x$ has variance $\sigma_x^2$ and so we may write $\xi_x = \sigma_x\varepsilon_1$ without any loss of generality. The task is now to find $\xi_y$. The idea is to write $\xi_y = a\,\varepsilon_1 + b\,\varepsilon_2$. It now follows that

$$E[\,\xi_x\xi_y\,] = E[\,\sigma_x\varepsilon_1(a\,\varepsilon_1 + b\,\varepsilon_2)\,] = a\,\sigma_x\,, \qquad E[\,\xi_y\xi_y\,] = a^2 + b^2\,.$$

Therefore, choose $a\sigma_x = \rho\,\sigma_x\sigma_y$ and $a^2 + b^2 = \sigma_y^2$. Thus $a = \rho\,\sigma_y$ and $b^2 = \sigma_y^2(1-\rho^2)$. One possible vector deviate is

$$X = \big[\,\sigma_x\varepsilon_1\,,\; \rho\,\sigma_y\varepsilon_1 + \sqrt{1-\rho^2}\,\sigma_y\varepsilon_2\,\big]^T.$$
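This construction is easy to verify empirically. A short sketch, with illustrative parameter values of our own choosing:

```python
import numpy as np

rng = np.random.default_rng(2)
sx, sy, rho, n = 1.5, 0.8, -0.4, 400_000   # illustrative parameters

e1 = rng.standard_normal(n)
e2 = rng.standard_normal(n)
xi_x = sx * e1
xi_y = rho * sy * e1 + np.sqrt(1.0 - rho**2) * sy * e2

# Sample covariance should approximate Omega
print(np.cov(xi_x, xi_y))
```

The off-diagonal entry of the sample covariance approaches $\rho\,\sigma_x\sigma_y = -0.48$, and the diagonal entries approach $\sigma_x^2 = 2.25$ and $\sigma_y^2 = 0.64$.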


(b) To check that $Q$ is an orthogonal matrix, it is enough to observe that the two columns of $Q$ are orthogonal to each other and that each column of $Q$ is a unit vector. Thus

$$Q^T\Omega Q = \frac{1}{2\beta}\begin{pmatrix} \sqrt{\beta+\alpha} & \sqrt{\beta-\alpha} \\ -\sqrt{\beta-\alpha} & \sqrt{\beta+\alpha} \end{pmatrix}\begin{pmatrix} \sigma_x^2 & \rho\,\sigma_x\sigma_y \\ \rho\,\sigma_x\sigma_y & \sigma_y^2 \end{pmatrix}\begin{pmatrix} \sqrt{\beta+\alpha} & -\sqrt{\beta-\alpha} \\ \sqrt{\beta-\alpha} & \sqrt{\beta+\alpha} \end{pmatrix}.$$

Without substituting for $\alpha$ and $\beta$, the computation of $Q^T\Omega Q$ simplifies to

$$\frac{1}{2\beta}\begin{pmatrix} (\beta+\alpha)\sigma_x^2 + (\beta-\alpha)\sigma_y^2 + 2\rho\sigma_x\sigma_y\sqrt{\beta^2-\alpha^2} & (\sigma_y^2-\sigma_x^2)\sqrt{\beta^2-\alpha^2} + 2\alpha\rho\,\sigma_x\sigma_y \\ (\sigma_y^2-\sigma_x^2)\sqrt{\beta^2-\alpha^2} + 2\alpha\rho\,\sigma_x\sigma_y & (\beta+\alpha)\sigma_y^2 + (\beta-\alpha)\sigma_x^2 - 2\rho\sigma_x\sigma_y\sqrt{\beta^2-\alpha^2} \end{pmatrix}.$$

We need to demonstrate that this is a diagonal matrix. Consider therefore

$$(\sigma_y^2-\sigma_x^2)\sqrt{\beta^2-\alpha^2} + 2\alpha\rho\,\sigma_x\sigma_y = -2\alpha\sqrt{\beta^2-\alpha^2} + 2\alpha\rho\,\sigma_x\sigma_y = 2\alpha\big(\rho\,\sigma_x\sigma_y - \sqrt{\beta^2-\alpha^2}\big)\,.$$

It follows directly from the definition of $\beta$ that this entry is zero. Consequently, $Q^T\Omega Q$ is a diagonal matrix. By noting, first, that $\sigma_x^2 - \sigma_y^2 = 2\alpha$ and, second, that $\rho\,\sigma_x\sigma_y = \sqrt{\beta^2-\alpha^2}$, the array $Q^T\Omega Q$ now becomes

$$\frac{1}{2\beta}\begin{pmatrix} \beta(\sigma_x^2+\sigma_y^2) + 2\alpha^2 + 2\rho\sigma_x\sigma_y\sqrt{\beta^2-\alpha^2} & 0 \\ 0 & \beta(\sigma_x^2+\sigma_y^2) - 2\alpha^2 - 2\rho\sigma_x\sigma_y\sqrt{\beta^2-\alpha^2} \end{pmatrix} = \frac{1}{2\beta}\begin{pmatrix} \beta(\sigma_x^2+\sigma_y^2) + 2\beta^2 & 0 \\ 0 & \beta(\sigma_x^2+\sigma_y^2) - 2\beta^2 \end{pmatrix} = \frac{1}{2}\begin{pmatrix} \sigma_x^2+\sigma_y^2+2\beta & 0 \\ 0 & \sigma_x^2+\sigma_y^2-2\beta \end{pmatrix}.$$

It is again obvious from the definition of $\beta$ that $\sigma_x^2+\sigma_y^2 \ge 2\beta$, and so the entries of this diagonal matrix are non-negative. Let

$$Y = \frac{1}{\sqrt{2}}\Big[\,\sqrt{\sigma_x^2+\sigma_y^2+2\beta}\;\varepsilon_1\,,\; \sqrt{\sigma_x^2+\sigma_y^2-2\beta}\;\varepsilon_2\,\Big]^T$$

where $\varepsilon_1 \sim N(0,1)$ and $\varepsilon_2 \sim N(0,1)$; then $Y$ has covariance tensor $D = Q^T\Omega Q$, and the random variable $X = QY$ has mean value zero and covariance tensor $QDQ^T = \Omega$.
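A direct numerical check of this diagonalisation, for one illustrative choice of $\sigma_x$, $\sigma_y$ and $\rho$ (a sketch, not part of the original solution):

```python
import numpy as np

sx, sy, rho = 2.0, 1.0, 0.5   # illustrative values
alpha = 0.5 * (sx**2 - sy**2)
beta = 0.5 * np.sqrt((sx**2 + sy**2)**2 - 4.0 * (1.0 - rho**2) * sx**2 * sy**2)

Q = np.array([[np.sqrt(beta + alpha), -np.sqrt(beta - alpha)],
              [np.sqrt(beta - alpha),  np.sqrt(beta + alpha)]]) / np.sqrt(2.0 * beta)
Omega = np.array([[sx**2, rho * sx * sy],
                  [rho * sx * sy, sy**2]])

D = Q.T @ Omega @ Q
print(D)   # off-diagonal entries vanish (up to rounding)
```

The diagonal of `D` coincides with the eigenvalues of $\Omega$, as the similarity argument requires.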

(c) Since $\Omega$ is a symmetric positive definite $n \times n$ array, it is possible to find an $n \times n$ orthogonal matrix $Q$ such that $\Omega = QDQ^T$, in which $D$ is a diagonal matrix whose entries are the eigenvalues of $\Omega$ in some order. Since $\Omega$ is positive definite, each entry of $D$ is positive. Let $Y = [\,\sqrt{\lambda_1}\,\varepsilon_1, \cdots, \sqrt{\lambda_n}\,\varepsilon_n\,]^T$ where $\lambda_k > 0$ is the $k$-th diagonal entry of $D$. Consider the properties of $X = QY$. Clearly

$$E[X] = E[QY] = Q\,E[Y] = 0\,, \qquad E[XX^T] = E[QYY^TQ^T] = Q\,E[YY^T]\,Q^T = QDQ^T = \Omega\,.$$


Thus X has the required properties.
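Part (c) translates almost directly into code. The sketch below builds an arbitrary positive definite $\Omega$ (our own illustrative choice), diagonalises it with `numpy.linalg.eigh`, and confirms that $X = QY$ has sample covariance close to $\Omega$:

```python
import numpy as np

rng = np.random.default_rng(3)

# An illustrative positive definite Omega (any A A^T plus a ridge works)
A = rng.standard_normal((4, 4))
Omega = A @ A.T + 0.5 * np.eye(4)

lam, Q = np.linalg.eigh(Omega)           # Omega = Q diag(lam) Q^T
eps = rng.standard_normal((4, 300_000))  # uncorrelated N(0,1) deviates
Y = np.sqrt(lam)[:, None] * eps          # cov(Y) = diag(lam)
X = Q @ Y                                # cov(X) = Q diag(lam) Q^T = Omega

print(np.max(np.abs(np.cov(X) - Omega)))   # small sampling error
```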

Q 4. Let $X$ be normally distributed with mean zero and unit standard deviation; then $Y = X^2$ is said to be $\chi^2$ distributed with one degree of freedom.

(a) Show that $Y$ is Gamma distributed with $\lambda = \rho = 1/2$.

(b) What is now the distribution of $Z = aX^2$ if $a > 0$ and $X \sim N(0, \sigma^2)$?

(c) If $X_1, \cdots, X_n$ are $n$ independent Gaussian distributed random variables with mean zero and unit standard deviation, what is the distribution of $Y = X^TX$, where $X$ is the $n$-dimensional column vector with $k$-th entry $X_k$?

(a) Since $Y = X^2$, clearly $Y \ge 0$ and every non-zero value of $Y$ arises from either $X$ or $-X$. Therefore, the density of $Y$ is

$$f_Y(y) = 2f_X(x)\,\frac{dX}{dY} = \frac{2}{\sqrt{2\pi}}\,e^{-x^2/2}\;\frac{1}{2}\,y^{-1/2} = \frac{1}{\sqrt{2\pi}}\,y^{-1/2}\,e^{-y/2}\,.$$

Evidently the distribution function for $Y$ is the special case of the Gamma distribution in which $\lambda = \rho = 1/2$.
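A Gamma distribution with rate $\lambda = 1/2$ and shape $\rho = 1/2$ has mean $\rho/\lambda = 1$ and variance $\rho/\lambda^2 = 2$, so a quick moment check on samples of $X^2$ is possible (a sketch; the sample size is an arbitrary choice):

```python
import numpy as np

rng = np.random.default_rng(4)
Y = rng.standard_normal(1_000_000) ** 2   # chi-squared, one degree of freedom

# Gamma(shape 1/2, rate 1/2): mean = 1, variance = 2
print(Y.mean(), Y.var())
```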

(b) Since $Z = aX^2$ with $a > 0$, clearly $Z \ge 0$ and once again every non-zero value of $Z$ arises from either $X$ or $-X$. The density of $Z$ is now

$$f_Z(z) = 2f_X(x)\,\frac{dX}{dZ} = \frac{2}{\sqrt{2\pi}\,\sigma}\,e^{-x^2/2\sigma^2}\;\frac{1}{2\sqrt{a}}\,z^{-1/2} = \frac{1}{\sqrt{2\pi a}\,\sigma}\,z^{-1/2}\,e^{-z/2a\sigma^2}\,.$$

The distribution function for $Z$ is now the Gamma distribution with $\rho = 1/2$ and $\lambda = (2a\sigma^2)^{-1}$.

(c) If $X_1, \cdots, X_n$ are each normally distributed with mean zero and unit standard deviation, then $Y = X_1^2 + X_2^2 + \cdots + X_n^2$ is likewise Gamma distributed with parameters $\lambda = 1/2$ and $\rho = n/2$. This result follows from the previous example. In this case we say that $Y$ is $\chi^2$ distributed with $n$ degrees of freedom.

The chi-squared distribution with $n$ degrees of freedom plays an important role in statistical hypothesis testing in which deviates are assigned to "bins".
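The $n$-degree case has mean $\rho/\lambda = n$ and variance $\rho/\lambda^2 = 2n$, which a sampling sketch confirms (the choice $n = 5$ and the sample size are illustrative):

```python
import numpy as np

rng = np.random.default_rng(5)
n_dof, n_samples = 5, 1_000_000
X = rng.standard_normal((n_samples, n_dof))
Y = np.sum(X**2, axis=1)   # chi-squared with n_dof degrees of freedom

print(Y.mean(), Y.var())   # close to n and 2n, i.e. 5 and 10
```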

Q 5. Calculate the total variation (when it exists) for the functions

(a) $f(x) = |x|$, $x \in [-1, 2]$  
(b) $g(x) = \log x$, $x \in (0, 1]$  
(c) $h(x) = 2x^3 + 3x^2 - 12x + 5$, $x \in [-3, 2]$  
(d) $k(x) = 2\sin 2x$, $x \in [\pi/12, 5\pi/4]$  
(e) $n(x) = H(x) - H(x-1)$, $x \in \mathbb{R}$  
(f) $m(x) = (\sin 3x)/x$, $x \in [-\pi/3, \pi/3]$.
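Whatever analytic answers are obtained by splitting each function into monotone pieces can be cross-checked numerically from the definition $V(f) = \sup \sum_k |f(x_{k+1}) - f(x_k)|$ over dissections. A sketch for parts (a) and (c); the helper `total_variation` and the grid size are our illustrative choices, not from the text:

```python
import numpy as np

def total_variation(f, a, b, n=1_000_000):
    """Approximate the total variation of f on [a, b] over a fine dissection."""
    x = np.linspace(a, b, n)
    return np.sum(np.abs(np.diff(f(x))))

# (a) |x| on [-1, 2]: variation 1 on [-1, 0] plus 2 on [0, 2], i.e. 3
print(total_variation(np.abs, -1.0, 2.0))

# (c) h has turning points at x = -2 and x = 1 (h' = 6(x + 2)(x - 1)), so the
# variation is |h(-2)-h(-3)| + |h(1)-h(-2)| + |h(2)-h(1)| = 11 + 27 + 11 = 49
h = lambda x: 2*x**3 + 3*x**2 - 12*x + 5
print(total_variation(h, -3.0, 2.0))
```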
