Lyapunov Operator - University of California, Berkeley (me234/LyapunovRiccati.pdf)

Lyapunov Operator

Let A ∈ Fn×n be given, and define a linear operator LA : Cn×n → Cn×n as

LA (X) := A∗X + XA

Suppose A is diagonalizable (what follows can be generalized even if this is

not possible - the qualitative results still hold). Let {λ1, λ2, . . . , λn} be the

eigenvalues of A. In general, these are complex numbers. Let V ∈ Cn×n be

the matrix of eigenvectors for A∗, so

V = [v1 v2 · · · vn] ∈ Cn×n

is invertible, and for each i

A∗vi = λ̄ivi

Let wi ∈ Cn be defined so that

V⁻¹ = [ w1ᵀ ; w2ᵀ ; … ; wnᵀ ]

(the rows w1ᵀ, …, wnᵀ stacked from top to bottom).

Since V −1V = In, we have that

wiᵀvj = δij := 1 if i = j, 0 if i ≠ j

For each 1 ≤ i, j ≤ n define

Xij := vi vj∗ ∈ Cn×n


Lyapunov Operator: Eigenvalues

Lemma 63 The set {Xij}i,j=1,…,n is a linearly independent set, and it is a full set of eigenvectors for the linear operator LA. Moreover, the eigenvalues of LA are the set of complex numbers

{ λ̄i + λj }i,j=1,…,n

Proof: Suppose {αij}i,j=1,…,n are scalars and

∑_{j=1}^{n} ∑_{i=1}^{n} αij Xij = 0n×n    (7.19)

Premultiply this by wlᵀ and postmultiply by wk:

0 = wlᵀ ( ∑_{j=1}^{n} ∑_{i=1}^{n} αij Xij ) wk
  = ∑_{j=1}^{n} ∑_{i=1}^{n} αij wlᵀ vi vj∗ wk
  = ∑_{j=1}^{n} ∑_{i=1}^{n} αij δli δkj
  = αlk

which holds for any 1 ≤ k, l ≤ n. Hence, all of the α's are 0, and the set {Xij}i,j=1,…,n is indeed a linearly independent set. Also, by definition of LA (and using vj∗A = (A∗vj)∗ = λj vj∗), we have

LA (Xij) = A∗Xij + XijA
         = A∗vi vj∗ + vi vj∗ A
         = λ̄i vi vj∗ + vi (λj vj∗)
         = (λ̄i + λj) vi vj∗
         = (λ̄i + λj) Xij

So, Xij is indeed an eigenvector of LA with eigenvalue (λ̄i + λj). ♯
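The lemma is easy to check numerically by representing LA as an n² × n² matrix. This sketch is not from the notes; it assumes NumPy and uses the column-stacking identity vec(AXB) = (Bᵀ ⊗ A) vec X, under which LA(X) = A∗X + XA becomes (I ⊗ A∗ + Aᵀ ⊗ I) vec X:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))

# L_A(X) = A*X + XA vectorizes (column-stacking) to (I kron A* + A^T kron I) vec X
I = np.eye(n)
L = np.kron(I, A.conj().T) + np.kron(A.T, I)

lam = np.linalg.eigvals(A)
# Lemma 63 predicts the eigenvalues conj(lambda_i) + lambda_j
predicted = np.array([li.conj() + lj for li in lam for lj in lam])
actual = np.linalg.eigvals(L)

# every predicted eigenvalue appears among the computed ones
err = max(min(abs(p - a) for a in actual) for p in predicted)
print(err)
```

The printed mismatch is at the level of floating-point roundoff, consistent with the lemma.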


Lyapunov Operator: Alternate Invertibility Proof

Do a Schur decomposition of A, namely A = QΛQ∗, with Q unitary and Λ upper triangular. Then the equation A∗X + XA = R can be written as

A∗X + XA = R  ↔  QΛ∗Q∗X + XQΛQ∗ = R
              ↔  Λ∗Q∗XQ + Q∗XQΛ = Q∗RQ
              ↔  Λ∗X̃ + X̃Λ = R̃

where X̃ := Q∗XQ and R̃ := Q∗RQ. Since Q is invertible, the redefinition of variables is an invertible transformation.

Writing out all of the equations (we are trying to show that for every R, there is a unique X satisfying the equation) yields an upper triangular system with n² unknowns, and the values λ̄i + λj on the diagonal. Do it. ♯

This is an easy (and computationally motivated) proof that the Lyapunov operator LA is an invertible operator if and only if λ̄i + λj ≠ 0 for all eigenvalues of A.
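The triangular structure is also the basis of a practical solver (the Bartels-Stewart idea). The sketch below is not from the notes and assumes NumPy/SciPy: reduce A to complex Schur form and solve Λ∗X̃ + X̃Λ = R̃ one column at a time by back-substitution, then undo the change of variables.

```python
import numpy as np
from scipy.linalg import schur, solve_triangular

def lyap_solve(A, R):
    """Solve A* X + X A = R via Schur reduction; assumes conj(l_i) + l_j != 0."""
    n = A.shape[0]
    Lam, Q = schur(A, output='complex')      # A = Q Lam Q*, Lam upper triangular
    Rt = Q.conj().T @ R @ Q                  # transformed right-hand side
    Y = np.zeros((n, n), dtype=complex)
    for k in range(n):
        # column k: (Lam* + Lam[k,k] I) y_k = Rt[:,k] - sum_{i<k} Lam[i,k] y_i
        rhs = Rt[:, k] - Y[:, :k] @ Lam[:k, k]
        Y[:, k] = solve_triangular(Lam.conj().T + Lam[k, k] * np.eye(n),
                                   rhs, lower=True)
    return Q @ Y @ Q.conj().T                # undo the change of variables

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 4))
A = A - (np.linalg.eigvals(A).real.max() + 1.0) * np.eye(4)   # shift to make A stable
R = rng.standard_normal((4, 4))
X = lyap_solve(A, R)
resid = np.linalg.norm(A.conj().T @ X + X @ A - R)
print(resid)
```

Each triangular solve has the values λ̄i + λk on its diagonal, so the procedure succeeds exactly when LA is invertible.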


Lyapunov Operator: Integral Formula

If A is stable (all eigenvalues have negative real parts), then λ̄i + λj ≠ 0 for all combinations. Hence LA is an invertible linear operator. We can

explicitly solve the equation

LA (X) = −Q

for X using an integral formula.

Theorem 64 Let A ∈ Fn×n be given, and suppose that A is stable. Then LA is invertible, and for any Q ∈ Fn×n, the unique X ∈ Fn×n solving LA (X) = −Q is given by

X = ∫₀^∞ e^{A∗τ} Q e^{Aτ} dτ

Proof: For each t ≥ 0, define

S(t) := ∫₀^t e^{A∗τ} Q e^{Aτ} dτ

Since A is stable, the integrand is made up of decaying exponentials, and S(t) has a well-defined limit as t → ∞, which we have already denoted X := lim_{t→∞} S(t). For any t ≥ 0,

A∗S(t) + S(t)A = ∫₀^t ( A∗e^{A∗τ}Qe^{Aτ} + e^{A∗τ}Qe^{Aτ}A ) dτ
               = ∫₀^t d/dτ ( e^{A∗τ}Qe^{Aτ} ) dτ
               = e^{A∗t}Qe^{At} − Q

Taking limits on both sides (the first term decays to 0 since A is stable) gives

A∗X + XA = −Q

as desired. ♯
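A quick numerical check of the integral formula, not from the notes and with hypothetical data: approximate the integral by the trapezoid rule and compare against a direct Lyapunov solve (NumPy/SciPy assumed; for real A, e^{A∗τ} = (e^{Aτ})ᵀ).

```python
import numpy as np
from scipy.linalg import expm, solve_continuous_lyapunov

A = np.array([[-1.0, 2.0], [0.0, -3.0]])   # a stable example (hypothetical data)
Q = np.array([[2.0, 0.0], [0.0, 1.0]])

# direct solve of A^T X + X A = -Q (scipy solves a x + x a^T = q)
X_lyap = solve_continuous_lyapunov(A.T, -Q)

# trapezoid-rule approximation of X = integral of e^{A^T tau} Q e^{A tau}
dt, T = 1e-3, 30.0
P = expm(A * dt)                 # one-step propagator e^{A dt}
Phi = np.eye(2)                  # Phi = e^{A tau}
X_int = np.zeros((2, 2))
for _ in range(int(T / dt)):
    f0 = Phi.T @ Q @ Phi
    Phi = Phi @ P
    f1 = Phi.T @ Q @ Phi
    X_int += 0.5 * dt * (f0 + f1)

err = np.linalg.norm(X_int - X_lyap)
print(err)
```

The truncation at T = 30 is harmless because the integrand decays exponentially, exactly as the proof uses.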


Lyapunov Operator: More Facts

A few other useful facts.

Theorem 65 Suppose A, Q ∈ Fn×n are given. Assume that A is stable and Q = Q∗ ≥ 0. Then the pair (A, Q) is observable if and only if

X := ∫₀^∞ e^{A∗τ} Q e^{Aτ} dτ > 0

Proof: ⇐ Suppose not. Then there is an xo ∈ Fn, xo ≠ 0, such that

Q e^{At} xo = 0

for all t ≥ 0. Consequently xo∗ e^{A∗t} Q e^{At} xo = 0 for all t ≥ 0. Integrating gives

0 = ∫₀^∞ ( xo∗ e^{A∗t} Q e^{At} xo ) dt
  = xo∗ ( ∫₀^∞ e^{A∗t} Q e^{At} dt ) xo
  = xo∗ X xo

Since xo ≠ 0, X is not positive definite.

⇒ Suppose that X is not positive definite (note that by virtue of the definition, X is at least positive semidefinite). Then there exists xo ∈ Fn, xo ≠ 0, such that xo∗Xxo = 0. Using the integral form for X (since A is stable by assumption) and the fact that Q ≥ 0 gives

∫₀^∞ ‖ Q^{1/2} e^{Aτ} xo ‖² dτ = 0

The integrand is nonnegative and continuous, hence it must be 0 for all τ ≥ 0. Hence Q e^{Aτ} xo = 0 for all τ ≥ 0. Since xo ≠ 0, this implies that (A, Q) is not observable. ♯
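Theorem 65 can be seen numerically. This sketch (hypothetical pairs, NumPy/SciPy assumed) computes the Gramian X for an observable and an unobservable output matrix and inspects its smallest eigenvalue:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

A = np.array([[-1.0, 1.0], [0.0, -2.0]])   # stable (hypothetical data)
C_obs = np.array([[1.0, 0.0]])             # (A, C^T C) is an observable pair
C_unobs = np.array([[0.0, 1.0]])           # here rank [C; CA] = 1: unobservable

mins = []
for C in (C_obs, C_unobs):
    Q = C.T @ C
    X = solve_continuous_lyapunov(A.T, -Q)   # A^T X + X A = -Q
    mins.append(np.linalg.eigvalsh(X).min())
print(mins)
```

The first minimum eigenvalue is strictly positive; the second is zero (to roundoff), matching the theorem.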


Lyapunov Operator: More Facts

Theorem 66 Let (A, C) be detectable and suppose X is any solution to A∗X + XA = −C∗C (there is no a priori assumption that LA is an invertible operator, hence there could be multiple solutions). Then X ≥ 0 if and only if A is stable.

Proof ⇒ Suppose not. Then there is a v ∈ Cn, v ≠ 0, λ ∈ C, Re(λ) ≥ 0 with Av = λv (and v∗A∗ = λ̄v∗). Since (A, C) is detectable and the eigenvalue λ is in the closed right-half plane, Cv ≠ 0. Note that

−‖Cv‖² = −v∗C∗Cv
       = v∗ (A∗X + XA) v
       = (λ̄ + λ) v∗Xv
       = 2 Re(λ) v∗Xv

Since ‖Cv‖ > 0 and Re(λ) ≥ 0, we must actually have Re(λ) > 0 and v∗Xv < 0. Hence X is not positive semidefinite.

⇐ Since A is stable, there is only one solution to the equation A∗X + XA = −C∗C; moreover, it is given by the integral formula

X = ∫₀^∞ e^{A∗τ} C∗C e^{Aτ} dτ

which is clearly positive semidefinite. ♯


Lyapunov Solution Ordering

Finally, some simple results about the ordering of Lyapunov solutions.

Let A be stable, so LA is guaranteed to be invertible. Use LA⁻¹(−Q) to denote the unique solution X to the equation A∗X + XA = −Q.

Lemma 67 If Q1∗ = Q1 ≥ Q2 = Q2∗, then

LA⁻¹(−Q1) ≥ LA⁻¹(−Q2)

Lemma 68 If Q1∗ = Q1 > Q2 = Q2∗, then

LA⁻¹(−Q1) > LA⁻¹(−Q2)

Proofs Let X1 and X2 be the two solutions; obviously

A∗X1 + X1A = −Q1
A∗X2 + X2A = −Q2
A∗(X1 − X2) + (X1 − X2)A = −(Q1 − Q2)

Since A is stable, the integral formula applies, and

X1 − X2 = ∫₀^∞ e^{A∗τ} (Q1 − Q2) e^{Aτ} dτ

Obviously, if Q1 − Q2 ≥ 0, then X1 − X2 ≥ 0, which proves Lemma 67. If Q1 − Q2 > 0, then the pair (A, Q1 − Q2) is observable, and by Theorem 65, X1 − X2 > 0, proving Lemma 68. ♯

Warning: The converses of Lemma 67 and Lemma 68 are NOT true.
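A small demonstration of the forward direction of Lemma 68 (hypothetical data, NumPy/SciPy assumed):

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

A = np.array([[-1.0, 0.5], [0.0, -2.0]])        # stable (hypothetical data)
Q2 = np.eye(2)
Q1 = Q2 + np.array([[1.0, 0.2], [0.2, 0.5]])    # Q1 - Q2 positive definite

# X = L_A^{-1}(-Q) solves A^T X + X A = -Q
X1 = solve_continuous_lyapunov(A.T, -Q1)
X2 = solve_continuous_lyapunov(A.T, -Q2)
gap = np.linalg.eigvalsh(X1 - X2).min()
print(gap)
```

The smallest eigenvalue of X1 − X2 comes out strictly positive, i.e., X1 > X2.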


Algebraic Riccati Equation

The algebraic Riccati equation (A.R.E.) is

AᵀX + XA + XRX − Q = 0    (7.20)

where A, R, Q ∈ Rn×n are given matrices, with R = Rᵀ and Q = Qᵀ, and solutions X ∈ Cn×n are sought. We will primarily be interested in solutions X of (7.20) which satisfy

A + RX is Hurwitz

and are called stabilizing solutions of the Riccati equation.

Associated with the data A, R, and Q, we define the Hamiltonian matrix H ∈ R2n×2n by

H := [ A  R ; Q  −Aᵀ ]    (7.21)


Invariant Subspace Definition

Suppose A ∈ Cn×n. Then (via matrix-vector multiplication), we view A as

a linear map from Cn → Cn.

A subspace V ⊂ Cn is said to be “A-invariant” if for every vector v ∈ V, the

vector Av ∈ V too.

In terms of matrices, suppose that the columns of W ∈ Cn×m are a basis for

V. In other words, they are linearly independent, and span the space V, so

V = {Wη : η ∈ Cm}

Then V being A-invariant is equivalent to the existence of a matrix S ∈ Cm×m

such that

AW = WS

The matrix S is the matrix representation of A restricted to V, referred to

the basis W . Clearly, since W has linearly independent columns, the matrix

S is unique.

If (Q, Λ) is the Schur decomposition of A, then for any k with 1 ≤ k ≤ n, the first k columns of Q span a k-dimensional invariant subspace of A, as seen below:

A [Q1 Q2] = [Q1 Q2] [ Λ11  Λ12 ; 0  Λ22 ]

Clearly, AQ1 = Q1Λ11.
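This is directly checkable in code (a sketch, not from the notes; NumPy/SciPy assumed). Using the complex Schur form so that any leading block size k is valid:

```python
import numpy as np
from scipy.linalg import schur

rng = np.random.default_rng(2)
A = rng.standard_normal((5, 5))

# complex Schur form A = Q Lam Q*, Lam upper triangular
Lam, Qmat = schur(A, output='complex')
k = 2
Q1 = Qmat[:, :k]                 # first k Schur vectors: a basis for V
S = Lam[:k, :k]                  # representation of A restricted to V
resid = np.linalg.norm(A @ Q1 - Q1 @ S)
print(resid)
```

The residual of AQ1 = Q1Λ11 is at roundoff level, confirming that span(Q1) is A-invariant with S = Λ11.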


Eigenvectors in Invariant Subspaces

Note that since S ∈ Cm×m has at least one eigenvector, say ξ ≠ 0m, with associated eigenvalue λ, it follows that Wξ ∈ V is an eigenvector of A:

A(Wξ) = AWξ = WSξ = W(λξ) = λ(Wξ)

Hence, every nontrivial invariant subspace of A contains an eigen-

vector of A.


Stable Invariant Subspace Definition

If S is Hurwitz, then V is a “stable, A-invariant” subspace.

Note that while the matrix S depends on the particular choice of the basis for

V (ie., the specific columns of W ), the eigenvalues of S are only dependent

on the subspace V, not the basis choice.

To see this, suppose that the columns of W̃ ∈ Cn×m span the same space as the columns of W. Obviously, there is a matrix T ∈ Cm×m such that W = W̃T. The linear independence of the columns of W (and W̃) implies that T is invertible. Also,

AW̃ = AWT⁻¹ = WST⁻¹ = W̃ (TST⁻¹) =: W̃S̃

Clearly, the eigenvalues of S and S̃ are the same.

The linear operator A|V is the transformation from V → V defined as A,

simply restricted to V.

Since V is A-invariant, this is well-defined. Clearly, S is the matrix repre-

sentation of this operator, using the columns of W as the basis choice.


AᵀX + XA + XRX − Q = 0: Invariant Subspace

The first theorem gives a way of constructing solutions to (7.20) in terms of invariant subspaces of H.

Theorem 69 Let V ⊂ C2n be an n-dimensional invariant subspace of H (H is a linear map from C2n to C2n), and let X1 ∈ Cn×n, X2 ∈ Cn×n be two complex matrices such that

V = span [ X1 ; X2 ]

If X1 is invertible, then X := X2X1⁻¹ is a solution to the algebraic Riccati equation, and spec(A + RX) = spec(H|V).

Proof: Since V is H-invariant, there is a matrix Λ ∈ Cn×n with

[ A  R ; Q  −Aᵀ ] [ X1 ; X2 ] = [ X1 ; X2 ] Λ    (7.22)

which gives

AX1 + RX2 = X1Λ    (7.23)

and

QX1 − AᵀX2 = X2Λ    (7.24)

Pre- and postmultiplying equation (7.23) by X2X1⁻¹ and X1⁻¹ respectively gives

(X2X1⁻¹) A + (X2X1⁻¹) R (X2X1⁻¹) = X2ΛX1⁻¹    (7.25)

and postmultiplying (7.24) by X1⁻¹ gives

Q − Aᵀ (X2X1⁻¹) = X2ΛX1⁻¹    (7.26)

Combining (7.25) and (7.26) yields

Aᵀ(X2X1⁻¹) + (X2X1⁻¹)A + (X2X1⁻¹)R(X2X1⁻¹) − Q = 0

so X2X1⁻¹ is indeed a solution. Equation (7.23) implies that A + R(X2X1⁻¹) = X1ΛX1⁻¹, and hence they have the same eigenvalues. But, by definition, Λ is a matrix representation of the map H|V, so the theorem is proved. ♯


AᵀX + XA + XRX − Q = 0: Invariant Subspace

The next theorem shows that the specific choice of matrices X1 and X2 which span V is not important.

Theorem 70 Let V, X1, and X2 be as above, and suppose there are two other matrices X̃1, X̃2 ∈ Cn×n such that the columns of [ X̃1 ; X̃2 ] also span V. Then X̃1 is invertible if and only if X1 is invertible. Consequently, both X2X1⁻¹ and X̃2X̃1⁻¹ are solutions to (7.20), and in fact they are equal.

Remark: Hence, the question of whether or not X1 is invertible is a property of the subspace, not of the particular basis choice for the subspace. Therefore, we say that an n-dimensional subspace V has the invertibility property if in any basis, the top portion is invertible. In the literature, this is often called the complementary property.

Proof: Since both sets of columns span the same n-dimensional subspace, there is an invertible K ∈ Cn×n such that

[ X̃1 ; X̃2 ] = [ X1 ; X2 ] K

Obviously, X̃1 = X1K is invertible if and only if X1 is invertible, and X̃2X̃1⁻¹ = (X2K)(X1K)⁻¹ = X2X1⁻¹. ♯


AᵀX + XA + XRX − Q = 0: Invariant Subspace

As we would hope, the converse of Theorem 69 is true.

Theorem 71 If X ∈ Cn×n is a solution to the A.R.E., then there exist matrices X1, X2 ∈ Cn×n, with X1 invertible, such that X = X2X1⁻¹ and the columns of [ X1 ; X2 ] span an n-dimensional invariant subspace of H.

Proof: Define Λ := A + RX ∈ Cn×n. Multiplying this by X gives XΛ = XA + XRX = Q − AᵀX, the second equality coming from the fact that X is a solution to the Riccati equation. We write these two relations as

[ A  R ; Q  −Aᵀ ] [ In ; X ] = [ In ; X ] Λ

Hence, the columns of [ In ; X ] span an n-dimensional invariant subspace of H, and defining X1 := In and X2 := X completes the proof. ♯

Hence, there is a one-to-one correspondence between n-dimensional invariant subspaces of H with the invertibility property and solutions of the Riccati equation.


AᵀX + XA + XRX − Q = 0: Eigenvalue Distribution of H

In the previous section, we showed that any solution X of the Riccati equation is intimately related to an invariant subspace V of H, and the matrix A + RX is similar (i.e., related by a similarity transformation) to the restriction of H to V, H|V. Hence, the eigenvalues of A + RX will always be a subset of the eigenvalues of H. Recall the particular structure of the Hamiltonian matrix H,

H := [ A  R ; Q  −Aᵀ ]

Since H is real, we know that the eigenvalues of H are symmetric about the real axis, including multiplicities. The added structure of H gives additional symmetry in the eigenvalues, as follows.

Lemma 72 The eigenvalues of H are symmetric about the origin (and hence

also about the imaginary axis).

Proof: Define J ∈ R2n×2n by

J = [ 0  −I ; I  0 ]

Note that JHJ⁻¹ = −Hᵀ. Also, JH = −HᵀJ = (JH)ᵀ. Let p(λ) be the characteristic polynomial of H. Then

p(λ) := det(λI − H)
      = det(λI − JHJ⁻¹)
      = det(λI + Hᵀ)
      = det(λI + H)
      = (−1)^{2n} det(−λI − H)
      = p(−λ)    (7.27)

Hence the roots of p are symmetric about the origin, and (since H is real) the symmetry about the imaginary axis follows. ♯
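The similarity JHJ⁻¹ = −Hᵀ and the resulting eigenvalue symmetry can be checked directly (a sketch with random symmetric R, Q, not from the notes; NumPy assumed):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 3
A = rng.standard_normal((n, n))
R = rng.standard_normal((n, n)); R = R + R.T     # R = R^T
Q = rng.standard_normal((n, n)); Q = Q + Q.T     # Q = Q^T

H = np.block([[A, R], [Q, -A.T]])
J = np.block([[np.zeros((n, n)), -np.eye(n)],
              [np.eye(n), np.zeros((n, n))]])

sim_err = np.linalg.norm(J @ H @ np.linalg.inv(J) + H.T)   # J H J^{-1} = -H^T
eigs = np.linalg.eigvals(H)
# for every eigenvalue l of H, -l is also an eigenvalue of H
pair_err = max(min(abs(l + m) for m in eigs) for l in eigs)
print(sim_err, pair_err)
```

Note that the identity JHJ⁻¹ = −Hᵀ uses R = Rᵀ and Q = Qᵀ, as in the ARE setup.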


AᵀX + XA + XRX − Q = 0: Hermitian Solutions

Lemma 73 If H has no eigenvalues on the imaginary axis, then H has n eigenvalues in the open right-half plane, and n eigenvalues in the open left-half plane.

Theorem 74 Suppose V is an n-dimensional invariant subspace of H, and X1, X2 ∈ Cn×n form a matrix whose columns span V. Let Λ ∈ Cn×n be a matrix representation of H|V. If λi + λ̄j ≠ 0 for all eigenvalues λi, λj ∈ spec(Λ), then X1∗X2 = X2∗X1.

Proof: Using the matrix J from the proof of Lemma 72, we have that

[X1∗ X2∗] J H [ X1 ; X2 ]

is Hermitian, and equals (−X1∗X2 + X2∗X1) Λ. Define W := −X1∗X2 + X2∗X1. Then W = −W∗, and WΛ + Λ∗W = 0. By the eigenvalue assumption on Λ, the linear map LΛ : Cn×n → Cn×n defined by

LΛ(X) = XΛ + Λ∗X

is invertible, so W = 0. ♯

Corollary 75 Under the same assumptions as in Theorem 74, if X1 is invertible, then X2X1⁻¹ is Hermitian.


AᵀX + XA + XRX − Q = 0: Real Solutions

Real (rather than complex) solutions arise when a symmetry condition is imposed on the invariant subspace.

Theorem 76 Let V be an n-dimensional invariant subspace of H with the invertibility property, and let X1, X2 ∈ Cn×n form a 2n × n matrix whose columns span V. Then V is conjugate symmetric ({v̄ : v ∈ V} = V) if and only if X2X1⁻¹ ∈ Rn×n.

Proof: ← Define X := X2X1⁻¹. By assumption, X ∈ Rn×n, and

span [ In ; X ] = V

so V is conjugate symmetric.

→ Since V is conjugate symmetric, there is an invertible matrix K ∈ Cn×n such that

[ X̄1 ; X̄2 ] = [ X1 ; X2 ] K

Therefore X̄2X̄1⁻¹ = (X2K)(X1K)⁻¹ = X2X1⁻¹, so X2X1⁻¹ equals its own conjugate and is real, as desired. ♯


AᵀX + XA + XRX − Q = 0: Stabilizing Solutions

Recall that for any square matrix M, there is a "largest" stable invariant subspace. That is, there is a stable invariant subspace V∗S that contains every other stable invariant subspace of M. This largest stable invariant subspace is simply the span of all of the eigenvectors and generalized eigenvectors associated with the open-left-half-plane eigenvalues. If the Schur form has the eigenvalues (on the diagonal of Λ) ordered, then it is spanned by the "first" set of columns of Q.

Theorem 77 There is at most one solution X to the Riccati equation such

that A + RX is Hurwitz, and if it does exist, then X is real, and symmetric.

Proof: If X is such a stabilizing solution, then by Theorem 71 it can be constructed from some n-dimensional invariant subspace V of H. If A + RX is stable, then we must also have spec(H|V) contained in the open left-half plane. Recall that the spectrum of H is symmetric about the imaginary axis, so H has at most n eigenvalues in the open left-half plane, and the only possible choice for V is the stable eigenspace of H. By Theorem 70, every solution (if there are any) constructed from this subspace will be the same; hence there is at most one stabilizing solution.

Associated with the stable eigenspace, any Λ as in equation (7.22) will be stable. Hence, if the stable eigenspace has the invertibility property, Corollary 75 gives that X := X2X1⁻¹ is Hermitian. Finally, since H is real, the stable eigenspace of H is indeed conjugate symmetric, so the associated solution X is in fact real and symmetric. ♯


AᵀX + XA + XRX − Q = 0: domSRic

If H in (7.21) has no imaginary-axis eigenvalues, then H has a unique n-dimensional stable invariant subspace VS ⊂ C2n, and since H is real, there exist matrices X1, X2 ∈ Rn×n such that

span [ X1 ; X2 ] = VS

More concretely, there exists a matrix Λ ∈ Rn×n such that

H [ X1 ; X2 ] = [ X1 ; X2 ] Λ,   spec(Λ) ⊂ the open left-half plane

This leads to the following definition

Definition 78 (Definition of domSRic) Consider H ∈ R2n×2n as defined in (7.21). If H has no imaginary-axis eigenvalues, and the matrices X1, X2 ∈ Rn×n for which

span [ X1 ; X2 ]

is the stable, n-dimensional invariant subspace of H satisfy det(X1) ≠ 0, then H ∈ domSRic. For H ∈ domSRic, define RicS(H) := X2X1⁻¹, which is the unique matrix X ∈ Rn×n such that A + RX is stable and

AᵀX + XA + XRX − Q = 0.

Moreover, X = Xᵀ.
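The definition translates directly into a computation. The sketch below is not from the notes; it uses hypothetical LQR-type data with R = −BBᵀ and Q = −CᵀC in the notation of (7.21), takes the real Schur form of H ordered with left-half-plane eigenvalues first, reads off X1, X2, and forms X2X1⁻¹; SciPy's Riccati solver gives an independent check (NumPy/SciPy assumed).

```python
import numpy as np
from scipy.linalg import schur, solve_continuous_are

A = np.array([[0.0, 1.0], [-2.0, -1.0]])   # hypothetical data
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
n = 2

# Hamiltonian for A^T X + X A - X B B^T X + C^T C = 0
H = np.block([[A, -B @ B.T], [-C.T @ C, -A.T]])

# ordered real Schur form: left-half-plane eigenvalues first
T, U, sdim = schur(H, output='real', sort='lhp')
X1, X2 = U[:n, :n], U[n:, :n]
X = X2 @ np.linalg.inv(X1)                 # RicS(H)

resid = np.linalg.norm(A.T @ X + X @ A - X @ B @ B.T @ X + C.T @ C)
X_are = solve_continuous_are(A, B, C.T @ C, np.eye(1))
diff = np.linalg.norm(X - X_are)
print(resid, diff)
```

Here sdim = n confirms that H has exactly n stable eigenvalues, and the computed X is real and symmetric, as Definition 78 asserts.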


AᵀX + XA + XRX − Q = 0: Useful Lemma

Theorem 79 Suppose H (as in (7.21)) does not have any imaginary-axis eigenvalues, R is either positive semidefinite or negative semidefinite, and (A, R) is a stabilizable pair. Then H ∈ domSRic.

Proof: Let [ X1 ; X2 ] form a basis for the stable, n-dimensional invariant subspace of H (this exists, by virtue of the eigenvalue assumption about H). Then

[ A  R ; Q  −Aᵀ ] [ X1 ; X2 ] = [ X1 ; X2 ] Λ    (7.28)

where Λ ∈ Cn×n is a stable matrix. Since Λ is stable, it must be that X1∗X2 = X2∗X1. We simply need to show X1 is invertible. First we prove that Ker X1 is Λ-invariant, that is, the implication

x ∈ Ker X1 ⇒ Λx ∈ Ker X1

Let x ∈ Ker X1. The top row of (7.28) gives RX2 = X1Λ − AX1. Premultiply this by x∗X2∗ and postmultiply by x to get

x∗X2∗RX2x = x∗X2∗ (X1Λ − AX1) x
          = x∗X2∗X1Λx          (since X1x = 0)
          = x∗X1∗X2Λx          (since X2∗X1 = X1∗X2)
          = 0                   (since x∗X1∗ = (X1x)∗ = 0)

Since R ≥ 0 or R ≤ 0, it follows that RX2x = 0. Now, using (7.28) again gives X1Λx = 0, as desired. Hence Ker X1 is a subspace that is Λ-invariant. Now, if Ker X1 ≠ {0}, then there is a vector v ≠ 0 and λ ∈ C, Re(λ) < 0 such that

X1v = 0,   Λv = λv,   RX2v = 0


We also have QX1 − AᵀX2 = X2Λ, from the second row of (7.28). Multiplying by v gives −AᵀX2v = λX2v (using X1v = 0), and taking the conjugate transpose (together with RX2v = 0 and R Hermitian) gives

v∗X2∗ [ A − (−λ̄)I   R ] = 0

Since Re(−λ̄) > 0 and (A, R) is assumed stabilizable, the matrix [ A − (−λ̄)I   R ] has full row rank, so it must be that X2v = 0. But X1v = 0, and [ X1 ; X2 ] has full column rank. Hence v = 0, and Ker X1 is indeed the trivial set {0}. This implies that X1 is invertible, and hence H ∈ domSRic. ♯


AᵀX + XA + XRX − Q = 0: Quadratic Regulator

The next theorem is a main result of LQR theory.

Theorem 80 Let A, B, C be given, with (A, B) stabilizable and (A, C) detectable. Define

H := [ A  −BBᵀ ; −CᵀC  −Aᵀ ]

Then H ∈ domSRic. Moreover, 0 ≤ X := RicS(H), and

Ker X ⊂ Ker [ C ; CA ; … ; CA^{n−1} ]

Proof: Use stabilizability of (A, B) and detectability of (A, C) to show that H has no imaginary-axis eigenvalues. Specifically, suppose v1, v2 ∈ Cn and ω ∈ R are such that

[ A  −BBᵀ ; −CᵀC  −Aᵀ ] [ v1 ; v2 ] = jω [ v1 ; v2 ]

Premultiply the top and bottom rows by v2∗ and v1∗ respectively, combine, and conclude v1 = v2 = 0. Show that if (A, B) is stabilizable, then (A, −BBᵀ) is also stabilizable. At this point, apply Theorem 79 to conclude H ∈ domSRic. Then, rearrange the Riccati equation into a "Lyapunov" equation and use Theorem 64 to show that X ≥ 0. Finally, show that Ker X ⊂ Ker C, and that Ker X is A-invariant. That shows that Ker X ⊂ Ker CA^j for any j ≥ 0. ♯


Derivatives: Linear Maps

Recall that the derivative of f at x is defined to be the number lx such that

lim_{δ→0} [ f(x + δ) − f(x) − lxδ ] / δ = 0

Suppose f : Rn → Rm is differentiable. Then f′(x) is the unique matrix Lx ∈ Rm×n which satisfies

lim_{δ→0n} (1/‖δ‖) ( f(x + δ) − f(x) − Lxδ ) = 0

Hence, while f : Rn → Rm, the value of f′ at a specific location x, f′(x), is a linear operator from Rn to Rm. Hence the function f′ is a map from Rn → L(Rn, Rm).

Similarly, if f : Hn×n → Hn×n is differentiable, then the derivative of f at X is the unique linear operator LX : Hn×n → Hn×n satisfying

lim_{∆→0n×n} (1/‖∆‖) ( f(X + ∆) − f(X) − LX∆ ) = 0


Derivative of Riccati: Lyapunov Operator

f(X) := XDX − A∗X − XA − C

Note,

f(X + ∆) = (X + ∆)D(X + ∆) − A∗(X + ∆) − (X + ∆)A − C
         = XDX − A∗X − XA − C + ∆DX + XD∆ − A∗∆ − ∆A + ∆D∆
         = f(X) − ∆(A − DX) − (A − DX)∗∆ + ∆D∆

For any W ∈ Cn×n, let LW : Hn×n → Hn×n be the Lyapunov operator LW(X) := W∗X + XW. Using this notation,

f(X + ∆) − f(X) − L_{−A+DX}(∆) = ∆D∆

Therefore

lim_{∆→0n×n} (1/‖∆‖) [ f(X + ∆) − f(X) − L_{−A+DX}(∆) ] = lim_{∆→0n×n} (1/‖∆‖) [ ∆D∆ ] = 0

This means that the Lyapunov operator L_{−A+DX} is the derivative of the matrix-valued function

XDX − A∗X − XA − C

at X.
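Since the remainder in the expansion is exactly ∆D∆, this is easy to verify numerically (a sketch with random symmetric data, not from the notes; NumPy assumed, real case so ∗ becomes ᵀ):

```python
import numpy as np

rng = np.random.default_rng(4)
n = 3
A = rng.standard_normal((n, n))
D = rng.standard_normal((n, n)); D = D @ D.T        # symmetric
Cm = rng.standard_normal((n, n)); Cm = Cm + Cm.T    # symmetric "C"
X = rng.standard_normal((n, n)); X = X + X.T        # symmetric base point

def f(Y):
    return Y @ D @ Y - A.T @ Y - Y @ A - Cm

def L(Delta):                      # the claimed derivative L_{-A+DX}
    W = -A + D @ X
    return W.T @ Delta + Delta @ W

Delta = 1e-3 * rng.standard_normal((n, n))
Delta = Delta + Delta.T
remainder = f(X + Delta) - f(X) - L(Delta)
err = np.linalg.norm(remainder - Delta @ D @ Delta)   # remainder equals Delta D Delta
print(err)
```

The printed error is at roundoff level: the first-order part of f(X + ∆) − f(X) is exactly L_{−A+DX}(∆).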


Iterative Algorithms: Roots

Suppose f : R → R is differentiable. An iterative algorithm to find a root of f(x) = 0 is

x_{k+1} = x_k − f(x_k)/f′(x_k)

Suppose f : Rn → Rn is differentiable. Then, to first order,

f(x + δ) ≈ f(x) + L_xδ

An iterative algorithm to find a "root" of f(x) = 0n is obtained by setting the first-order approximation to zero and solving for δ, yielding the update law (with x_{k+1} = x_k + δ)

x_{k+1} = x_k − L_{x_k}⁻¹[f(x_k)]

Implicitly, it is written as

L_{x_k}[x_{k+1} − x_k] = −f(x_k)    (7.29)

For the Riccati equation

f(X) := XDX − A∗X − XA − C

we have derived that the derivative of f at X involves the Lyapunov operator. The iteration in (7.29) appears as

L_{−A+DX_k}(X_{k+1} − X_k) = −X_kDX_k + A∗X_k + X_kA + C

In detail, this is

−(A − DX_k)∗(X_{k+1} − X_k) − (X_{k+1} − X_k)(A − DX_k) = −X_kDX_k + A∗X_k + X_kA + C

which simplifies (check) to

(A − DX_k)∗X_{k+1} + X_{k+1}(A − DX_k) = −X_kDX_k − C

Of course, questions about the well-posedness and convergence of the iteration must be addressed in detail.
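The iteration is a Lyapunov solve per step, so it is a few lines of code. This sketch uses hypothetical data (A Hurwitz and D ⪰ 0, so X0 = 0 makes A − DX0 Hurwitz and the first step well posed) and checks the limit against SciPy's Riccati solver; NumPy/SciPy assumed, and `solve_continuous_lyapunov(a, q)` solves aX + Xaᵀ = q.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov, solve_continuous_are

# hypothetical data for X D X - A^T X - X A - C = 0
A = np.array([[-1.0, 2.0], [0.0, -3.0]])   # Hurwitz
D = np.eye(2)                               # D >= 0
C = np.eye(2)

Xk = np.zeros((2, 2))                       # X0 = 0: A - D X0 = A is Hurwitz
for _ in range(30):
    M = A - D @ Xk
    # (A - D Xk)^T X_{k+1} + X_{k+1} (A - D Xk) = -Xk D Xk - C
    Xk = solve_continuous_lyapunov(M.T, -Xk @ D @ Xk - C)

resid = np.linalg.norm(Xk @ D @ Xk - A.T @ Xk - Xk @ A - C)

# X D X - A^T X - X A - C = 0  <=>  A^T X + X A - X D X + C = 0,
# which with D = I is the ARE scipy solves for B = I, R = I, Q = C
X_are = solve_continuous_are(A, np.eye(2), C, np.eye(2))
diff = np.linalg.norm(Xk - X_are)
print(resid, diff)
```

Being a Newton method, the iteration converges quadratically once close, so 30 steps is far more than enough here; the well-posedness of every step is exactly what the induction on the following pages establishes.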


Comparisons: Gohberg, Lancaster and Rodman

Theorems 2.1-2.3 in "On Hermitian Solutions of the Symmetric Algebraic Riccati Equation," by Gohberg, Lancaster and Rodman, SIAM Journal on Control and Optimization, vol. 24, no. 6, Nov. 1986, are the basis for comparing equality and inequality solutions of Riccati equations. The definitions, theorems and main ideas are below.

Suppose that A ∈ Rn×n, C ∈ Rn×n, D ∈ Rn×n, with D = Dᵀ ⪰ 0, C = Cᵀ, and (A, D) stabilizable. Consider the matrix equation

XDX − AᵀX − XA − C = 0    (7.30)

Definition 81 A symmetric solution X+ = X+ᵀ of (7.30) is maximal if X+ ⪰ X for any other symmetric solution X of (7.30). Check: Maximal solutions (if they exist) are unique.

Theorem 82 If there is a symmetric solution to (7.30), then there is a maximal solution X+. The maximal solution satisfies

max_i Re λi(A − DX+) ≤ 0

Theorem 83 If there is a symmetric solution to (7.30), then there is a sequence {Xj}∞j=1 such that for each j, Xj = Xjᵀ and

0 ⪯ XjDXj − AᵀXj − XjA − C
Xj ⪰ X for any X = Xᵀ solving (7.30)
A − DXj is Hurwitz
lim_{j→∞} Xj = X+

Theorem 84 If there are symmetric solutions to (7.30), then for every symmetric C̃ ⪰ C, there exist symmetric solutions (and hence a maximal solution X̃+) to

XDX − AᵀX − XA − C̃ = 0

Moreover, the maximal solutions are related by X̃+ ⪰ X+.


Identities

For any Hermitian matrices Y and Ỹ, it is trivial to verify

Y(A − DY) + (A − DY)∗Y + YDY
= Y(A − DỸ) + (A − DỸ)∗Y + ỸDỸ − (Y − Ỹ)D(Y − Ỹ)    (7.31)

Let X denote any Hermitian solution (if one exists) to the ARE,

XDX − A∗X − XA − C = 0    (7.32)

This (the existence of a Hermitian solution) is the departure point for all that will eventually follow.

For any Hermitian matrix Z,

−C = −XDX + A∗X + XA
   = X(A − DX) + (A − DX)∗X + XDX
   = X(A − DZ) + (A − DZ)∗X + ZDZ − (X − Z)D(X − Z)    (7.33)

This exploits the fact that X solves the original Riccati equation, and uses the identity (7.31) with Y = X, Ỹ = Z.


Iteration

Take any C̃ ⪰ C (which could be C itself, for instance). Also, let X0 = X0∗ ⪰ 0 be chosen so that A − DX0 is Hurwitz. Check: Why is this possible?

Iterate (if possible, which we will show it is) as

Xν+1(A − DXν) + (A − DXν)∗Xν+1 = −XνDXν − C̃    (7.34)

Note that by the Hurwitz assumption on A − DX0, at least the first iteration is possible (ν = 0). Also, recall that the iteration is analogous to a first-order algorithm for root finding, x_{n+1} := x_n − f(x_n)/f′(x_n).

Use Z := Xν in equation (7.33), and subtract from the iteration equation (7.34), leaving

(Xν+1 − X)(A − DXν) + (A − DXν)∗(Xν+1 − X)
= −(X − Xν)D(X − Xν) − (C̃ − C)    (7.35)

which is valid whenever the iteration equation is valid.


Main Induction

Now for the induction: assume Xν = Xν∗ and A − DXν is Hurwitz. Note that this is true for ν = 0.

Since A − DXν is Hurwitz, there is a unique solution Xν+1 to equation (7.34). Moreover, since A − DXν is Hurwitz and the right-hand side of (7.35) is ⪯ 0, it follows that Xν+1 − X ⪰ 0 (note that X0 − X is not necessarily positive semidefinite).

Rewrite (7.31) with Y = Xν+1 and Ỹ = Xν:

Xν+1(A − DXν+1) + (A − DXν+1)∗Xν+1 + Xν+1DXν+1
= Xν+1(A − DXν) + (A − DXν)∗Xν+1 + XνDXν − (Xν+1 − Xν)D(Xν+1 − Xν)

By (7.34), the first three terms on the right-hand side equal −C̃. Rewriting,

−C̃ = Xν+1(A − DXν+1) + (A − DXν+1)∗Xν+1
     + Xν+1DXν+1 + (Xν+1 − Xν)D(Xν+1 − Xν)

Writing (7.33) with Z = Xν+1 gives

−C = X(A − DXν+1) + (A − DXν+1)∗X
    + Xν+1DXν+1 − (X − Xν+1)D(X − Xν+1)

Subtracting these yields

−(C̃ − C) = (Xν+1 − X)(A − DXν+1) + (A − DXν+1)∗(Xν+1 − X)
          + (Xν+1 − Xν)D(Xν+1 − Xν) + (X − Xν+1)D(X − Xν+1)    (7.36)

Now suppose that Re(λ) ≥ 0 and v ∈ Cn are such that (A − DXν+1)v = λv. Pre- and postmultiply (7.36) by v∗ and v respectively. This gives

−v∗(C̃ − C)v = 2 Re(λ) v∗(Xν+1 − X)v + ‖D^{1/2}(Xν+1 − Xν)v‖² + ‖D^{1/2}(X − Xν+1)v‖²

The left-hand side is ≤ 0, while each term on the right-hand side is ≥ 0. Hence, both sides are in fact equal to zero, and therefore

v∗(Xν+1 − Xν)D(Xν+1 − Xν)v = 0.


Since D � 0, it follows that D (Xν+1 −Xν) v = 0, which means DXν+1v =

DXνv. Hence

λv = (A−DXν+1) v = (A−DXν) v

But A−DXν is Hurwitz, and Re(λ) ≥ 0, so v = 0n as desired. This means

A−DXν+1 is Hurwitz.

Summarizing:

• Suppose there exists a hermitian X = X∗ solving

XDX − A∗X −XA− C = 0

• Let X0 = X∗0 � 0 be chosen so A−DX0 is Hurwitz

• Let C be a Hermitian matrix with C � C.

• Iteratively define (if possible) a sequence {Xν}ν≥1 via

Xν+1 (A − DXν) + (A − DXν)∗ Xν+1 = −Xν D Xν − C̄

The following is true:

• Given the choice of X0 and C̄, the sequence {Xν}ν≥1 is well-defined,

unique, and has Xν = Xν∗.

• For each ν ≥ 0, A−DXν is Hurwitz

• For each ν ≥ 1, Xν ⪰ X.

It remains to show that for ν ≥ 1, Xν+1 ⪯ Xν (although it is not necessarily

true that X1 ⪯ X0).


Decreasing Proof

For ν ≥ 1, apply (7.31) with Y = Xν and Ȳ = Xν−1, and substitute the

iteration equation (7.34), shifted back one step, giving

Xν (A − DXν) + (A − DXν)∗ Xν + Xν D Xν
    = −C̄ − (Xν − Xν−1) D (Xν − Xν−1)

The iteration equation is

Xν+1 (A − DXν) + (A − DXν)∗ Xν+1 = −Xν D Xν − C̄

Subtracting leaves

(Xν − Xν+1) (A − DXν) + (A − DXν)∗ (Xν − Xν+1)
    = −(Xν − Xν−1) D (Xν − Xν−1)

The fact that A − DXν is Hurwitz and D ⪰ 0 implies that Xν − Xν+1 ⪰ 0.

Hence, we have shown that for all ν ≥ 1,

X ⪯ Xν+1 ⪯ Xν


Monotonic Matrix Sequences: Convergence

Every nonincreasing sequence of real numbers which is bounded below has

a limit. Specifically, if {xk}k≥0 is a sequence of real numbers, and there is a

real number γ such that xk ≥ γ and xk+1 ≤ xk for all k, then there is a real

number x such that

lim k→∞ xk = x

A similar result holds for symmetric matrices. Specifically, if {Xk}k≥0 is a

sequence of real, symmetric matrices, and there is a real symmetric matrix

Γ such that Xk ⪰ Γ and Xk+1 ⪯ Xk for all k, then there is a real symmetric

matrix X such that

lim k→∞ Xk = X

This is proven by first considering the sequences {e_i^T Xk e_i}k≥0, which shows

that all of the diagonal entries of the sequence have limits. Then look at

{(e_i + e_j)^T Xk (e_i + e_j)}k≥0 to conclude convergence of the off-diagonal terms.
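The diagonal-plus-polarization device in the proof can be checked numerically. In this sketch (the matrices Gamma and P below are illustrative choices, not from the notes), each scalar sequence is nonincreasing and bounded below, and the off-diagonal entry of the limit is recovered from three such scalar sequences:

```python
import numpy as np

Gamma = np.array([[1.0, 0.5], [0.5, 2.0]])   # lower bound (real symmetric)
P = np.array([[2.0, 1.0], [1.0, 3.0]])       # P > 0, so X_k decreases toward Gamma
X = lambda k: Gamma + P / (k + 1)            # X_k >= Gamma and X_{k+1} <= X_k

e0, e1 = np.eye(2)                           # standard basis vectors
k = 1_000_000                                # a large index, approximating the limit

# diagonal sequences e_i^T X_k e_i converge to the diagonals of the limit
assert abs(e0 @ X(k) @ e0 - Gamma[0, 0]) < 1e-5

# off-diagonal entry recovered from (e_i + e_j)^T X_k (e_i + e_j) by polarization:
# X_ij = ((e_i+e_j)^T X (e_i+e_j) - e_i^T X e_i - e_j^T X e_j) / 2
mixed = (e0 + e1) @ X(k) @ (e0 + e1)
off = 0.5 * (mixed - e0 @ X(k) @ e0 - e1 @ X(k) @ e1)
assert abs(off - Gamma[0, 1]) < 1e-5
```

The polarization identity used in the last step is exact for any symmetric matrix, which is why scalar convergence along e_i, e_j, and e_i + e_j suffices for entrywise convergence.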


Limit of Sequence: Maximal Solution

The sequence {Xν}ν≥0, generated by (7.34), has a limit

lim ν→∞ Xν =: X̄+

Remember that C̄ was any Hermitian matrix ⪰ C.

Facts:

• Since each A − DXν is Hurwitz, in passing to the limit we may get

eigenvalues with zero real parts, hence

max_i Re λi (A − DX̄+) ≤ 0

• Since each Xν = Xν∗, it follows that X̄+∗ = X̄+.

• Since each Xν ⪰ X (where X is any Hermitian solution to the equality

XDX − A∗X − XA − C = 0, equation (7.32)), it follows by passing to

the limit that X̄+ ⪰ X.

• The iteration equation is

Xν+1 (A − DXν) + (A − DXν)∗ Xν+1 = −Xν D Xν − C̄

Taking limits yields

X̄+ (A − DX̄+) + (A − DX̄+)∗ X̄+ = −X̄+ D X̄+ − C̄

which has a few cancellations, leaving

X̄+ A + A∗ X̄+ − X̄+ D X̄+ + C̄ = 0

• Finally, if X+ denotes the above limit when C̄ = C, it follows that the

above ideas hold, so X+ ⪰ X for all Hermitian solutions X: the limit X+ is the maximal solution.

• And really finally, since X+ is a Hermitian solution to (7.32) with C, the

properties of X̄+ imply that X̄+ ⪰ X+.
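As a numerical sanity check on the iteration and its limit, the following sketch (assuming SciPy is available; the matrices A, B, C and the helper name riccati_iteration are illustrative choices, not from the notes) runs the iteration with C̄ = C via repeated Lyapunov solves, then compares the limit against SciPy's direct Riccati solver, which returns the stabilizing (hence maximal) solution of A∗X + XA − XBR⁻¹B∗X + Q = 0; choosing D = BB∗ and Q = C matches equation (7.32).

```python
import numpy as np
from scipy.linalg import solve_continuous_are, solve_continuous_lyapunov

def riccati_iteration(A, D, Cbar, X0, steps=30):
    """Iterate X_{v+1}(A - D X_v) + (A - D X_v)* X_{v+1} = -X_v D X_v - Cbar.

    Requires X0 = X0* >= 0 with A - D X0 Hurwitz; each step is one Lyapunov solve.
    """
    X = X0
    for _ in range(steps):
        F = A - D @ X
        assert np.linalg.eigvals(F).real.max() < 0   # closed loop stays Hurwitz
        # solve_continuous_lyapunov(M, Q) solves M Y + Y M* = Q; with M = F*,
        # this is F* Y + Y F = -(X D X + Cbar), i.e. the iteration above
        X = solve_continuous_lyapunov(F.conj().T, -(X @ D @ X + Cbar))
    return X

# Illustrative data: A is Hurwitz, so X0 = 0 is an admissible starting point
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
D = B @ B.T                       # D = D* >= 0
C = np.eye(2)
X_plus = riccati_iteration(A, D, C, np.zeros((2, 2)))

# Direct solver for A*X + XA - X B R^{-1} B* X + Q = 0 with R = I, Q = C
X_are = solve_continuous_are(A, B, C, np.eye(1))
assert np.allclose(X_plus, X_are, atol=1e-8)
assert np.linalg.eigvals(A - D @ X_plus).real.max() < 0   # A - D X+ Hurwitz
```

This iteration is essentially the Kleinman scheme: each step solves a Lyapunov equation with the current Hurwitz closed-loop matrix, and the iterates decrease monotonically to the maximal solution.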


Analogies with Scalar Case dx² − 2ax − c = 0

Matrix Statement                    Scalar Specialization
D = D∗ ⪰ 0                          d ∈ R, with d ≥ 0
(A, D) stabilizable                 a < 0 or d > 0
existence of Hermitian solution     a² + cd ≥ 0
Maximal solution X+                 x+ = (a + √(a² + cd))/d   for d > 0
Maximal solution X+                 x+ = −c/(2a)              for d = 0
A − DX+                             −√(a² + cd)               for d > 0
A − DX+                             a                         for d = 0
A − DX0 Hurwitz                     x0 > a/d                  for d > 0
A − DX0 Hurwitz                     a < 0                     for d = 0
A − DX Hurwitz                      x > a/d                   for d > 0
A − DX Hurwitz                      a < 0                     for d = 0

If d = 0, then the graph of −2ax − c (with a < 0) is a straight line with a

positive slope. There is one (and only one) real solution, at x = −c/(2a).

If d > 0, then the graph of dx² − 2ax − c is an upward-directed parabola.

The minimum occurs at x = a/d. There are either 0, 1, or 2 real solutions to

dx² − 2ax − c = 0.

• If a² + cd < 0, then there are no real solutions (the minimum value is positive)

• If a² + cd = 0, then there is one solution, at x = a/d, which is where the

minimum occurs.

• If a² + cd > 0, then there are two solutions, at x− = a/d − √(a² + cd)/d and

x+ = a/d + √(a² + cd)/d, which are equidistant from the minimum a/d.
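The scalar specialization can be run directly. This sketch (the values a = −1, d = 1, c = 3 are an illustrative choice, not from the notes) shows the iterates decreasing for ν ≥ 1, overshooting on the very first step, and converging to the maximal root x+ = (a + √(a² + cd))/d:

```python
import math

def scalar_iteration(a, d, c, x0, steps=30):
    """Scalar version of the iteration for d x^2 - 2 a x - c = 0:
    2 x_{v+1} (a - d x_v) = -(d x_v^2 + c), requiring a - d x_v < 0 ("Hurwitz")."""
    xs = [x0]
    for _ in range(steps):
        x = xs[-1]
        assert a - d * x < 0                     # scalar Hurwitz condition
        xs.append(-(d * x * x + c) / (2 * (a - d * x)))
    return xs

a, d, c = -1.0, 1.0, 3.0
xs = scalar_iteration(a, d, c, x0=0.0)           # a - d*x0 = -1 < 0, so x0 = 0 admissible
x_plus = (a + math.sqrt(a * a + c * d)) / d      # maximal root; here x+ = 1.0

assert abs(xs[-1] - x_plus) < 1e-10              # convergence to the maximal root
assert xs[0] < xs[1]                             # first step may increase (x1 <= x0 can fail)
assert all(xs[k] >= xs[k + 1] - 1e-12 for k in range(1, len(xs) - 1))  # decreasing, v >= 1
```

The run mirrors the matrix facts above: monotone decrease only begins at ν = 1, and the limit is the larger of the two parabola roots.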
