
MS&E 310 Linear Programming
November 27, 2015

HOMEWORK ASSIGNMENT 4 Solution

1. Exercise 7 (b) of Chapter 5 (L&Y).

Solution: See the solution to Homework 3.

2. Exercise 12 of Chapter 5 (L&Y).

Let the direction (d_x, d_y, d_s) be generated by system (14) in Chapter 5 with \gamma = n/(n+\rho) and \mu = x^T s / n, and let the step size be

\alpha = \frac{\theta\sqrt{\min(Xs)}}{\left\|(XS)^{-1/2}\left(\frac{x^Ts}{n+\rho}\,e - Xs\right)\right\|}, \qquad (1)

where \theta is a positive constant less than 1. Let

x^+ = x + \alpha d_x, \quad y^+ = y + \alpha d_y, \quad s^+ = s + \alpha d_s.

Then, using Exercise 11 and the concavity of the logarithmic function, show that (x^+, y^+, s^+) \in \mathcal{F}^\circ and

\psi_{n+\rho}(x^+, s^+) - \psi_{n+\rho}(x, s) \le -\theta\sqrt{\min(Xs)}\left\|(XS)^{-1/2}\left(e - \frac{n+\rho}{x^Ts}\,Xs\right)\right\| + \frac{\theta^2}{2(1-\theta)}.

Solution: Note that d_x^T d_s = 0, since d_s = -A^T d_y and A d_x = 0.

To prove (x^+, y^+, s^+) \in \mathcal{F}^\circ, we must show that

x^+ > 0, \quad s^+ > 0, \quad Ax^+ = b, \quad c - A^T y^+ = s^+.

The last two equalities are easily verified from the definitions of d_x, d_y and d_s. To prove the two inequalities, start from the first Newton equation:

X^{-1/2}S^{1/2}d_x + S^{-1/2}X^{1/2}d_s = (XS)^{-1/2}\left(\frac{s^Tx}{n+\rho}\,e - Xs\right).


Thus,

\|X^{-1/2}S^{1/2}d_x\|^2 + 2d_x^Td_s + \|S^{-1/2}X^{1/2}d_s\|^2 = \left\|(XS)^{-1/2}\left(\frac{s^Tx}{n+\rho}\,e - Xs\right)\right\|^2. \qquad (2)

Since d_x^Td_s = 0,

\|X^{-1/2}S^{1/2}d_x\|^2 + \|S^{-1/2}X^{1/2}d_s\|^2 = \left\|(XS)^{-1/2}\left(\frac{x^Ts}{n+\rho}\,e - Xs\right)\right\|^2.

Note also that

\|X^{-1}d_x\|^2 = \|(XS)^{-1/2}X^{-1/2}S^{1/2}d_x\|^2 \le \|(XS)^{-1/2}\|^2\,\|X^{-1/2}S^{1/2}d_x\|^2 \le \frac{\|X^{-1/2}S^{1/2}d_x\|^2}{\min(Xs)}

and, similarly, \|S^{-1}d_s\|^2 \le \|S^{-1/2}X^{1/2}d_s\|^2 / \min(Xs), which imply

\|\alpha X^{-1}d_x\|^2 + \|\alpha S^{-1}d_s\|^2 \le \frac{\alpha^2\,\|X^{-1/2}S^{1/2}d_x + S^{-1/2}X^{1/2}d_s\|^2}{\min(Xs)} = \frac{\alpha^2\left\|(XS)^{-1/2}\left(\frac{x^Ts}{n+\rho}\,e - Xs\right)\right\|^2}{\min(Xs)} = \theta^2 < 1. \qquad (3)

Therefore,

x^+ = x + \alpha d_x = X(e + \alpha X^{-1}d_x) > 0,

since x > 0 and, by (3), every component of \alpha X^{-1}d_x is at most \theta < 1 in magnitude, so e + \alpha X^{-1}d_x > 0. Similarly, we can prove s^+ > 0.

Secondly, consider the potential difference:

\psi_{n+\rho}(x^+, s^+) - \psi_{n+\rho}(x, s)
= (n+\rho)\log\left(1 + \frac{\alpha d_s^Tx + \alpha d_x^Ts}{s^Tx}\right) - \sum_{j=1}^n\left(\log\left(1 + \frac{\alpha d_{s_j}}{s_j}\right) + \log\left(1 + \frac{\alpha d_{x_j}}{x_j}\right)\right)
\le (n+\rho)\,\frac{\alpha d_s^Tx + \alpha d_x^Ts}{s^Tx} - \sum_{j=1}^n\left(\log\left(1 + \frac{\alpha d_{s_j}}{s_j}\right) + \log\left(1 + \frac{\alpha d_{x_j}}{x_j}\right)\right)
\le (n+\rho)\alpha\,\frac{d_s^Tx + d_x^Ts}{s^Tx} - \alpha e^T(S^{-1}d_s + X^{-1}d_x) + \frac{\|\alpha S^{-1}d_s\|^2 + \|\alpha X^{-1}d_x\|^2}{2(1-\theta)}
\le (n+\rho)\alpha\,\frac{d_s^Tx + d_x^Ts}{s^Tx} - \alpha e^T(S^{-1}d_s + X^{-1}d_x) + \frac{\theta^2}{2(1-\theta)},

where the first inequality uses the concavity of the logarithm (\log(1+t) \le t), the second follows from Exercise 11 together with (3) (each component |\alpha d_{x_j}/x_j| and |\alpha d_{s_j}/s_j| is at most \theta), and the last uses (3) again.

For simplicity, we let \beta = \frac{n+\rho}{s^Tx}. Then we have

\gamma\mu e = \frac{s^Tx}{n+\rho}\,e = \frac{1}{\beta}\,e.


The sum of the first two terms is

\beta\alpha(d_s^Tx + d_x^Ts) - \alpha e^T(S^{-1}d_s + X^{-1}d_x)
= \beta\alpha e^T(Xd_s + Sd_x) - \alpha e^T(S^{-1}d_s + X^{-1}d_x)
= \alpha\left(\beta e - (XS)^{-1}e\right)^T(Xd_s + Sd_x)
= \alpha\left(\beta e - (XS)^{-1}e\right)^T\left(\frac{1}{\beta}\,e - Xs\right)  (by Newton's equations)
= -\alpha(e - \beta Xs)^T(XS)^{-1}\left(\frac{1}{\beta}\,e - Xs\right)
= -\alpha\beta\left(\frac{1}{\beta}\,e - Xs\right)^T(XS)^{-1}\left(\frac{1}{\beta}\,e - Xs\right)
= -\alpha\beta\left\|(XS)^{-1/2}\left(\frac{1}{\beta}\,e - Xs\right)\right\|^2
= -\beta\theta\sqrt{\min(Xs)}\left\|(XS)^{-1/2}\left(\frac{1}{\beta}\,e - Xs\right)\right\|  (by the definition of \alpha)
= -\theta\sqrt{\min(Xs)}\left\|(XS)^{-1/2}(e - \beta Xs)\right\|
= -\theta\sqrt{\min(Xs)}\left\|(XS)^{-1/2}\left(e - \frac{n+\rho}{x^Ts}\,Xs\right)\right\|.

Then we get the desired inequality:

\psi_{n+\rho}(x^+, s^+) - \psi_{n+\rho}(x, s) \le -\theta\sqrt{\min(Xs)}\left\|(XS)^{-1/2}\left(e - \frac{n+\rho}{x^Ts}\,Xs\right)\right\| + \frac{\theta^2}{2(1-\theta)}.
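As a sanity check (not part of the required proof), the step and the bound can be verified numerically. The sketch below assumes \rho = \sqrt{n}, draws a random strictly feasible interior point, forms the direction by solving the Newton system A d_x = 0, A^T d_y + d_s = 0, S d_x + X d_s = \gamma\mu e - Xs directly, and checks the inequality; the data A, x, s are arbitrary placeholders.

    import numpy as np

    rng = np.random.default_rng(0)
    m, n = 3, 6
    A = rng.standard_normal((m, n))
    x = rng.uniform(0.5, 2.0, n)        # strictly feasible primal point (take b = Ax)
    s = rng.uniform(0.5, 2.0, n)        # strictly feasible dual slack (take c = A^T y + s)
    rho, theta = np.sqrt(n), 0.4

    def potential(x, s):
        # psi_{n+rho}(x, s) = (n + rho) log(x^T s) - sum_j log(x_j s_j)
        return (n + rho) * np.log(x @ s) - np.log(x * s).sum()

    # Newton direction: A dx = 0, A^T dy + ds = 0, S dx + X ds = r,
    # where r = gamma*mu*e - Xs and gamma*mu = x^T s/(n + rho) (broadcast below)
    r = (x @ s) / (n + rho) - x * s
    dy = np.linalg.solve(A @ np.diag(x / s) @ A.T, -A @ (r / s))
    ds = -A.T @ dy
    dx = (r - x * ds) / s
    assert abs(dx @ ds) < 1e-9          # dx^T ds = 0

    v = x * s
    g = np.linalg.norm(r / np.sqrt(v))  # ||(XS)^{-1/2}(x^T s/(n+rho) e - Xs)||
    alpha = theta * np.sqrt(v.min()) / g    # step size (1)
    beta = (n + rho) / (x @ s)

    xp, sp = x + alpha * dx, s + alpha * ds
    assert xp.min() > 0 and sp.min() > 0    # the step stays interior
    lhs = potential(xp, sp) - potential(x, s)
    rhs = -theta * np.sqrt(v.min()) * beta * g + theta**2 / (2 * (1 - theta))
    print(lhs, "<=", rhs)               # the bound of Exercise 12 holds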

3. Exercise 13 of Chapter 5 (L&Y).

Let v = Xs in Exercise 12. Prove that

\sqrt{\min(v)}\left\|V^{-1/2}\left(e - \frac{n+\rho}{e^Tv}\,v\right)\right\| \ge \sqrt{3/4},

where V is the diagonal matrix of v. Thus, the two exercises (i.e., the above inequality and Exercise 12 of Chapter 5) imply

\psi_{n+\rho}(x^+, s^+) - \psi_{n+\rho}(x, s) \le -\theta\sqrt{3/4} + \frac{\theta^2}{2(1-\theta)} = -\delta

for a constant \delta. One can verify that \delta > 0.2 when \theta = 0.4.

Solution:


Define v = Xs and let \mu = e^Tv/n. Then

\left(\sqrt{\min(v)}\left\|V^{-1/2}\left(e - \frac{n+\rho}{e^Tv}\,v\right)\right\|\right)^2
= \min(v)\left\|V^{-1/2}e - \frac{n+\rho}{e^Tv}\,v^{1/2}\right\|^2
= \min(v)\left\|V^{-1/2}e - \frac{n+\rho}{n\mu}\,v^{1/2}\right\|^2
= \min(v)\left\|V^{-1/2}e - \frac{1}{\mu}\,v^{1/2} - \frac{\rho}{n\mu}\,v^{1/2}\right\|^2
= \min(v)\left\|V^{-1/2}e - \frac{1}{\mu}\,v^{1/2}\right\|^2 + \min(v)\left\|\frac{\rho}{n\mu}\,v^{1/2}\right\|^2
  (since the cross term (v^{1/2})^T\left(V^{-1/2}e - \frac{1}{\mu}v^{1/2}\right) = n - \frac{e^Tv}{\mu} = 0)
\ge \min(v)\left\|V^{-1/2}e - \frac{1}{\mu}\,v^{1/2}\right\|^2 + \frac{\min(v)}{\mu}
  (since \rho \ge \sqrt{n} and \|v^{1/2}\|^2 = n\mu)
\ge \min(v)\left(\frac{1}{\sqrt{\min(v)}} - \frac{\sqrt{\min(v)}}{\mu}\right)^2 + \frac{\min(v)}{\mu}
  (keeping only the component j with v_j = \min(v))
= \left(\frac{\mu - \min(v)}{\mu}\right)^2 + \frac{\min(v)}{\mu}
= (1 - t)^2 + t
= 1 - t + t^2,

where t = \min(v)/\mu. This quadratic is minimized at t = 1/2, where it attains the value 3/4, which completes the proof.
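A quick numeric check of the two closing claims (a convenience check, not part of the proof):

    import numpy as np

    # delta = theta*sqrt(3/4) - theta^2/(2(1 - theta)) at theta = 0.4
    theta = 0.4
    delta = theta * np.sqrt(3 / 4) - theta**2 / (2 * (1 - theta))
    print(delta)                              # ~0.213 > 0.2

    # the quadratic 1 - t + t^2 has minimum 3/4 at t = 1/2
    t = np.linspace(0.0, 1.0, 1001)
    assert np.all(1 - t + t**2 >= 3 / 4 - 1e-12)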

4. Exercise 15 of Chapter 5 (L&Y).

Prove Theorem 1, which we restate here. Consider problem (HSDP).

(i) (HSDP) has an optimal solution and its optimal solution set is bounded.

(ii) The optimal value of (HSDP) is zero, and (y, x, \tau, \theta, s, \kappa) \in \mathcal{F}_h implies that (n+1)\theta = x^Ts + \tau\kappa.

(iii) There is an optimal solution (y^*, x^*, \tau^*, \theta^* = 0, s^*, \kappa^*) \in \mathcal{F}_h such that

\begin{pmatrix} x^* + s^* \\ \tau^* + \kappa^* \end{pmatrix} > 0,

which we call a strictly self-complementary solution.


Solution:

(i) It is easy to verify (based on the skew symmetry of the coefficient matrix) that the dual of (HSDP), denoted (HLD), has the same form as (HSDP). Since both (HSDP) and (HLD) are feasible, the optimal value of (HSDP) is finite, and by strong duality (HSDP) has an optimal solution.

To prove the boundedness of the optimal solution set, first note that (HSDP) has a strictly feasible solution y = 0, x = e, \tau = 1, \theta = 1, s = e, \kappa = 1 (recall that (HSDP) is constructed precisely so that this point is feasible), which also means that (HLD) has a strictly feasible solution. Since (HSDP) is feasible and the feasible region of its dual has a non-empty interior, (HSDP) must have a bounded optimal solution set (this is the converse direction of the statement in Exercise 4 of Chapter 5, and the proof is similar to that of Exercise 4).

(ii) This property is a direct corollary of the expression for the primal-dual gap (recall that for the standard LP primal and dual, s^Tx = c^Tx - b^Ty). The details are given below. Let (y, x, \tau, \theta, s, \kappa) be a feasible point for (HSDP) and (y', x', \tau', \theta', s', \kappa') be a feasible solution to (HLD). Then the primal-dual gap is

(n+1)(\theta + \theta') = x^Ts' + s^Tx' + \tau\kappa' + \kappa\tau'.

Since (y, x, \tau, \theta, s, \kappa) is also a feasible point for (HLD), we may pick (y', x', \tau', \theta', s', \kappa') = (y, x, \tau, \theta, s, \kappa); the equation then reads 2(n+1)\theta = 2(x^Ts + \tau\kappa), and dividing by two gives (n+1)\theta = x^Ts + \tau\kappa, which is the desired equality. At optimality the primal-dual gap, i.e., the right-hand side of this relation, must be zero; hence \theta = 0 at optimality, so the optimal value of (HSDP) is zero.

(iii) Note that (HSDP) and (HLD) possess a strictly complementary optimal solution pair: the primal solution maximizes the number of positive components among optimal solutions of (HSDP), and the dual solution maximizes the number of positive components among optimal solutions of (HLD). Since the support (the set of positive components) of a strictly complementary solution is invariant and since (HSDP) and (HLD) are identical, the strictly complementary solution (y^*, x^*, \tau^*, \theta^* = 0, s^*, \kappa^*) for (HSDP) is also a strictly complementary solution for (HLD), and vice versa. Thus, we establish (iii).
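As an illustration of (ii), one can numerically verify that the constructed starting point is feasible and satisfies (n+1)\theta = x^Ts + \tau\kappa. This is a sketch only: it assumes the (HSDP) construction of L&Y Chapter 5 with the standard starting point x^0 = s^0 = e, y^0 = 0, and the LP data A, b, c below are arbitrary placeholders.

    import numpy as np

    rng = np.random.default_rng(1)
    m, n = 3, 5
    A, b, c = rng.standard_normal((m, n)), rng.standard_normal(m), rng.standard_normal(n)

    e = np.ones(n)
    bbar = b - A @ e                  # b - A x^0
    cbar = c - e                      # c - A^T y^0 - s^0
    zbar = c @ e + 1.0                # c^T x^0 + 1 - b^T y^0

    # the standard interior starting point of (HSDP)
    y, x, tau, theta, s, kappa = np.zeros(m), e.copy(), 1.0, 1.0, e.copy(), 1.0

    # residuals of the (HSDP) constraints -- all should be ~0
    print(np.abs(A @ x - b * tau + bbar * theta).max())
    print(np.abs(-A.T @ y + c * tau - cbar * theta - s).max())
    print(abs(b @ y - c @ x + zbar * theta - kappa))
    print(abs(-bbar @ y + cbar @ x - zbar * tau + (n + 1)))

    # the identity of part (ii): both sides equal n + 1 at this point
    print((n + 1) * theta, x @ s + tau * kappa)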

5. Prove Lemma 1 of Chapter 8.2 (L&Y). Let f(x) be differentiable everywhere and satisfy the (first-order) \beta-Lipschitz condition. Then, for any two points x and y,

f(x) - f(y) - \langle\nabla f(y), x - y\rangle \le \frac{\beta}{2}\,|x - y|^2.

Solution:


Let d = x - y and g(t) = f(y + td); then

f(x) - f(y) = g(1) - g(0) = \int_0^1 g'(t)\,dt = \int_0^1 \langle\nabla f(y + td), d\rangle\,dt.

Then we have

f(x) - f(y) - \langle\nabla f(y), x - y\rangle = \int_0^1 \langle\nabla f(y + td) - \nabla f(y), d\rangle\,dt
\le \int_0^1 |\nabla f(y + td) - \nabla f(y)|\,|d|\,dt  (by the Cauchy-Schwarz inequality)
\le \int_0^1 t\beta\,|d|^2\,dt = \frac{\beta}{2}\,|d|^2, \qquad (4)

where the last inequality uses the \beta-Lipschitz condition |\nabla f(y + td) - \nabla f(y)| \le \beta|td|. This proves the desired result. \Box

Remark: Simply using the mean value theorem may not lead to the desired inequality.
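As an illustration (not part of the proof), the bound is easy to check numerically for a convex quadratic f(x) = \frac{1}{2}x^TQx, whose gradient is \beta-Lipschitz with \beta = \lambda_{\max}(Q); the data below are arbitrary:

    import numpy as np

    rng = np.random.default_rng(2)
    n = 5
    B = rng.standard_normal((n, n))
    Q = B @ B.T                          # symmetric positive semidefinite
    beta = np.linalg.eigvalsh(Q).max()   # Lipschitz constant of grad f

    f = lambda z: 0.5 * z @ Q @ z
    grad = lambda z: Q @ z

    for _ in range(1000):
        x, y = rng.standard_normal(n), rng.standard_normal(n)
        lhs = f(x) - f(y) - grad(y) @ (x - y)
        rhs = 0.5 * beta * np.linalg.norm(x - y) ** 2
        assert lhs <= rhs + 1e-9         # the descent-lemma bound holds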

6. For convex quadratic minimization, let the distinct nonzero eigenvalues of the Hessian Q be \lambda_1, \lambda_2, \ldots, \lambda_K, and let the step size in the steepest descent method (SDM) be \alpha_k = 1/\lambda_k, k = 1, \ldots, K. Then, the SDM terminates in K steps.

Solution: The algorithm proceeds as follows:

x_k = x_{k-1} - \alpha_k\nabla f(x_{k-1}) = x_{k-1} - \alpha_k(Qx_{k-1} - b), \quad k = 1, 2, \ldots, K.

Let y_k = x_k - x^*, where x^* = Q^{-1}b is the minimizer (Q is positive definite, since f is convex and all of its eigenvalues are nonzero). Then y_k = y_{k-1} - \alpha_kQy_{k-1} = (I - \frac{1}{\lambda_k}Q)\,y_{k-1}. Therefore,

y_K = \left(I - \frac{1}{\lambda_1}Q\right)\left(I - \frac{1}{\lambda_2}Q\right)\cdots\left(I - \frac{1}{\lambda_K}Q\right)y_0.

Let h(t) = (1 - \frac{t}{\lambda_1})\cdots(1 - \frac{t}{\lambda_K}), a polynomial of degree K with roots \lambda_1, \ldots, \lambda_K.

With a slight abuse of notation, write h(Q) = (I - \frac{1}{\lambda_1}Q)(I - \frac{1}{\lambda_2}Q)\cdots(I - \frac{1}{\lambda_K}Q), so that y_K = h(Q)y_0. We will prove that the n \times n matrix h(Q) is the zero matrix. Indeed, since Q is symmetric, R^n has a basis \{v_1, \ldots, v_n\} of eigenvectors of Q, and the eigenvalue corresponding to each v_j is one of \lambda_1, \ldots, \lambda_K, say \lambda_i. Hence h(Q)v_j = h(\lambda_i)v_j = 0 for every basis vector, so h(Q) must be the zero matrix. This implies y_K = h(Q)y_0 = 0, i.e., x_K = x^*. \Box
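A short numerical illustration of this K-step termination (a sketch; the eigenvalues and dimensions below are arbitrary choices):

    import numpy as np

    rng = np.random.default_rng(3)
    n, K = 8, 3
    lambdas = np.array([1.0, 2.0, 5.0])            # K distinct eigenvalues
    U, _ = np.linalg.qr(rng.standard_normal((n, n)))
    spectrum = lambdas[rng.integers(0, K, size=n)]
    spectrum[:K] = lambdas                         # make sure every lambda_i appears
    Q = U @ np.diag(spectrum) @ U.T                # positive definite Hessian
    b = rng.standard_normal(n)
    x_star = np.linalg.solve(Q, b)

    x = rng.standard_normal(n)
    for lam in lambdas:                            # one step per distinct eigenvalue
        x = x - (1.0 / lam) * (Q @ x - b)          # gradient step with alpha_k = 1/lambda_k
    print(np.linalg.norm(x - x_star))              # ~1e-14: terminated after K steps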
