Lecture 4: Two-stage stochastic linear programs, Optimality Conditions
Pennsylvania State University, February 2, 2015

  • Lecture 4

    Two-stage stochastic linear programs

    Optimality Conditions

    February 2, 2015

  • Uday V. Shanbhag Lecture 4

    Discrete random variables

    Consider the two-stage stochastic linear program with discrete random

    variables:

    (SLP)    minimize_x    c^T x + ∑_{k=1}^K p_k Q(x, ξ_k)

             subject to    Ax = b,  x ≥ 0,

    where Q(x, ξ_k) is the optimal value of SecStage(x, ξ_k), defined as follows:

    (SecStage(x, ξ_k))    minimize_y    q(ξ_k)^T y

                          subject to    W(ξ_k) y = h(ξ_k) − T(ξ_k) x,  y ≥ 0.

    Stochastic Optimization 1
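As a concrete sketch of this formulation, Q(x, ξ_k) can be evaluated by solving one second-stage LP per scenario, here with `scipy.optimize.linprog`. The two-scenario data below (probabilities, q_k, W_k, h_k, T_k) is made up for illustration:

```python
import numpy as np
from scipy.optimize import linprog

def recourse_value(x, q_k, W_k, h_k, T_k):
    """Q(x, xi_k): optimal value of min q_k^T y s.t. W_k y = h_k - T_k x, y >= 0."""
    res = linprog(c=q_k, A_eq=W_k, b_eq=h_k - T_k @ x,
                  bounds=[(0, None)] * len(q_k), method="highs")
    return res.fun

# Hypothetical two-scenario instance with one first-stage variable.
p = [0.5, 0.5]                                   # scenario probabilities
q = [np.array([1.0]), np.array([2.0])]
W = [np.array([[1.0]]), np.array([[1.0]])]
h = [np.array([3.0]), np.array([5.0])]
T = [np.array([[1.0]]), np.array([[1.0]])]

x = np.array([1.0])
# Expected recourse: scenario values are 2 and 8, so Q = 0.5*2 + 0.5*8 = 5.0
Q = sum(pk * recourse_value(x, qk, Wk, hk, Tk)
        for pk, qk, Wk, hk, Tk in zip(p, q, W, h, T))
print(Q)  # 5.0
```

The point of the sketch is the decomposition: the expectation separates over scenarios, so each Q(x, ξ_k) is an independent LP once x is fixed.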


    From results in the previous lecture, suppose that Q(•) has a finite value
    for at least one x̃ ∈ R^n. Then from the Moreau-Rockafellar theorem, we
    have that for every x₀ ∈ dom Q,

    ∂Q(x₀) = −∑_{k=1}^K p_k T_k^T D(x₀, ξ_k),

    where

    D(x₀, ξ_k) ≜ arg max_π { π^T(h_k − T_k x₀) : W_k^T π ≤ q_k }.

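The formula above says that a subgradient of the expected recourse is assembled from scenario-wise dual maximizers: any choice π_k ∈ D(x₀, ξ_k) gives the subgradient −∑_k p_k T_k^T π_k. A minimal sketch (toy data is hypothetical; it assumes scipy's HiGHS backend, whose `res.eqlin.marginals` reports equality-constraint duals as objective sensitivities):

```python
import numpy as np
from scipy.optimize import linprog

def scenario_dual(x, q_k, W_k, h_k, T_k):
    """Return a dual optimum pi_k in D(x, xi_k) for one scenario LP."""
    res = linprog(c=q_k, A_eq=W_k, b_eq=h_k - T_k @ x,
                  bounds=[(0, None)] * len(q_k), method="highs")
    return res.eqlin.marginals  # d(value)/d(rhs) = dual multiplier pi

# Same hypothetical two-scenario instance as before.
p = [0.5, 0.5]
q = [np.array([1.0]), np.array([2.0])]
W = [np.array([[1.0]]), np.array([[1.0]])]
h = [np.array([3.0]), np.array([5.0])]
T = [np.array([[1.0]]), np.array([[1.0]])]

x0 = np.array([1.0])
# Duals are pi_1 = 1, pi_2 = 2, so g = -(0.5*1 + 0.5*2) = [-1.5]
g = -sum(pk * Tk.T @ scenario_dual(x0, qk, Wk, hk, Tk)
         for pk, qk, Wk, hk, Tk in zip(p, q, W, h, T))
print(g)
```

This is the workhorse of cutting-plane methods for (SLP): the dual solutions of the second-stage LPs furnish subgradients of Q at no extra cost.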

    Optimality conditions

    Consider the optimization problem

    min_{x∈X} f(x),

    where X ⊂ R^n and f : R^n → R̄ is an extended-real-valued function.

    Convex case: Suppose that the function f is convex. If f(x̄) is finite at
    some point x̄ ∈ R^n, then the following holds:

    f(x) ≥ f(x̄) for all x ∈ R^n  ⇔  0 ∈ ∂f(x̄).
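A one-line illustration (not from the lecture): take f(x) = |x| on R. Its subdifferential is

```latex
\partial f(x) =
\begin{cases}
\{-1\}, & x < 0,\\
[-1,\,1], & x = 0,\\
\{+1\}, & x > 0,
\end{cases}
\qquad\text{so } 0 \in \partial f(0)
\;\Longrightarrow\; \bar{x} = 0 \text{ is a global minimizer,}
```

even though f is not differentiable at 0.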

    Suppose that X is closed and convex and f is proper* and convex. Consider
    a point x̄ ∈ X ∩ dom f. It follows that f̄(x) ≜ f(x) + 1l_X(x) is convex,

    *A function f is proper if f(x) > −∞ for all x ∈ R^n and its domain is nonempty, where dom f = {x : f(x) < +∞}.


    where

    1l_X(x) = 0 if x ∈ X, and +∞ if x ∉ X.

    Then x̄ is a minimizer of

    min_{x∈X} f(x)

    if and only if x̄ is a minimizer of f̄(x).

    Suppose that

    ri(X) ∩ ri(dom f) ≠ ∅.

    Then by the Moreau-Rockafellar theorem, we have that

    ∂f̄(x̄) = ∂f(x̄) + ∂1l_X(x̄).

    From convex analysis, ∂1l_X(x̄) = N_X(x̄). Consequently, x̄ is an optimal


    solution of the constrained problem, given by

    min_{x∈X} f(x),

    if and only if

    0 ∈ ∂f(x̄) + N_X(x̄).

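A small worked example of the constrained rule (not from the lecture): minimize f(x) = |x| over X = [1, ∞). At x̄ = 1,

```latex
\partial f(1) = \{1\},
\qquad
N_X(1) = \{\, z : z(x - 1) \le 0,\ \forall x \ge 1 \,\} = (-\infty,\, 0],
```

so 0 = 1 + (−1) ∈ ∂f(1) + N_X(1), certifying that x̄ = 1 is optimal; no point of X makes ∂f vanish, which is why the normal-cone term is needed.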

    Back to the optimality conditions of SLP

    Theorem 1 (Optimality conditions of SLP) Let x̄ be a point such that
    x̄ ∈ X and Q(x̄) < +∞. Then x̄ is an optimal solution of (SLP) if and
    only if there exist π_k ∈ D(x̄, ξ_k), k = 1, . . . , K and µ ∈ R^m such that

    ∑_{k=1}^K p_k T_k^T π_k + A^T µ ≤ c,    (1)

    x̄^T( c − ∑_{k=1}^K p_k T_k^T π_k − A^T µ ) = 0.    (2)

    Proof: The necessary and sufficient conditions of optimality for minimizing
    this convex program are given by

    0 ∈ c + ∂Q(x̄) + N_X(x̄),


    where

    N_X(x) ≜ { z : z^T(x′ − x) ≤ 0, ∀ x′ ∈ X }.

    No additional regularity conditions are needed, since Q(•) and X are convex
    and polyhedral. It follows that

    0 ∈ c − ∑_{k=1}^K p_k T_k^T π_k + N_X(x̄),  with π_k ∈ D(x̄, ξ_k).

    Next, we characterize the normal cone of this polyhedral set. Recall that
    N_X(x̄) can be represented as

    N_X(x̄) ≜ { z : z^T(x − x̄) ≤ 0 for all x with Ax = b, x ≥ 0 }.


    But this implies that N_X(x̄) can be written as

    N_X(x̄) ≜ { z : z^T(x̄ − x) ≥ 0 for all x with Ax = b, x ≥ 0 }.

    In other words, z ∈ N_X(x̄) if and only if

    z^T(x − x̄) ≤ 0,  ∀ x ∈ X.

    When X ≜ {x : Ax = b, x ≥ 0}, we have the following sequence of equivalent


    statements:

    z ∈ N_X(x̄)  ⇔  z^T(x − x̄) ≤ 0,  ∀ x ∈ X
                 ⇔  z^T(x − x̄) ≤ 0,  ∀ x ∈ {x : Ax = b, x ≥ 0}
                 ⇔  z^T(x̄ − x) ≥ 0,  ∀ x with Ax = b, x ≥ 0.

    Since z^T(x̄ − x) ≥ 0 for all x satisfying Ax = b, x ≥ 0, it follows that the
    optimal value of the following LP, denoted by (LP), is nonnegative:

    (LP)    min_x    z^T(x̄ − x)

            s.t.     Ax = b,    (µ)

                     x ≥ 0.     (h)
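To make the duality step explicit, drop the constant z^T x̄ and view (LP) as a standard-form problem; a sketch:

```latex
\min_{x}\ \{\, -z^{T}x \;:\; Ax = b,\ x \ge 0 \,\}
\qquad\text{has dual}\qquad
\max_{\mu}\ \{\, b^{T}\mu \;:\; A^{T}\mu \le -z \,\}.
```

Writing h := −z − A^T µ ≥ 0 for the dual slack, complementary slackness at the primal solution x̄ gives h^T x̄ = 0, which is exactly the system invoked in the claim below.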


    Claim: This LP has optimal value equal to zero.

    Proof: By the earlier sequence of arguments, f(x) ≜ z^T(x̄ − x) ≥ 0 for all
    x ∈ X ≜ {x : Ax = b, x ≥ 0}. But x = x̄ is feasible and f(x̄) = 0.
    Consequently, the optimal value of (LP) is zero and its solution is x̄.

    By LP duality theory, we have that there exists a pair (µ, h) satisfying

    −z − A^T µ − h = 0,  h ≥ 0,  h^T x̄ = 0.

    In effect, z = −A^T µ − h, and N_X(x̄) can be defined as

    N_X(x̄) ≜ { −A^T µ − h : h ≥ 0, h^T x̄ = 0 }.
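As a sanity check on this representation (an illustration, not from the lecture), take the special case with no equality constraints, X = R^n_+:

```latex
N_X(\bar{x})
= \{\, -h \;:\; h \ge 0,\ h^{T}\bar{x} = 0 \,\}
= \{\, z \le 0 \;:\; z_{i}\bar{x}_{i} = 0,\ i = 1,\dots,n \,\},
```

i.e., a normal-cone element can be negative only in coordinates where x̄_i sits at its bound.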

    It follows that the inclusion

    0 ∈ c − ∑_{k=1}^K p_k T_k^T π_k + N_X(x̄)


    can be represented as

    0 = c − ∑_{k=1}^K p_k T_k^T π_k − A^T µ − h,

    where h^T x̄ = 0 and h ≥ 0. Equivalently, h can be expressed as

    h = c − ∑_{k=1}^K p_k T_k^T π_k − A^T µ,


    and the resulting optimality conditions reduce to

    h ≥ 0      ⟹   c − ∑_{k=1}^K p_k T_k^T π_k − A^T µ ≥ 0,

    x̄^T h = 0  ⟹   x̄^T( c − ∑_{k=1}^K p_k T_k^T π_k − A^T µ ) = 0.


    A scenario-based deterministic equivalent

    We may also obtain these conditions for the large-scale linear programming

    formulation:

    (SLP)    minimize_{x, y_1, ..., y_K}    c^T x + ∑_{k=1}^K p_k q_k^T y_k

             subject to

                 T_k x + W_k y_k = h_k,    (p_k π_k)    k = 1, . . . , K,

                 Ax = b,                   (µ)

                 x ≥ 0,  y_k ≥ 0,  k = 1, . . . , K.

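The deterministic equivalent is just one (large) LP over the stacked variable (x, y_1, ..., y_K). A sketch of building and solving it with `scipy.optimize.linprog` on the same hypothetical toy instance used earlier (c, A, b and the scenario data are made up):

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical data: 1 first-stage variable, K = 2 equally likely scenarios.
c, A, b = np.array([1.0]), np.array([[1.0]]), np.array([1.0])
p = [0.5, 0.5]
q = [np.array([1.0]), np.array([2.0])]
W = [np.array([[1.0]]), np.array([[1.0]])]
h = [np.array([3.0]), np.array([5.0])]
T = [np.array([[1.0]]), np.array([[1.0]])]

K, n = len(p), len(c)
m_k = [len(hk) for hk in h]

# Stack variables z = (x, y_1, ..., y_K); objective weights recourse costs by p_k.
obj = np.concatenate([c] + [pk * qk for pk, qk in zip(p, q)])
rows = [np.concatenate([A, np.zeros((len(b), sum(len(qk) for qk in q)))], axis=1)]
offset = n
for k in range(K):
    left = np.zeros((m_k[k], len(obj)))
    left[:, :n] = T[k]                               # T_k x ...
    left[:, offset:offset + W[k].shape[1]] = W[k]    # ... + W_k y_k = h_k
    rows.append(left)
    offset += W[k].shape[1]
A_eq = np.vstack(rows)
b_eq = np.concatenate([b] + h)

res = linprog(c=obj, A_eq=A_eq, b_eq=b_eq,
              bounds=[(0, None)] * len(obj), method="highs")
print(res.fun)  # x = 1, y_1 = 2, y_2 = 4, value 1 + 0.5*2 + 1.0*4 = 6.0
```

The block-angular structure of A_eq (T_k columns coupling each scenario block to x) is what decomposition methods such as the L-shaped method exploit.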

    The dual of this problem is given by

    (D-SLP)    maximize_{µ, π_1, ..., π_K}    b^T µ + ∑_{k=1}^K p_k h_k^T π_k

               subject to

                   c − A^T µ − ∑_{k=1}^K p_k T_k^T π_k ≥ 0,

                   q_k − W_k^T π_k ≥ 0,    k = 1, . . . , K.


    The optimality conditions prescribed by the earlier Theorem can be stated

    in this equivalent form:

    ∑_{k=1}^K p_k T_k^T π_k + A^T µ ≤ c,

    x̄^T( c − ∑_{k=1}^K p_k T_k^T π_k − A^T µ ) = 0,

    q_k − W_k^T π_k ≥ 0,    k = 1, . . . , K,

    ȳ_k^T( q_k − W_k^T π_k ) = 0,    k = 1, . . . , K.

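These conditions can be verified numerically on the toy instance from the previous sketch (hypothetical data): solve the extensive form, read the equality multipliers, which per the annotations above come out scaled as (µ, p_1 π_1, ..., p_K π_K), and check the first-stage feasibility and complementarity conditions. This assumes scipy's HiGHS duals (`res.eqlin.marginals`) follow the objective-sensitivity sign convention:

```python
import numpy as np
from scipy.optimize import linprog

# Same hypothetical instance as in the extensive-form sketch.
c, A, b = np.array([1.0]), np.array([[1.0]]), np.array([1.0])
p = [0.5, 0.5]
q = [np.array([1.0]), np.array([2.0])]
T = [np.array([[1.0]]), np.array([[1.0]])]

# Extensive-form LP over z = (x, y_1, y_2).
obj = np.concatenate([c, p[0] * q[0], p[1] * q[1]])
A_eq = np.array([[1.0, 0.0, 0.0],   # Ax = b
                 [1.0, 1.0, 0.0],   # T_1 x + W_1 y_1 = h_1
                 [1.0, 0.0, 1.0]])  # T_2 x + W_2 y_2 = h_2
b_eq = np.array([1.0, 3.0, 5.0])
res = linprog(obj, A_eq=A_eq, b_eq=b_eq,
              bounds=[(0, None)] * 3, method="highs")

# Equality multipliers come out as (mu, p_1 pi_1, p_2 pi_2); unscale pi_k.
mu = res.eqlin.marginals[:1]
pi = [res.eqlin.marginals[1 + k:2 + k] / p[k] for k in range(2)]

x_bar = res.x[:1]
slack = c - sum(p[k] * T[k].T @ pi[k] for k in range(2)) - A.T @ mu
print(slack, x_bar @ slack)  # slack >= 0 and x_bar^T slack = 0 at optimality
```

Here the reduced cost `slack` is the vector c − ∑_k p_k T_k^T π_k − A^T µ, so nonnegativity gives condition (1) and the printed inner product gives condition (2).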

    Optimality conditions for continuous random variables

    We begin this subsection with a specification of the subdifferential of Q(x)
    when ξ is a continuous random variable.

    Proposition 2 (∂Q(x) when ξ is a continuous r.v.) Suppose that the
    expectation function Q(x) is proper and its domain has a nonempty interior.
    Then for any x₀ ∈ dom Q,

    ∂Q(x₀) = −E[T^T D(x₀, ξ)] + N_{dom Q}(x₀),

    where D(x, ξ) = arg max_{π ∈ Π(q)} π^T(h − Tx). Furthermore, Q is
    differentiable at x₀ if and only if x₀ ∈ int(dom Q) and D(x₀, ξ) is a
    singleton w.p.1.

    We are now prepared to derive the optimality conditions of (SLP) when ξ

    is a continuous random variable.


    Theorem 3 (Optimality conditions of SLP) Let x̄ be a feasible solution
    of (SLP). Suppose that the recourse function Q(•) is proper and
    int(dom Q) ∩ X is nonempty. Then x̄ ∈ X ∩ int(dom Q) is an optimal
    solution of (SLP) if and only if there exist a measurable function†
    π(ω) ∈ D(x̄, ξ(ω)) for every ω ∈ Ω and a vector µ ∈ R^m such that

    E[T^T π] + A^T µ ≤ c,    (3)

    x̄^T( c − E[T^T π] − A^T µ ) = 0.    (4)

    †Let (X, Σ) and (Y, T) be measurable spaces, meaning that X and Y are sets equipped with respective sigma-algebras Σ and T. A function f : X → Y is said to be measurable if for every E ∈ T the preimage of E under f is in Σ; i.e.,

    f⁻¹(E) := {x ∈ X : f(x) ∈ E} ∈ Σ,  ∀ E ∈ T.


    Proof: Recall that (SLP) requires the solution of

    (SLP)    minimize_x    c^T x + E[Q(x, ξ)]    (where Q(x) ≜ E[Q(x, ξ)])

             subject to    Ax = b,  x ≥ 0.

    This problem can be equivalently stated as

    min_x [ c^T x + Q(x) + 1l_X(x) ].

    The necessary and sufficient optimality conditions of this problem are given

    by

    0 ∈ ∂[ c^T x̄ + Q(x̄) + 1l_X(x̄) ].


    Since int(dom Q) ∩ X is nonempty, by the Moreau-Rockafellar theorem, we
    have that

    ∂[ c^T x̄ + Q(x̄) + 1l_X(x̄) ] = c + ∂Q(x̄) + ∂1l_X(x̄).

    Recall that ∂1l_X(x̄) = N_X(x̄). Furthermore, from Prop. 2, the optimality
    of x̄ is equivalent to the existence of a measurable function π(ω) ∈
    D(x̄, ξ(ω)) such that

    0 ∈ c + ∂Q(x̄) + N_X(x̄),  where ∂Q(x̄) = −E[T^T π] + N_{dom Q}(x̄).

    It follows that

    0 ∈ c − E[T^T π] + N_{dom Q}(x̄) + N_X(x̄).


    Note that since x̄ ∈ int(dom Q), we have that N_{dom Q}(x̄) = {0}.
    Therefore, the optimality conditions reduce to

    0 ∈ c − E[T^T π] + N_X(x̄).

    The remainder of the proof follows from employing the structure of NX(x̄).
