Introduction to Linear Programming
Jia Chen
Nanjing University
October 27, 2011

Outline: Introduction · The Simplex Method · Duality Theory · Applications



What is LP

The Linear Programming Problem

Definition

Decision variables

x_j, j = 1, 2, ..., n

Objective Function

ζ = ∑_{j=1}^{n} c_j x_j

We will primarily discuss maximization problems, without loss of generality.

Constraints

∑_{j=1}^{n} a_j x_j {≤, =, ≥} b


The Linear Programming Problem

Observation

Constraints can be easily converted from one form to another

Inequality

∑_{j=1}^{n} a_j x_j ≥ b

is equivalent to

−∑_{j=1}^{n} a_j x_j ≤ −b

Equality

∑_{j=1}^{n} a_j x_j = b

is equivalent to

∑_{j=1}^{n} a_j x_j ≥ b

∑_{j=1}^{n} a_j x_j ≤ b
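The conversion rules above are mechanical enough to sketch in code. A minimal illustration; the helper name `to_leq` is ours, not the lecture's:

```python
# Convert one linear constraint (coeffs, sense, rhs) into an equivalent
# list of "<=" constraints, following the rules on the slide.
def to_leq(coeffs, sense, rhs):
    if sense == "<=":
        return [(list(coeffs), rhs)]
    if sense == ">=":                  # negate both sides
        return [([-a for a in coeffs], -rhs)]
    if sense == "==":                  # split into >= and <=, then recurse
        return to_leq(coeffs, ">=", rhs) + to_leq(coeffs, "<=", rhs)
    raise ValueError(sense)

print(to_leq([1, 2], "==", 3))   # [([-1, -2], -3), ([1, 2], 3)]
```

An equality thus becomes a pair of less-than constraints, exactly as on the slide.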


The Standard Form

Stipulation

We prefer to pose the inequalities as less-thans and let all the decision variables be nonnegative

Standard Form of Linear Programming

Maximize ζ = c1x1 + c2x2 + ... + cnxn

Subject to a11x1 + a12x2 + ... + a1nxn ≤ b1

           a21x1 + a22x2 + ... + a2nxn ≤ b2

           ...

           am1x1 + am2x2 + ... + amnxn ≤ bm

           x1, x2, ..., xn ≥ 0


The Standard Form

Observation

The standard-form linear programming problem can be represented using matrix notation

Standard Notation

Maximize ζ = ∑_{j=1}^{n} c_j x_j

Subject to ∑_{j=1}^{n} a_{ij} x_j ≤ b_i

           x_j ≥ 0

Matrix Notation

Maximize ζ = ~cT · ~x

Subject to A~x ≤ ~b

           ~x ≥ ~0


Solution to LP

Definition

A proposal of specific values for the decision variables is called a solution

If a solution satisfies all constraints, it is called feasible

If a solution attains the desired maximum, it is called optimal

Infeasible Problem

ζ = 5x1 + 4x2

x1 + x2 ≤ 2

−2x1 − 2x2 ≤ −9

x1, x2 ≥ 0

Unbounded Problem

ζ = x1 − 4x2

−2x1 + x2 ≤ −1

−x1 − 2x2 ≤ −2

x1, x2 ≥ 0
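Feasibility of a proposed solution can be checked directly from the definition. A small sketch (helper names are ours), applied to the infeasible problem's constraints:

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

# Check a proposed point x against Ax <= b, x >= 0.
def is_feasible(A, b, x, tol=1e-9):
    return all(xj >= -tol for xj in x) and \
           all(dot(row, x) <= bi + tol for row, bi in zip(A, b))

# Constraints of the infeasible problem above:
A = [[1, 1], [-2, -2]]
b = [2, -9]
# x1 + x2 <= 2 forces -2x1 - 2x2 >= -4 > -9, so no point can satisfy both:
print(is_feasible(A, b, [0.0, 2.0]))   # False: the second constraint fails
```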

The Simplex Method

How to solve it

Question

Given a specific feasible LP problem in its standard form, how to find the optimal solution?

The Simplex Method

Start from an initial feasible solution

Iteratively modify the values of the decision variables in order to get a better solution

Every intermediate solution we get should be feasible as well

We continue this process until arriving at a solution that cannot be improved


Example

Original Problem

ζ = 5x1 + 4x2 + 3x3

2x1 + 3x2 + x3 ≤ 5

4x1 + x2 + 2x3 ≤ 11

3x1 + 4x2 + 2x3 ≤ 8

x1, x2, x3 ≥ 0

Slacked Problem

ζ = 5x1 + 4x2 + 3x3

w1 = 5 − 2x1 − 3x2 − x3

w2 = 11 − 4x1 − x2 − 2x3

w3 = 8− 3x1 − 4x2 − 2x3

x1, x2, x3, w1, w2, w3 ≥ 0
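The slack values follow directly from w = b − Ax. A quick sketch (our helper, not from the slides), using the example's data:

```python
# Compute the slack variables w = b - Ax for a standard-form LP.
def slacks(A, b, x):
    return [bi - sum(a * xj for a, xj in zip(row, x))
            for row, bi in zip(A, b)]

A = [[2, 3, 1], [4, 1, 2], [3, 4, 2]]
b = [5, 11, 8]
print(slacks(A, b, [0, 0, 0]))     # [5, 11, 8]: the initial state
print(slacks(A, b, [2.5, 0, 0]))   # [0.0, 1.0, 0.5]: after the first iteration
```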


Example(cont.)

Initial State

x1 = 0, x2 = 0, x3 = 0

w1 = 5, w2 = 11, w3 = 8

Slacked Problem

ζ = 5x1 + 4x2 + 3x3

w1 = 5 − 2x1 − 3x2 − x3

w2 = 11 − 4x1 − x2 − 2x3

w3 = 8− 3x1 − 4x2 − 2x3

x1, x2, x3, w1, w2, w3 ≥ 0

After first iteration

x1 = 5/2, x2 = 0, x3 = 0

w1 = 0, w2 = 1, w3 = 1/2


Example(cont.)

Observation

Notice that w1 = x2 = x3 = 0, which indicates that we can represent ζ, x1, w2, w3 in terms of w1, x2, x3

Slacked Problem

ζ = 5x1 + 4x2 + 3x3

w1 = 5 − 2x1 − 3x2 − x3

w2 = 11 − 4x1 − x2 − 2x3

w3 = 8− 3x1 − 4x2 − 2x3

x1, x2, x3, w1, w2, w3 ≥ 0

Rewrite the Problem

ζ = 12.5− 2.5w1 − 3.5x2 + 0.5x3

x1 = 2.5− 0.5w1 − 1.5x2 − 0.5x3

w2 = 1 + 2w1 + 5x2

w3 = 0.5 + 1.5w1 + 0.5x2 − 0.5x3

x1, x2, x3, w1, w2, w3 ≥ 0


Example(cont.)

Current State

w1 = 0, x2 = 0, x3 = 0

x1 = 5/2, w2 = 1, w3 = 1/2

Slacked Problem

ζ = 12.5− 2.5w1 − 3.5x2 + 0.5x3

x1 = 2.5− 0.5w1 − 1.5x2 − 0.5x3

w2 = 1 + 2w1 + 5x2

w3 = 0.5 + 1.5w1 + 0.5x2 − 0.5x3

x1, x2, x3, w1, w2, w3 ≥ 0

After second iteration

w1 = 0, x2 = 0, x3 = 1

x1 = 2, w2 = 1, w3 = 0


Example(cont.)

Rewrite the Problem

ζ = 13− w1 − 3x2 − w3

x1 = 2− 2w1 − 2x2 + w3

w2 = 1 + 2w1 + 5x2

x3 = 1 + 3w1 + x2 − 2w3

x1, x2, x3, w1, w2, w3 ≥ 0

Current State

w1 = 0, x2 = 0, w3 = 0

x1 = 2, w2 = 1, x3 = 1

Third iteration

This time no new entering variable can be found

This not only brings our method to a standstill, but also proves that the current solution ζ = 13 is optimal!


Slack variables

We want to solve the following standard-form LP problem:

ζ = ∑_{j=1}^{n} c_j x_j

∑_{j=1}^{n} a_{ij} x_j ≤ b_i    i = 1, 2, ..., m

x_j ≥ 0    j = 1, 2, ..., n

Our first task is to introduce slack variables:

x_{n+i} = w_i = b_i − ∑_{j=1}^{n} a_{ij} x_j    i = 1, 2, ..., m


Entering variable

Initially, let N = {1, 2, ..., n} and B = {n+1, n+2, ..., n+m}. A dictionary of the current state will look like this:

ζ = ζ̄ + ∑_{j∈N} c̄_j x_j

x_i = b̄_i − ∑_{j∈N} ā_{ij} x_j    i ∈ B

Next, pick k from {j ∈ N | c̄_j > 0}. The variable x_k is called the entering variable. Once we have chosen the entering variable, its value will be increased from zero to a positive value satisfying:

x_i = b̄_i − ā_{ik} x_k ≥ 0    i ∈ B


Leaving variable

Since we do not want any of the x_i to go negative, the increment must be

x_k = min_{i∈B, ā_{ik}>0} { b̄_i / ā_{ik} }

After this increment of x_k, there must be a leaving variable x_l whose value is decreased to zero. Move k from N to B and move l from B to N; then we get a better result and a new dictionary.

Repeat this process until no entering variable can be found.
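The loop described above can be sketched as a small dictionary-based routine. This is our own illustrative implementation, not the lecture's code; it assumes b ≥ 0 so the all-slack dictionary is feasible (Phase II only), and uses Bland's smallest-index rule for the entering variable:

```python
# Dictionary-based simplex sketch for: maximize c^T x s.t. Ax <= b, x >= 0.
def simplex(c, A, b):
    m, n = len(A), len(c)
    N, B = list(range(n)), list(range(n, n + m))
    # Dictionary: x_B[i] = bbar[i] - sum_j abar[i][j] * x_N[j]
    abar = [[float(v) for v in row] for row in A]
    bbar = [float(v) for v in b]
    cbar = [float(v) for v in c]       # zeta = z0 + sum_j cbar[j] * x_N[j]
    z0 = 0.0
    while True:
        cand = [j for j in range(n) if cbar[j] > 1e-9]
        if not cand:                   # no entering variable: dictionary optimal
            x = [0.0] * n
            for var, val in zip(B, bbar):
                if var < n:
                    x[var] = val
            return z0, x
        k = min(cand, key=lambda j: N[j])   # Bland's rule: smallest index enters
        rows = [i for i in range(m) if abar[i][k] > 1e-9]
        if not rows:
            raise ValueError("unbounded")
        r = min(rows, key=lambda i: bbar[i] / abar[i][k])  # tightest row leaves
        piv = abar[r][k]
        abar[r] = [v / piv for v in abar[r]]  # row r now defines the entering var
        abar[r][k] = 1.0 / piv
        bbar[r] /= piv
        for i in range(m):
            if i != r:
                f = abar[i][k]
                abar[i] = [v - f * w for v, w in zip(abar[i], abar[r])]
                abar[i][k] = -f * abar[r][k]
                bbar[i] -= f * bbar[r]
        g = cbar[k]                    # substitute into the objective row
        cbar = [v - g * w for v, w in zip(cbar, abar[r])]
        cbar[k] = -g * abar[r][k]
        z0 += g * bbar[r]
        N[k], B[r] = B[r], N[k]

print(simplex([5, 4, 3], [[2, 3, 1], [4, 1, 2], [3, 4, 2]], [5, 11, 8]))
# optimal value 13 at (x1, x2, x3) = (2, 0, 1), matching the worked example
```

On the running example this reproduces the sequence of dictionaries from the previous slides, stopping at ζ = 13.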


How to initialize

Question

If an initial state is not available, how could we obtain one?

Simplex Method: Phase I

We handle this difficulty by solving an auxiliary problem for which

A feasible dictionary is easy to find

The optimal dictionary provides a feasible dictionary for the original problem


How to initialize(cont.)

Observation

The original problem has a feasible solution if and only if the auxiliary problem has an optimal solution with x0 = 0

Original Problem

ζ = ∑_{j=1}^{n} c_j x_j

∑_{j=1}^{n} a_{ij} x_j ≤ b_i    i = 1, 2, ..., m

x_j ≥ 0    j = 1, 2, ..., n

Auxiliary Problem

ξ = −x0

∑_{j=1}^{n} a_{ij} x_j − x0 ≤ b_i    i = 1, 2, ..., m

x_j ≥ 0    j = 0, 1, 2, ..., n
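Constructing the auxiliary problem is purely mechanical. A minimal sketch (the helper name `auxiliary` is ours), applied to the example from the slides:

```python
# Build the Phase I auxiliary problem: add x0 with coefficient -1 in every
# constraint and use the objective xi = -x0 (x0 is placed first below).
def auxiliary(A, b):
    c_aux = [-1] + [0] * len(A[0])           # maximize -x0
    A_aux = [[-1] + list(row) for row in A]
    return c_aux, A_aux, list(b)

c_aux, A_aux, b_aux = auxiliary([[-1, 1], [-1, -2], [0, 1]], [-1, -2, 1])
print(A_aux)   # [[-1, -1, 1], [-1, -1, -2], [-1, 0, 1]]
```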


Example

Original Problem

ζ = −2x1 − x2

−x1 + x2 ≤ −1

−x1 − 2x2 ≤ −2

x2 ≤ 1

x1, x2 ≥ 0

Auxiliary Problem

ξ = −x0

−x1 + x2 − x0 ≤ −1

−x1 − 2x2 − x0 ≤ −2

x2 − x0 ≤ 1

x0, x1, x2 ≥ 0


Example(cont.)

Slacked Auxiliary Problem

ξ = −x0

w1 = −1 + x1 − x2 + x0

w2 = −2 + x1 + 2x2 + x0

w3 = 1− x2 + x0

Observation

This dictionary is infeasible, but if we let x0 enter...

After first iteration

ξ = −2 + x1 + 2x2 − w2

x0 = 2− x1 − 2x2 + w2

w1 = 1− 3x2 + w2

w3 = 3− x1 − 3x2 + w2

Observation

Now the dictionary becomes feasible!


Example(cont.)

Second iteration

Pick x2 to enter and w1 to leave

After second iteration

ξ = −4/3 + x1 − (2/3)w1 − (1/3)w2

x0 = 4/3 − x1 + (2/3)w1 + (1/3)w2

x2 = 1/3 − (1/3)w1 + (1/3)w2

w3 = 2 − x1 + w1

Third iteration

Pick x1 to enter and x0 to leave

After third iteration

ξ = 0 − x0

x1 = 4/3 − x0 + (2/3)w1 + (1/3)w2

x2 = 1/3 − (1/3)w1 + (1/3)w2

w3 = 2/3 + x0 + (1/3)w1 − (1/3)w2


Example(cont.)

Back to the original problem

We now drop x0 from the equations and reintroduce the original objective function:

Original dictionary

ζ = −2x1 − x2 = −3 − w1 − w2

x1 = 4/3 + (2/3)w1 + (1/3)w2

x2 = 1/3 − (1/3)w1 + (1/3)w2

w3 = 2/3 + (1/3)w1 − (1/3)w2

Solve this problem

This dictionary is already optimal for the original problem

But in general, we cannot expect to be this lucky every time


Special Cases

What if we fail to get an optimal solution with x0 = 0 for the auxiliary problem? The problem is infeasible.

What if we fail to find a corresponding leaving variable after another one enters? The problem is unbounded.

What if the candidate for the leaving variable is not unique? The dictionary is degenerate/stalled.


Infeasibility: Example

Original Problem

ζ = 5x1 + 4x2

x1 + x2 ≤ 2

−2x1 − 2x2 ≤ −9

x1, x2 ≥ 0

Slacked Auxiliary Problem

ξ = −x0

w1 = 2− x1 − x2 + x0

w2 = −9 + 2x1 + 2x2 + x0


Infeasibility: Example(cont.)

x0 enter, w2 leave

ξ = −9 + 2x1 + 2x2 − w2

x0 = 9 − 2x1 − 2x2 + w2

w1 = 11 − 3x1 − 3x2 + w2

x1 enter, w1 leave

ξ = −5/3 − (2/3)w1 − (1/3)w2

x0 = 5/3 + (2/3)w1 + (1/3)w2

x1 = 11/3 − x2 − (1/3)w1 + (1/3)w2

Observation

The optimal value of ξ is less than zero, so the problem is infeasible.


Unboundedness: Example

Original Problem

ζ = x1 − 4x2

−2x1 + x2 ≤ −1

−x1 − 2x2 ≤ −2

x1, x2 ≥ 0

Observation

x1 = 0.8, x2 = 0.6 is a feasible solution

After first iteration

ζ = −1.6 + 1.2w1 − 1.4w2

x1 = 0.8 + 0.4w1 + 0.2w2

x2 = 0.6 − 0.2w1 + 0.4w2


Unboundedness: Example(cont.)

Second Iteration

Pick w1 to enter and x2 toleave

After second iteration

ζ = 2 − 6x2 + w2

x1 = 2 − 2x2 + w2

w1 = 3 − 5x2 + 2w2

Third Iteration

Now we can only pick w2 to enter, but this time no variable would leave.

Thus this is an unbounded problem.


Degeneracy: Example

Original Problem

ζ = x1 + 2x2 + 3x3

x1 + 2x3 ≤ 2

x2 + 2x3 ≤ 2

x1, x2, x3 ≥ 0

Observation

It is easy to verify that x1 = x2 = x3 = 0 is a feasible solution.

Slacked Problem

ζ = x1 + 2x2 + 3x3

w1 = 2− x1 − 2x3

w2 = 2− x2 − 2x3


Degeneracy: Example(cont.)

First iteration

Pick x3 to enter and w1 toleave

After first iteration

ζ = 3− 0.5x1 + 2x2 − 1.5w1

x3 = 1− 0.5x1 − 0.5w1

w2 = x1 − x2 + w1

Second iteration

Notice that x2 cannot really increase, but it can be reclassified.

After second iteration

ζ = 3 + 1.5x1 − 2w2 + 0.5w1

x2 = x1 − w2 + w1

x3 = 1− 0.5x1 − 0.5w1


Degeneracy: Example(cont.)

Third iteration

Pick x1 to enter and x3 toleave

After third iteration

ζ = 6− 3x3 − 2w2 − w1

x1 = 2− 2x3 − w1

x2 = 2− 2x3 − w2

Observation

Now we obtain the optimal solution ζ = 6.

What typically happens

Usually one or more pivots will break away from the degeneracy

However, cycling is sometimes possible, regardless of the pivoting rule


Cycling: Example

It has been shown that if a problem has an optimal solution but by applying the simplex method we end up cycling, the problem must involve dictionaries with at least 6 variables and 3 constraints.

Cycling dictionary

ζ = 10x1 − 57x2 − 9x3 − 24x4

w1 = −0.5x1 + 5.5x2 + 2.5x3 − 9x4

w2 = −0.5x1 + 1.5x2 + 0.5x3 − x4

w3 = 1 − x1

In practice, degeneracy is very common, but cycling is rare.


Cycling and termination

Theorem

If the simplex method fails to terminate, then it must cycle.

Proof

A dictionary is completely determined by specifying B and N

There are only (n+m choose m) different possibilities

If the simplex method fails to terminate, it must visit some of these dictionaries more than once. Hence the algorithm cycles

Q.E.D.

Remark

This theorem tells us that, as bad as cycling is, nothing worse can happen.


Cycling Elimination

Question

Are there pivoting rules for which the simplex method will never cycle?

Bland’s Rule

Both the entering and the leaving variable should be selected from their respective sets by choosing the variable x_k with the smallest index k.

Theorem

The simplex method always terminates provided that we choose the entering and leaving variable according to Bland's Rule.

Detailed proof of this theorem is omitted here.
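As a sketch, the entering-variable half of Bland's Rule over a dictionary's nonbasic index set (the helper name is ours; the indices below are from the running example's dictionary after its first iteration):

```python
# Bland's rule: among nonbasic variables with a positive objective
# coefficient, pick the one with the smallest index.
def bland_entering(N, cbar):
    eligible = [idx for idx, coeff in zip(N, cbar) if coeff > 0]
    return min(eligible) if eligible else None   # None: dictionary is optimal

# Nonbasic (w1, x2, x3) with indices (3, 1, 2), coefficients (-2.5, -3.5, 0.5):
print(bland_entering([3, 1, 2], [-2.5, -3.5, 0.5]))   # 2: x3 enters
```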


Fundamental Theorem

Fundamental Theorem of Linear Programming

For an arbitrary LP, the following statements are true:

If there is no optimal solution, then the problem is either infeasible or unbounded.

If a feasible solution exists, then a basic feasible solution exists.

If an optimal solution exists, then a basic optimal solution exists.

Proof

The Phase I algorithm either proves the problem is infeasible or produces a basic feasible solution.

The Phase II algorithm either discovers the problem is unbounded or finds a basic optimal solution.


Question

Will the simplex method terminate within polynomial time?

Worst case analysis

For the non-cycling variants of the simplex method, an upper bound on the number of iterations is (n+m choose m)

The expression is maximized when m = n, and it is easy to verify that

(1/(2n)) · 2^(2n) ≤ (2n choose n) ≤ 2^(2n)

In 1972, V. Klee and G. J. Minty discovered an example which requires 2^n − 1 iterations to solve using the largest-coefficient rule

It is still an open question whether there exist pivot rules that would guarantee a polynomial number of iterations


Question

Does there exist any other algorithm that can solve linear programming? Will it run in polynomial time?

History of LP algorithms

The simplex algorithm was developed by G. Dantzig in 1947

Khachian in 1979 proposed the ellipsoid algorithm. This is the first polynomial-time algorithm for LP.

In 1984, Karmarkar's algorithm reached the worst-case bound of O(n^3.5 L), where the bit complexity of the input is O(L).

In 1991, Y. Ye proposed an O(n^3 L) algorithm

Whether there exists an algorithm whose running time depends only on m and n is still an open question.

Duality Theory

Motivation

Original Problem

Maximize

ζ = 4x1 + x2 + 3x3

x1 + 4x2 ≤ 1

3x1 − x2 + x3 ≤ 3

x1, x2, x3 ≥ 0

Observation

Every feasible solution provides a lower bound on the optimal objective function value ζ*

Example

The solution (x1, x2, x3) = (0, 0, 3) tells us ζ* ≥ 9


Motivation(cont.)

Question

How to give an upper bound on ζ∗?

Original Problem

Maximize

ζ = 4x1 + x2 + 3x3

x1 + 4x2 ≤ 1

3x1 − x2 + x3 ≤ 3

x1, x2, x3 ≥ 0

An upper bound

Multiply the first constraint by 2 and add that to 3 times the second constraint:

11x1 + 5x2 + 3x3 ≤ 11

which indicates that ζ∗ ≤ 11


Motivation(cont.)

Question

Can we find a tighter upper bound?

Better upper bound

Multiply the first constraint by y1 and add that to y2 times thesecond constraint:

(y1 + 3y2)x1 + (4y1 − y2)x2 + (y2)x3 ≤ y1 + 3y2

The coefficients on the left side must be no less than the corresponding ones in the objective function

In order to obtain the best possible upper bound, we should minimize y1 + 3y2
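The bounding argument can be checked numerically. A small sketch (the helper `upper_bound` is ours) that verifies the multipliers dominate the objective and returns the bound:

```python
# Combine the constraints with multipliers y >= 0; if every combined
# coefficient dominates the objective's, then y^T b bounds zeta* from above.
def upper_bound(c, A, b, y):
    combined = [sum(yi * row[j] for yi, row in zip(y, A)) for j in range(len(c))]
    assert all(s >= cj for s, cj in zip(combined, c)), "y does not dominate c"
    return sum(yi * bi for yi, bi in zip(y, b))

c = [4, 1, 3]
A = [[1, 4, 0], [3, -1, 1]]
b = [1, 3]
print(upper_bound(c, A, b, [2, 3]))   # 11: the bound derived on this slide
```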


Motivation(cont.)

Observation

The new problem is the dual problem associated with the primal one.

Primal Problem

Maximize

ζ = 4x1 + x2 + 3x3

x1 + 4x2 ≤ 1

3x1 − x2 + x3 ≤ 3

x1, x2, x3 ≥ 0

Dual Problem

Minimize

ξ = y1 + 3y2

y1 + 3y2 ≥ 4

4y1 − y2 ≥ 1

y2 ≥ 3

y1, y2 ≥ 0


The Dual Problem

Primal Problem

Maximize

ζ = ∑_{j=1}^{n} c_j x_j

∑_{j=1}^{n} a_{ij} x_j ≤ b_i    i = 1, 2, ..., m

x_j ≥ 0    j = 1, 2, ..., n

Dual Problem

Minimize

ξ = ∑_{i=1}^{m} b_i y_i

∑_{i=1}^{m} y_i a_{ij} ≥ c_j    j = 1, 2, ..., n

y_i ≥ 0    i = 1, 2, ..., m
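Forming the dual data is a transpose plus a swap of b and c. A minimal sketch (our helper, not the lecture's), applied to the motivating example:

```python
# Dual of (max c^T x, Ax <= b, x >= 0) is (min b^T y, A^T y >= c, y >= 0).
def dual(c, A, b):
    At = [[A[i][j] for i in range(len(A))] for j in range(len(A[0]))]
    return list(b), At, list(c)   # dual objective, constraint matrix, rhs

obj, At, rhs = dual([4, 1, 3], [[1, 4, 0], [3, -1, 1]], [1, 3])
print(obj, At, rhs)   # [1, 3] [[1, 3], [4, -1], [0, 1]] [4, 1, 3]
```

The rows of the transposed matrix give exactly the dual constraints y1 + 3y2 ≥ 4, 4y1 − y2 ≥ 1, y2 ≥ 3.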


The Dual Problem(Matrix Notation)

Primal Problem

Maximize

ζ = ~cT~x

A~x ≤ ~b

~x ≥ ~0

Dual Problem

Minimize

ξ = ~bT~y

AT~y ≥ ~c

~y ≥ ~0

Dual Problem

-Maximize

−ξ = −~bT~y

−AT~y ≤ −~c

~y ≥ ~0


Weak Duality Theorem

The Weak Duality Theorem

If ~x is feasible for the primal and ~y is feasible for the dual, then

ζ = ~cT~x ≤ ~bT~y = ξ

Proof

∵ A~x ≤ ~b, ~cT ≤ (AT~y)T = ~yTA

∴ ζ = ~cT~x ≤ (~yTA)~x = ~yT(A~x) ≤ ~yT~b = ~bT~y = ξ

Q.E.D
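Weak duality is easy to verify numerically on the running example. A quick check (our code, not the lecture's):

```python
# Any primal-feasible x scores no more than any dual-feasible y's bound.
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

c, b = [4, 1, 3], [1, 3]
A = [[1, 4, 0], [3, -1, 1]]
x = [0, 0, 3]    # primal feasible: gives the lower bound zeta* >= 9
y = [1, 3]       # dual feasible:   gives the upper bound zeta* <= 10
assert dot(c, x) <= dot(b, y)
print(dot(c, x), dot(b, y))   # 9 10
```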


Strong Duality Theorem

The Strong Duality Theorem

If ~x∗ is optimal for the primal and ~y∗ is optimal for the dual, then

ζ∗ = ~cT~x∗ = ~bT~y∗ = ξ∗

Proof

It suffices to exhibit a dual feasible solution ~y satisfying the above equation. Suppose we apply the simplex method. The final dictionary will be

ζ = ζ* + ∑_{j∈N} c̄_j x_j


Strong Duality Theorem(cont.)

Proof(cont.)

Let's use ~c* for the objective coefficients corresponding to the original variables, and ~d* for the objective coefficients corresponding to the slack variables. The above equation can be written as

ζ = ζ* + ~c*T~x + ~d*T~w

Now let ~y = −~d*. We shall show that ~y is feasible for the dual problem and satisfies the equation.


Strong Duality Theorem(cont.)

Proof(cont.)

We write the objective function in two ways.

ζ = ~cT~x = ζ∗ + ~c∗T~x+ ~d∗T ~w

= ζ∗ + ~c∗T~x+ (−~yT)(~b−A~x)

= ζ∗ − ~yT~b+ (~c∗T + ~yTA)~x

Equate the corresponding terms on both sides:

ζ∗ − ~yT~b = 0 (1)

~cT = ~c∗T + ~yTA (2)


Strong Duality Theorem(cont.)

Proof(cont.)

Equation (1) simply shows that

~cT~x∗ = ζ∗ = ~yT~b = ~bT~y

Since the coefficients in the optimal dictionary are all non-positive, we have ~c* ≤ ~0 and ~d* ≤ ~0. Therefore, from equation (2) we know that

AT~y ≥ ~c

~y ≥ ~0

Q.E.D.


Example

Observation

As the simplex method solves the primal problem, it also implicitly solves the dual problem.

Primal Problem

Maximize

ζ = 4x1 + x2 + 3x3

x1 + 4x2 ≤ 1

3x1 − x2 + x3 ≤ 3

x1, x2, x3 ≥ 0

Dual Problem

Minimize

ξ = y1 + 3y2

y1 + 3y2 ≥ 4

4y1 − y2 ≥ 1

y2 ≥ 3

y1, y2 ≥ 0


Example(cont.)

Observation

The dual dictionary is the negative transpose of the primal dictionary.

Primal Dictionary

ζ = 4x1 + x2 + 3x3

w1 = 1− x1 − 4x2

w2 = 3− 3x1 + x2 − x3

Dual Dictionary

−ξ = −y1 − 3y2

z1 = −4 + y1 + 3y2

z2 = −1 + 4y1 − y2

z3 = −3 + y2


Example(cont.)

First Iteration

Pick x3(y2) to enter and w2(z3) to leave.

Primal Dictionary

ζ = 9− 5x1 + 4x2 − 3w2

w1 = 1− x1 − 4x2

x3 = 3− 3x1 + x2 − w2

Dual Dictionary

−ξ = −9− y1 − 3z3

z1 = 5 + y1 + 3z3

z2 = −4 + 4y1 − z3

y2 = 3 + z3


Example(cont.)

Second Iteration

Pick x2(y1) to enter and w1(z2) to leave. Done.

Primal Dictionary

ζ = 10− 6x1 − w1 − 3w2

x2 = 0.25 − 0.25x1 − 0.25w1

x3 = 3.25 − 3.25x1 − 0.25w1 − w2

Dual Dictionary

−ξ = −10− 0.25z2 − 3.25z3

z1 = 6 + 0.25z2 + 3.25z3

y1 = 1 + 0.25z2 + 0.25z3

y2 = 3 + z3


Complementary Slackness

Complementary Slackness Theorem

Suppose ~x is primal feasible and ~y is dual feasible. Let ~w denote the primal slack variables, and let ~z denote the dual slack variables. Then ~x and ~y are optimal if and only if

xjzj = 0, for j = 1, 2, ..., n

wiyi = 0, for i = 1, 2, ...,m

Remark

Primal complementary slackness conditions: ∀j ∈ {1, 2, ..., n}, either x_j = 0 or ∑_{i=1}^{m} a_{ij} y_i = c_j

Dual complementary slackness conditions: ∀i ∈ {1, 2, ..., m}, either y_i = 0 or ∑_{j=1}^{n} a_{ij} x_j = b_i
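These conditions can be verified numerically for the example's optimal pair x = (0, 0.25, 3.25), y = (1, 3). Our check code, not the lecture's:

```python
# Verify complementary slackness for the example's optimal primal/dual pair.
c, b = [4, 1, 3], [1, 3]
A = [[1, 4, 0], [3, -1, 1]]
x, y = [0, 0.25, 3.25], [1, 3]

# primal slacks w = b - Ax and dual slacks z = A^T y - c
w = [bi - sum(aij * xj for aij, xj in zip(row, x)) for row, bi in zip(A, b)]
z = [sum(A[i][j] * y[i] for i in range(2)) - c[j] for j in range(3)]

assert all(abs(xj * zj) < 1e-9 for xj, zj in zip(x, z))   # x_j z_j = 0
assert all(abs(wi * yi) < 1e-9 for wi, yi in zip(w, y))   # w_i y_i = 0
print(w, z)   # [0.0, 0.0] [6, 0, 0]
```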


Complementary Slackness(cont.)

Proof

We begin by revisiting the inequality used to prove the weak duality theorem:

ζ = ∑_{j=1}^{n} c_j x_j = ~cT~x ≤ (~yTA)~x = ∑_{j=1}^{n} ( ∑_{i=1}^{m} y_i a_{ij} ) x_j

This inequality will become an equality if and only if for every j = 1, 2, ..., n either x_j = 0 or c_j = ∑_{i=1}^{m} y_i a_{ij}, which is exactly the primal complementary slackness condition. The same reasoning holds for the dual complementary slackness conditions.

Q.E.D.


Special Cases

Question

What can we say about the dual problem when the primal problem is infeasible/unbounded?

Possibilities

The primal has an optimal solution if and only if the dual also has one

The primal is infeasible if the dual is unbounded

The dual is infeasible if the primal is unbounded

However...

It turns out that there is a fourth possibility: both the primal and the dual are infeasible


Fourth Possibility: Example

Primal Problem

Maximize

ζ = 2x1 − x2

x1 − x2 ≤ 1

−x1 + x2 ≤ −2

x1, x2 ≥ 0

Dual Problem

Minimize

ξ = y1 − 2y2

y1 − y2 ≥ 2

−y1 + y2 ≥ −1

y1, y2 ≥ 0

Applications

Flow networks

Flow Network

A flow network is a directed graph G = (V, E) with two distinguished vertices: a source s and a sink t. Each edge (u, v) ∈ E has a nonnegative capacity c(u, v). If (u, v) ∉ E, then c(u, v) = 0.

Positive Flow

A positive flow on G is a function f : V × V → R satisfying:

Capacity constraint: ∀u, v ∈ V, 0 ≤ f(u, v) ≤ c(u, v)

Flow conservation: ∀u ∈ V − {s, t}, ∑_{v∈V} f(u, v) − ∑_{v∈V} f(v, u) = 0
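Both conditions can be checked mechanically. A small sketch (the helper `is_flow` is ours) on a toy network:

```python
# Check the capacity and conservation conditions of a positive flow.
def is_flow(cap, f, s, t):
    nodes = {u for u, v in cap} | {v for u, v in cap}
    ok_cap = all(0 <= f.get(e, 0) <= c for e, c in cap.items())
    ok_cons = all(
        sum(f.get((u, v), 0) for v in nodes) == sum(f.get((v, u), 0) for v in nodes)
        for u in nodes - {s, t})
    return ok_cap and ok_cons

cap = {("s", "a"): 3, ("a", "t"): 2}
print(is_flow(cap, {("s", "a"): 2, ("a", "t"): 2}, "s", "t"))  # True
print(is_flow(cap, {("s", "a"): 3, ("a", "t"): 2}, "s", "t"))  # False: conservation fails at a
```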


Flow networks(cont.)

Example

(figure: an example flow network)

Value

The value of the flow is the net flow out of the source:

∑_{v∈V} f(s, v) − ∑_{v∈V} f(v, s)


Flow networks(cont.)

The maximum flow problem

Given a flow network G, find a flow of maximum value on G.

LP of MaxFlow

Maximize ζ = ~es^T ~x

A~x = ~0

~x ≤ ~c

~x ≥ ~0

Remarks

A is an |E| × |V| matrix containing only 0, 1, and −1, where A((u, v), u) = −1 and A((u, v), v) = 1. In particular, we let A((s, v), ∗) = A((v, t), ∗) = 0

~es is a 0-1 vector where e_(s,v) = 1 if (s, v) ∈ E


Flow networks(cont.)

Example

Explanation

~x = (f(s,a), f(s,c), f(a,b), f(b,t), f(c,b), f(c,t))^T

A′ =
⎡ −1  −1   0   0   0   0 ⎤
⎢  1   0  −1   0   0   0 ⎥
⎢  0   0   1  −1   1   0 ⎥
⎢  0   1   0   0  −1  −1 ⎥
⎣  0   0   0   1   0   1 ⎦

~es = (1, 1, 0, 0, 0, 0)^T

~c = (3, 2, 1, 3, 4, 2)^T


Flow and Cut

Cut and Capacity

A cut S is a set of nodes that contains the source node but does not contain the sink node. The capacity of a cut is defined as ∑_{u∈S, v∉S} c(u, v).

The minimum cut Problem

Given a flow network G, find a cut of minimum capacity on G.

The max-flow min-cut theorem

In a flow network, the maximum value of a flow equals the minimum capacity of a cut.
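The theorem can be checked computationally on the example network shown earlier. This is our own sketch, not the lecture's code: a max flow via Edmonds-Karp (BFS-based augmenting paths) against a brute-force minimum cut:

```python
from collections import deque
from itertools import combinations

def max_flow(cap, s, t):
    nodes = {u for u, v in cap} | {v for u, v in cap}
    res = {(u, v): 0 for u in nodes for v in nodes}   # residual capacities
    for e, c in cap.items():
        res[e] = c
    total = 0
    while True:
        parent, q = {s: None}, deque([s])
        while q and t not in parent:                  # BFS for an augmenting path
            u = q.popleft()
            for v in nodes:
                if v not in parent and res[(u, v)] > 0:
                    parent[v] = u
                    q.append(v)
        if t not in parent:
            return total
        path, v = [], t
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        aug = min(res[e] for e in path)
        for u, v in path:                             # push flow, update residuals
            res[(u, v)] -= aug
            res[(v, u)] += aug
        total += aug

def min_cut(cap, s, t):
    inner = ({u for u, v in cap} | {v for u, v in cap}) - {s, t}
    best = float("inf")
    for r in range(len(inner) + 1):                   # every cut S with s, without t
        for extra in combinations(sorted(inner), r):
            S = {s} | set(extra)
            best = min(best, sum(c for (u, v), c in cap.items()
                                 if u in S and v not in S))
    return best

cap = {("s", "a"): 3, ("s", "c"): 2, ("a", "b"): 1,
       ("b", "t"): 3, ("c", "b"): 4, ("c", "t"): 2}
print(max_flow(cap, "s", "t"), min_cut(cap, "s", "t"))   # 3 3
```

On this network the minimum cut is S = {s, a} with capacity c(s, c) + c(a, b) = 3, and the maximum flow value agrees.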


Proof of the theorem

First, let’s find the dual of the max-flow problem.

LP of MaxFlow

Maximize ζ = ~es^T ~x

A~x = ~0

~x ≤ ~c

~x ≥ ~0

Rewrite the LP

Maximize ζ = ~es^T ~x

[A; −A; I] ~x ≤ [~0; ~0; ~c]

~x ≥ ~0

Dual Problem

Minimize ξ = [~0; ~0; ~c]^T [~y1; ~y2; ~z]

[A; −A; I]^T [~y1; ~y2; ~z] ≥ ~es

~y1, ~y2, ~z ≥ ~0


Proof of the theorem(cont.)

Next, let’s rewrite the dual of the max-flow problem.

Dual Problem

Minimize ξ = ~cT~z

AT(~y1 − ~y2) + ~z ≥ ~es

~y1, ~y2, ~z ≥ ~0

If we let ~y = ~y1 − ~y2, then ~y becomes an unconstrained variable.

Rewrite the constraints

Minimize ξ = ~cT~z

y_v − y_u + z_(u,v) ≥ 0    if u ≠ s, v ≠ t

y_v + z_(u,v) ≥ 1    if u = s, v ≠ t

−y_u + z_(u,v) ≥ 0    if u ≠ s, v = t

z_(u,v) ≥ 1    if u = s, v = t

~z ≥ ~0

~y is free


Proof of the theorem(cont.)

Fix ys = 1 and yt = 0, and all the above inequalities can be written in the same form:

Dual Problem of MaxFlow

Minimize ξ = ~cT~z

ys = 1

yt = 0

y_v − y_u + z_(u,v) ≥ 0

~z ≥ ~0

~y is free


Proof of the theorem(cont.)

We will show that the optimal value of the dual problem (denoted OPT) equals the optimal value of the minimum cut problem (denoted MIN-CUT).

OPT ≤ MIN-CUT

Given a minimum cut of the network, let y_i = 1 if vertex i ∈ S and y_i = 0 otherwise. Let z_(u,v) = 1 if the edge (u, v) crosses the cut. It is easy to check that the constraint y_v − y_u + z_(u,v) ≥ 0 is always satisfied, and the value of the objective function equals the capacity of the cut. Therefore, MIN-CUT is attained by a feasible solution to the LP dual.


Proof of the theorem(cont.)

OPT ≥ MIN-CUT

Consider the optimal solution (~y*, ~z*). Pick p ∈ (0, 1] uniformly at random and let S = {v ∈ V | y*_v ≥ p}. Note that S is a valid cut. Now, for any edge (u, v),

Pr[(u, v) in the cut] = Pr[y*_v < p ≤ y*_u] ≤ z*_(u,v)

∴ E[capacity of S] = ∑ c_(u,v) Pr[(u, v) in the cut] ≤ ∑ c_(u,v) z*_(u,v) = ~cT~z* = ξ*

Hence there must be a cut of capacity less than or equal to OPT. The claim follows.


Game Theory: Basic Model

One of the most elegant applications of LP lies in the field of Game Theory.

Strategic Game

A strategic game consists of

A finite set N (the set of players)

For each player i ∈ N a nonempty set Ai (the set of actions/strategies available to player i)

For each player i ∈ N a preference relation ≿i on A = ×j∈N Aj

The preference relation ≿i of player i can be represented by a utility function ui : A → R (also called a payoff function), in the sense that ui(a) ≥ ui(b) whenever a ≿i b


Game Theory: Basic Model(example)

A finite strategic game of two players can be described conveniently in a table like this:

Prisoner’s Dilemma


Game Theory: Zero-Sum Game

If in a game the gain of one player is offset by the loss of another player, such a game is often called a zero-sum game

Rock-Paper-Scissors


Game Theory: Matrix Game

A finite two-person zero-sum game is also called a matrix game,for it can be represented solely by one matrix.

Rock-Paper-Scissors(matrix notation)

The utility matrix for the row player is:

U =
[ 0  1 −1]
[−1  0  1]
[ 1 −1  0]

The utility matrix for the column player is simply the negation of U:

V = −U =
[ 0 −1  1]
[ 1  0 −1]
[−1  1  0]


Game Theory: Mixed Strategy

A mixed strategy ~x is a randomization over one player's pure strategies, satisfying ~x ≥ ~0 and ~e^T ~x = 1. The expected utility is the expectation of the utility over all possible outcomes.

Expected utility

Let ~x and ~y denote the mixed strategies of the row player and the column player, respectively. Then the expected utility to the row player is ~x^T U ~y, and the expected utility to the column player is −~x^T U ~y

Rock-Paper-Scissors(mixed strategy)

If both players choose the mixed strategy (1/3, 1/3, 1/3), their expected utility will be 0, which conforms to our intuition.
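That intuition is easy to verify directly; a short computation of ~x^T U ~y for the uniform mixes:

```python
# expected utility x^T U y in Rock-Paper-Scissors when both players
# mix uniformly; the payoffs cancel out to 0 (a fair game).
U = [[0, 1, -1],
     [-1, 0, 1],
     [1, -1, 0]]
x = y = [1/3, 1/3, 1/3]
expected = sum(x[i] * U[i][j] * y[j] for i in range(3) for j in range(3))
print(expected)  # 0.0
```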


Game Theory: Motivation

Question

Suppose the payoffs in the Rock-Paper-Scissors game are altered:

U =
[ 0  1 −2]
[−3  0  4]
[ 5 −6  0]

Who has the edge in this game?

What is the best strategy for each player?

How much can each player expect to win in one round?


Optimal pure strategy

First let us consider the case of pure strategy.

If the row player selects row i, her payoff is at least minj uij. To maximize her profit she would select the row i that makes minj uij as large as possible.

If the column player selects column j, her loss is at most maxi uij. To minimize her loss she would select the column j that makes maxi uij as small as possible.

If ∃ i, j such that maxi minj uij = minj maxi uij = v, the matrix game is said to have a saddlepoint, and (i, j) constitutes a Nash equilibrium.


Example

Here is a matrix game that contains a saddlepoint:

U =
[−1  2  5]
[ 1  8  4]
[ 0  6  3]

However, it is very common for a matrix game to have no saddlepoint at all:

U =
[ 0  1 −2]
[−3  0  4]
[ 5 −6  0]


Optimal mixed strategy

Theorem

Given the (mixed) strategy of the row player, if ~y is the column player's best response, then every pure strategy i with yi > 0 must yield the same expected utility.

Proof

Suppose this were not true. Then

There must be at least one i with yi > 0 that yields a lower expected utility than the others.

If I drop this strategy i from my mix (renormalizing the rest), my expected utility will increase.

But then the original mixed strategy cannot be a best response. Contradiction.

Q.E.D.
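The argument can be illustrated numerically with a made-up opponent mix: compute each pure strategy's expected utility against the mix, and observe that putting weight on an inferior pure strategy drags the mixed payoff below the best response.

```python
# Indifference illustrated numerically: against a fixed (made-up)
# opponent mix y, a best response concentrates on the best pure
# strategies; weight on an inferior row strictly lowers the payoff.
U = [[0, 1, -1],
     [-1, 0, 1],
     [1, -1, 0]]
y = [0.5, 0.25, 0.25]  # hypothetical column player's mix

# expected utility of each pure row strategy against y
pure = [sum(U[i][j] * y[j] for j in range(3)) for i in range(3)]
best = max(pure)

# a mix that wastes weight on inferior rows does strictly worse
x = [0.2, 0.2, 0.6]
mixed = sum(x[i] * pure[i] for i in range(3))
print(pure, best, mixed)  # mixed < best
```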


Optimal mixed strategy(cont.)

Observation

For pure strategies, the row player should maxi-minimize herutility uij

For mixed strategies, the row player should maxi-minimize her expected utility ~x^T U ~y instead.

Maximinimization Problem

max_~x min_~y ~x^T U ~y

~e^T ~x = 1

~x ≥ ~0

Shift to pure strategy

max_~x min_i ~x^T U ~e_i

~e^T ~x = 1

~x ≥ ~0
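The shift is justified because the inner minimum over mixed ~y is always attained at some pure strategy (the payoff is linear in ~y). A quick numeric check with an arbitrary ~x of our choosing:

```python
import random

# the inner minimization over mixed strategies is attained at a pure
# strategy: random mixed y never beat the best pure column (made-up x).
U = [[0, 1, -2],
     [-3, 0, 4],
     [5, -6, 0]]
x = [0.2, 0.5, 0.3]  # an arbitrary fixed row mix

def payoff(y):
    return sum(x[i] * U[i][j] * y[j] for i in range(3) for j in range(3))

pure_min = min(payoff([1.0 if j == k else 0.0 for j in range(3)])
               for k in range(3))

random.seed(0)
for _ in range(1000):
    w = [random.random() for _ in range(3)]
    total = sum(w)
    assert payoff([wi / total for wi in w]) >= pure_min - 1e-9
print(pure_min)  # approx -1.6, attained at the second pure column
```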


Optimal mixed strategy(cont.)

Observation

Let v = min_i ~x^T U ~e_i. Then

∀ i = 1, 2, ..., n: v ≤ ~x^T U ~e_i

Maximinimization Problem

max_~x v

v ≤ ~x^T U ~e_i, i = 1, 2, ..., n

~e^T ~x = 1

~x ≥ ~0

Matrix Notation

Maximize ζ = v

v~e ≤ U^T ~x

~e^T ~x = 1

~x ≥ ~0
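For a 2 × 2 game this LP is small enough to brute-force on a grid; the sketch below uses Matching Pennies (our own example, not from the lecture), whose maximin value is 0 at ~x = (1/2, 1/2):

```python
# grid search for max_x min_j (x^T U)_j in a 2x2 game.
U = [[1, -1],
     [-1, 1]]  # Matching Pennies: value 0 at x = (1/2, 1/2)
best_v, best_x = float("-inf"), None
for k in range(1001):
    x = [k / 1000, 1 - k / 1000]
    v = min(sum(x[i] * U[i][j] for i in range(2)) for j in range(2))
    if v > best_v:
        best_v, best_x = v, x
print(best_x, best_v)  # [0.5, 0.5] 0.0
```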


Optimal mixed strategy(cont.)

Observation

By symmetry, the column player seeks a mixed strategy thatmini-maximize her expected loss.

Minimaximization Problem

min_~y max_~x ~x^T U ~y

~e^T ~y = 1

~y ≥ ~0

Matrix Notation

Minimize ξ = u

u~e ≥ U~y

~e^T ~y = 1

~y ≥ ~0


Optimal mixed strategy(cont.)

Observation

The mini-maximization problem is the dual of the maxi-minimization problem.

Primal Problem

Maximize ζ = v

v~e ≤ U^T ~x

~e^T ~x = 1

~x ≥ ~0

Dual Problem

Minimize ξ = u

u~e ≥ U~y

~e^T ~y = 1

~y ≥ ~0


Optimal mixed strategy(cont.)

Observation

The mini-maximization problem is the dual of the maxi-minimization problem.

Primal Problem

Maximize ζ = [~0^T 1] · [~x; v]

[−U^T ~e; ~e^T 0] · [~x; v] (≤; =) [~0; 1]

~x ≥ ~0, v is free

Dual Problem

Minimize ξ = [~0^T 1] · [~y; u]

[−U ~e; ~e^T 0] · [~y; u] (≥; =) [~0; 1]

~y ≥ ~0, u is free


Minmax Theorem

von Neumann’s Min-max Theorem

max_~x min_~y ~x^T U ~y = min_~y max_~x ~x^T U ~y

Proof

This theorem is a direct consequence of the Strong Duality Theorem and our previous analysis.

Q.E.D.

Remarks

The common optimal value v∗ = u∗ is called the value of the game

A game whose value is 0 is called a fair game


Minmax Theorem: Example

We now solve the modified Rock-Paper-Scissors game.

Problem

Maximize ζ = v

[ 0  3 −5  1]   [x1]   [0]
[−1  0  6  1] · [x2]   [0]
[ 2 −4  0  1]   [x3]   [0]
[ 1  1  1  0]   [v ]   [1]

(the first three rows are ≤ constraints, the last an equality)

x1, x2, x3 ≥ 0, v is free

Solution

~x∗ = (62/102, 27/102, 13/102)^T

ζ∗ = −8/51 ≈ −0.15686, so the column player has the edge.
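The claimed mix can be verified exactly with rational arithmetic; the column mix ~y∗ below is our own computation, not from the slides. Both mixes equalize the row player's payoff at the game value −8/51 ≈ −0.157:

```python
# Exact verification with rational arithmetic; the column mix y* is
# computed here and is not part of the lecture slides.
from fractions import Fraction as F

U = [[0, 1, -2],
     [-3, 0, 4],
     [5, -6, 0]]
x = [F(62, 102), F(27, 102), F(13, 102)]  # row player's optimal mix
y = [F(40, 102), F(36, 102), F(26, 102)]  # column player's optimal mix

# row mix vs. every pure column, and every pure row vs. column mix
row_vs_cols = [sum(x[i] * U[i][j] for i in range(3)) for j in range(3)]
rows_vs_col = [sum(U[i][j] * y[j] for j in range(3)) for i in range(3)]
print(row_vs_cols, rows_vs_col)  # every entry equals the value -8/51
```

Since the row player cannot do better than −8/51 against ~y∗ and cannot do worse than −8/51 with ~x∗, the value of the game is −8/51.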


Poker Face

Poker

Tricks in card game

Bluff: increase the bid in an attempt to coerce the opponent even though loss is inevitable if the challenge is accepted

Underbid: refuse to bid in an attempt to give the opponent false hope even though bidding is a more profitable choice

Question

Are bluffing and underbidding both justified bidding strategies?


Poker: Model

Our simplified model involves two players, A and B, and a deck having three cards: 1, 2, and 3.

Betting scenarios

A passes, B passes: $1 to holder of higher card

A passes, B bets, A passes: $1 to B

A passes, B bets, A bets: $2 to holder of higher card

A bets, B passes: $1 to A

A bets, B bets: $2 to holder of higher card


Poker: Model(cont.)

Observation

Player A can bet along one of three lines, while player B can bet along one of four lines.

Betting lines for A

First pass. If B bets, pass again.

First pass. If B bets, bet.

Bet.

Betting lines for B

Pass no matter what

If A passes, pass; if A bets, bet

If A passes, bet; if A bets, pass

Bet no matter what


Strategy

Pure Strategies

Each player's pure strategies can be denoted by triples (y1, y2, y3), where yi is the line of betting that the player will use when holding card i.

The calculation of expected payment must be carried out for every combination of pairs of strategies

Player A has 3 × 3 × 3 = 27 pure strategies

Player B has 4 × 4 × 4 = 64 pure strategies

There are altogether 27 × 64 = 1728 pairs. This number is too large!


Strategy(cont.)

Observation 1

When holding a 1

Player A should refrain from betting along line 2

Player B should refrain from betting along lines 2 and 4

Observation 2

When holding a 3

Player A should refrain from betting along line 1

Player B should refrain from betting along lines 1, 2, and 3


Strategy(cont.)

Observation 3

When holding a 2

Player A should refrain from betting along line 3

Player B should refrain from betting along lines 3 and 4

Now player A has only 2 × 2 × 2 = 8 pure strategies and player B has only 2 × 2 × 1 = 4 pure strategies. We are then able to compute the payoff matrix.


Payoff

Payoff Matrix

Solution

~x∗ = (1/2, 0, 0, 1/3, 0, 0, 0, 1/6)^T


Explanation

Player A’s optimal strategy is as follows:

When holding 1, mix line 1 and 3 in 5:1 proportion

When holding 2, mix line 1 and 2 in 1:1 proportion

When holding 3, mix line 2 and 3 in 1:1 proportion

Note that it is sometimes optimal for player A to use line 3 when holding a 1, and this bet is certainly a bluff. It is also sometimes optimal to use line 2 when holding a 3, and this bet is certainly an underbid. So both tricks are justified bidding strategies.
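Assuming a lexicographic enumeration of player A's eight reduced strategies (our guess at the slide's ordering), the mix (1/2, 0, 0, 1/3, 0, 0, 0, 1/6) reproduces exactly these per-card proportions:

```python
# Recovering the per-card betting proportions from the optimal mix.
# The enumeration order of A's reduced strategies is our assumption.
from fractions import Fraction as F
from itertools import product

# card 1 uses line 1 or 3, card 2 uses line 1 or 2, card 3 uses
# line 2 or 3 (Observations 1-3), enumerated lexicographically
strategies = list(product([1, 3], [1, 2], [2, 3]))
weights = [F(1, 2), 0, 0, F(1, 3), 0, 0, 0, F(1, 6)]

marginals = []
for card in range(3):
    m = {}
    for s, w in zip(strategies, weights):
        m[s[card]] = m.get(s[card], 0) + w
    marginals.append(m)
print(marginals)  # card 1: 5/6 vs 1/6; cards 2 and 3: 1/2 vs 1/2
```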
