
Characterizing Coherence, Correcting Incoherence

I WANT YOU to crank out COHERENCE CHARACTERIZATIONS

1. Context
Basic setup:
• a finite possibility space Ω,
• a finite set of gambles 𝒦 on Ω,
• lower previsions P on 𝒦.
Matrix notation:
• the |Ω|-by-|𝒦| matrix K with the gambles as columns,
• the rows of K (columns of Kᵀ) are the degenerate previsions,
• the set 𝒮 of matrices S obtained from the identity matrix I by changing at most one 1 to −1,
• the all-one (all-zero) column vector 1 (0).

2. Goals
Given K, find a non-redundant H-representation for the set of all P
A. that avoid sure loss (⟨Λ_A α_A⟩),
B. that avoid sure loss and for which P ≥ min (⟨Λ_B α_B⟩),
C. that are coherent (⟨Λ_C α_C⟩).

7. Experiments
The sparsity σ is the fraction of zero components in K. Procedure C1 is exponential in 1 − σ and ∼linear in |Ω|:

[Figure: runtime [s] (log scale) against sparsity σ ∈ [0, 1], one curve per |Ω| = 4, 8, 16, …, 4096, 8192, with |𝒦| = 5.]

. . . and (at least) exponential in |𝒦|:

[Figure: runtime [s] (log scale) against sparsity σ, one curve per |𝒦| ∈ {3, 4, 6, 8, 9, 12}, with |Ω| = 6.]

3. Goal A: Characterizing ASL
Based on the existence of a dominating linear prevision:

A1. $\exists\,\mu_I,\nu_I \ge 0:\; P = K^\top\mu_I - I\nu_I \;\wedge\; \mathbf{1}^\top\mu_I = 1$
$$\begin{bmatrix} K^\top & -I \\ \mathbf{1}^\top & \mathbf{0}^\top \end{bmatrix} \;\xrightarrow{\;\text{EN, RR}\;}\; \langle\Lambda_A\;\alpha_A\rangle$$
(the top row is the vector matrix of a V-representation, with points from $K^\top$ and rays from $-I$; the bottom row is its point/ray vector, so facet enumeration applies)

A2. $\exists\,\mu_I \ge 0:\; P \le K^\top\mu_I \;\wedge\; \mathbf{1}^\top\mu_I = 1$
$$\begin{bmatrix} I & -K^\top & 0 \\ 0 & -I & 0 \\ \mathbf{0}^\top & \mathbf{1}^\top & 1 \\ \mathbf{0}^\top & -\mathbf{1}^\top & -1 \end{bmatrix} \;\xrightarrow{\;\text{PJ}_P,\ \text{RR}\;}\; \langle\Lambda_A\;\alpha_A\rangle$$
(an H-representation in $(P, \mu_I)$-space, from which $\mu_I$ is projected away)
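As a concrete reading of the existence condition in A2: the following minimal sketch checks whether one given P avoids sure loss, rather than computing the full H-representation ⟨Λ_A α_A⟩. It assumes NumPy/SciPy and the conventions above (gambles as the columns of K); the function name is ours.

```python
import numpy as np
from scipy.optimize import linprog

def avoids_sure_loss(K, P):
    """A2's condition: does some mu >= 0 with 1^T mu = 1 satisfy P <= K^T mu?"""
    n_omega = K.shape[0]
    res = linprog(
        np.zeros(n_omega),                       # feasibility only, objective irrelevant
        A_ub=-K.T, b_ub=-np.asarray(P, float),   # -K^T mu <= -P  <=>  K^T mu >= P
        A_eq=np.ones((1, n_omega)), b_eq=[1.0],
        bounds=[(0, None)] * n_omega,
    )
    return res.status == 0

# The first 3-by-2 example from section 6 below:
K = np.array([[1.0, 0.0], [0.5, 1.0], [0.0, 0.5]])
print(avoids_sure_loss(K, [0.2, 0.2]))  # True
print(avoids_sure_loss(K, [0.9, 0.9]))  # False: no linear prevision dominates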

4. Goal B: Characterizing ASL ≥ min
B1. Starting from ⟨Λ_A α_A⟩, append the bound P ≥ min, where min denotes the column vector of gamble minima:
$$\begin{bmatrix} \Lambda_A & \alpha_A \\ -I & -\min \end{bmatrix} \;\xrightarrow{\;\text{RR}\;}\; \langle\Lambda_B\;\alpha_B\rangle$$
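Since B1 is plain row-stacking, the step is a one-liner; a sketch, assuming NumPy and that some A-procedure already produced Λ_A and α_A (names ours):

```python
import numpy as np

def extend_to_goal_B(Lambda_A, alpha_A, K):
    """B1: append -I | -min rows encoding P >= min (componentwise per gamble);
    hand the result to an RR step (not shown) to remove redundancy."""
    m = K.shape[1]                      # number of gambles
    Lambda_B = np.vstack([Lambda_A, -np.eye(m)])
    alpha_B = np.concatenate([np.asarray(alpha_A, float), -K.min(axis=0)])
    return Lambda_B, alpha_B
```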

5. Goal C: Characterizing coherence
Based on the existence of S-dominating linear previsions:

C1. Analogous to A1, with an intersection over all S in 𝒮:
$$\forall S \in \mathcal{S}:\ \exists\,\mu_S,\nu_S \ge 0:\; P = K^\top\mu_S - S\nu_S \;\wedge\; \mathbf{1}^\top\mu_S = 1$$
$$\begin{bmatrix} K^\top & -S \\ \mathbf{1}^\top & \mathbf{0}^\top \end{bmatrix} \;\xrightarrow{\;\text{EN},\ \text{IS}_{S\in\mathcal{S}},\ \text{RR}\;}\; \langle\Lambda_C\;\alpha_C\rangle$$

C2. Analogous to A2, with an intersection over all S in 𝒮:
$$\forall S \in \mathcal{S}:\ \exists\,\mu_S \ge 0:\; SP \le SK^\top\mu_S \;\wedge\; \mathbf{1}^\top\mu_S = 1$$
$$\begin{bmatrix} S & -SK^\top & 0 \\ 0 & -I & 0 \\ \mathbf{0}^\top & \mathbf{1}^\top & 1 \\ \mathbf{0}^\top & -\mathbf{1}^\top & -1 \end{bmatrix} \;\xrightarrow{\;\text{PJ}_P,\ \text{IS}_{S\in\mathcal{S}},\ \text{RR}\;}\; \langle\Lambda_C\;\alpha_C\rangle$$

C3. Block matrix form of C2: abbreviating each S-system above as $\langle A_{S,P}\ A_{S,\mu_S}\ b\rangle$, stack them over all S in 𝒮, one block column per $\mu_S$:
$$\langle A_P\ A_\mu\ b\rangle := \begin{bmatrix} A_{I,P} & A_{I,\mu_I} & & & b \\ \vdots & & \ddots & & \vdots \\ A_{S,P} & & & A_{S,\mu_S} & b \\ \vdots & & & \ddots & \vdots \end{bmatrix} \;\xrightarrow{\;\text{PJ}_P,\ \text{RR}\;}\; \langle\Lambda_C\;\alpha_C\rangle$$
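C2 also reads directly as a finite feasibility check for a single P: loop the A2-style LP over the |𝒦| + 1 matrices in 𝒮. A minimal sketch, assuming NumPy/SciPy and the conventions above (function names ours):

```python
import numpy as np
from scipy.optimize import linprog

def sign_flip_matrices(m):
    """The set S: the identity plus the m matrices with one diagonal 1 flipped to -1."""
    yield np.eye(m)
    for i in range(m):
        S = np.eye(m)
        S[i, i] = -1.0
        yield S

def is_coherent(K, P):
    """C2: for every S, some mu_S >= 0 with 1^T mu_S = 1 must satisfy S P <= S K^T mu_S."""
    n_omega, m = K.shape
    P = np.asarray(P, float)
    for S in sign_flip_matrices(m):
        res = linprog(
            np.zeros(n_omega),
            A_ub=-(S @ K.T), b_ub=-(S @ P),   # S K^T mu >= S P
            A_eq=np.ones((1, n_omega)), b_eq=[1.0],
            bounds=[(0, None)] * n_omega,
        )
        if res.status != 0:
            return False
    return True
```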

6. Illustrations of Procedure C1
Two worked examples on Ω = {a, b, c}, with
$$K = \begin{bmatrix} 1 & 0 \\ \tfrac{1}{2} & 1 \\ 0 & \tfrac{1}{2} \end{bmatrix} \qquad\text{and}\qquad K = \begin{bmatrix} 1 & 0 & \tfrac{1}{2} \\ \tfrac{1}{2} & 1 & 0 \\ 0 & \tfrac{1}{2} & 1 \end{bmatrix}.$$
[Figure: for each example, the V-representation for avoiding sure S-loss is facet-enumerated for each S in 𝒮; the resulting sets, drawn in the simplex of previsions with vertices P_a, P_b, P_c over axes P g₁, P g₂ (and P g₃) with ticks at 0, ½, 1, are then intersected and redundancy is removed. The lower prevision min and the directions (1, 1), (−1, 1), (1, −1) are marked.]

I WANT YOU to ERADICATE INCOHERENCE utterly

1. Context & Goal
Given: an incoherent lower prevision P.
Goal: Find a coherent correction to it.

2. Bring within bounds
If P f ∉ [min f, max f] for some f in 𝒦, it is out of bounds. To bring it within bounds:
$$BP\,f := \begin{cases} \min f & P f \le \min f, \\ \max f & P f \ge \max f, \\ P f & \text{otherwise.} \end{cases}$$

[Figure: the region of out-of-bounds lower previsions, with P and Q mapped to their corrections BP and BQ.]
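In code, the bound map B is a componentwise clamp; a sketch under the same conventions (NumPy; K's columns are the gambles):

```python
import numpy as np

def bring_within_bounds(K, P):
    """B: clamp each assessment P f into [min f, max f], per gamble f."""
    return np.clip(np.asarray(P, float), K.min(axis=0), K.max(axis=0))
```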

3. Downward correction
As the downward correction of P we take the lower envelope of the maximal coherent dominated lower previsions (proposed earlier by Pelessoni & Vicig, following Weichselberger), so the nadir point DP of the MOLP (cf. C)
$$(\dagger)\quad \text{maximize } Q, \quad\text{subject to } \Lambda_C\,Q \le \alpha_C \ \text{ and } \ Q \le P,$$
or the MOLP (cf. C3)
$$(\ddagger)\quad \text{maximize } Q, \quad\text{subject to } A_Q\,Q + A_\mu\,\mu \le b \ \text{ and } \ Q \le P.$$

Some desirable properties:
• It is the maximal neutral correction ('no component tradeoffs').
• The imprecision of the correction is nondecreasing with incoherence.

[Figure: for P and Q, the sets of dominated lower previsions in the simplex, their extreme coherent dominated lower previsions, and the downward corrections DP and DQ.]

For the future: Can the computation be simplified for special classes of P?

4. Experiments
With the M3-solver we used, computation appears exponential in |𝒦|; using pre-computed constraints (†) is more efficient than not (‡):

[Figure: runtime [s] (log scale) against |𝒦| = 2, …, 10 for DP via (†) and DP via (‡), with |Ω| = 5 and σ ≈ 1/2.]

We expect other solvers, and certainly direct M2-solvers, to perform more efficiently, but could not test any yet.

5. Upward correction
The standard upward correction of P is its natural extension EP, the unique minimal pointwise dominating coherent lower prevision, so the solution to the MOLP (cf. C)
$$\text{minimize } EP, \quad\text{subject to } \Lambda_C\,EP \le \alpha_C \ \text{ and } \ EP \ge P,$$
or the MOLP (cf. C3)
$$(*)\quad \text{minimize } EP, \quad\text{subject to } A_{EP}\,EP + A_\mu\,\mu \le b \ \text{ and } \ EP \ge P.$$

• The problem becomes a plain LP by using the objective $\sum_{g\in\mathcal{K}} EP\,g$.
• (*) decomposes into a classical formulation of natural extension.
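The classical formulation that (*) decomposes into can be written as one plain LP per gamble: EP(f) is the largest α such that f − α pointwise dominates some nonnegative combination of the gambles g − P g. A sketch assuming NumPy/SciPy and that P avoids sure loss (otherwise the LP is unbounded); names ours:

```python
import numpy as np
from scipy.optimize import linprog

def natural_extension(K, P, f):
    """E P(f) = max { alpha : f - alpha >= sum_g lambda_g (g - P g), lambda >= 0 }."""
    n_omega, m = K.shape
    P = np.asarray(P, float)
    f = np.asarray(f, float)
    # Variables x = (alpha, lambda); per omega: alpha + sum_g lambda_g (g(omega) - P g) <= f(omega).
    c = np.concatenate([[-1.0], np.zeros(m)])         # linprog minimizes, so negate alpha
    A_ub = np.hstack([np.ones((n_omega, 1)), K - P])  # broadcasting gives g(omega) - P g
    res = linprog(c, A_ub=A_ub, b_ub=f,
                  bounds=[(None, None)] + [(0, None)] * m)
    return -res.fun if res.status == 0 else None      # None: unbounded, i.e. sure loss

# Example: extend the two-gamble assessment from panel 1 to the indicator of {a}
K = np.array([[1.0, 0.0], [0.5, 1.0], [0.0, 0.5]])
print(natural_extension(K, [0.2, 0.2], [1.0, 0.0, 0.0]))
```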

[Figure: the set of lower previsions dominating P in the simplex, with the natural extension EP and a dominating Q; there is no natural extension in case of sure loss.]

1. Representations
Any convex polyhedron in $\mathbb{R}^n$ can be described in two ways:
H-representation (intersection of half-spaces):
$$\langle A\ b\rangle := \{x \in \mathbb{R}^n : Ax \le b\}$$
with constraint matrix $A$ in $\mathbb{R}^{k\times n}$ and constraint vector $b$ in $\mathbb{R}^k$.
V-representation (convex hull of points and rays):
$$[V\ w] := \{x \in \mathbb{R}^n : x = V\mu \;\wedge\; \mu \ge 0 \;\wedge\; w^\top\mu = 1\}$$
with vector matrix $V$ in $\mathbb{R}^{n\times\ell}$, $\mu$ a vector in $\mathbb{R}^\ell$, and $w$ a vector in $(\mathbb{R}^\ell)_{\ge 0}$ whose components distinguish points (≠ 0) from rays (= 0).
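To make the two descriptions concrete: membership testing is a componentwise check for an H-representation but already a small LP for a V-representation. A sketch assuming NumPy/SciPy (function names ours):

```python
import numpy as np
from scipy.optimize import linprog

def in_H_rep(A, b, x, tol=1e-9):
    """x lies in <A b> iff A x <= b holds componentwise."""
    return bool(np.all(A @ x <= np.asarray(b) + tol))

def in_V_rep(V, w, x):
    """x lies in [V w] iff some mu >= 0 gives V mu = x and w^T mu = 1."""
    ell = V.shape[1]
    res = linprog(np.zeros(ell),
                  A_eq=np.vstack([V, np.reshape(w, (1, -1))]),
                  b_eq=np.append(np.asarray(x, float), 1.0),
                  bounds=[(0, None)] * ell)
    return res.status == 0
```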

2. Illustration
Here n = 2, k = 3, and ℓ = 4.
[Figure: a planar polyhedron with a constraint, a redundant constraint, a redundant point, an extreme ray, and a vertex labeled.]

I WANT YOU to juggle POLYHEDRA like there's no tomorrow

3. Tasks
RR. Removing redundancy: if j is the number of non-redundant constraints (or vectors), this requires solving k (or ℓ) linear programming problems of size n × j (see the sketch after this list).
EN. Moving between H- and V-representations: done using vertex/facet enumeration algorithms; polynomial in n, k, and ℓ.
PJ. Projection onto a lower-dimensional space: easy with V-representations, hard with H-representations.
IS. Intersection: easy with H-representations, hard with V-representations.
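A naive rendering of RR, assuming NumPy/SciPy: drop a constraint when maximizing its left-hand side subject to the remaining constraints cannot exceed its right-hand side. Real implementations refine this, but the one-LP-per-row structure is the point.

```python
import numpy as np
from scipy.optimize import linprog

def remove_redundancy(A, b, tol=1e-9):
    """RR for an H-representation <A b>: one LP per constraint row."""
    A = np.asarray(A, float); b = np.asarray(b, float)
    rows = list(range(len(b)))
    for i in range(len(b)):
        others = [j for j in rows if j != i]
        if not others:
            break
        res = linprog(-A[i], A_ub=A[others], b_ub=b[others],
                      bounds=[(None, None)] * A.shape[1])
        # Row i is implied by the others when the maximum of a_i . x stays <= b_i.
        if res.status == 0 and -res.fun <= b[i] + tol:
            rows.remove(i)
    return A[rows], b[rows]
```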

1. Formalization
Any multi-objective linear program (MOLP) can be put in the following form:
$$\text{maximize } y = Cx, \quad\text{subject to } Ax \le b \ \text{ and } \ x \ge 0,$$
with objective vector $y$ in $\mathbb{R}^m$, objective matrix $C$ in $\mathbb{R}^{m\times n}$, optimization vector $x$ in $\mathbb{R}^n$, constraint matrix $A$ in $\mathbb{R}^{k\times n}$, and constraint vector $b$ in $\mathbb{R}^k$.

2. Illustration
Here m = n = 2 and k = 4.
[Figure: a feasible set 𝒳 in (x₁, x₂)-space with objective rows C₁, C₂, and its image 𝒴 in (y₁, y₂)-space with 𝒴*, the ideal point ŷ and the nadir point y̌, where:]
• feasible optimization vectors $\mathcal{X} = \{x \in \mathbb{R}^n : Ax \le b \wedge x \ge 0\}$;
• C-undominated optimization vectors $\mathcal{X}^* = \{x \in \mathcal{X} : (\forall z \in \mathcal{X} : Cx \not< Cz)\}$, with vertices $\operatorname{ext}\mathcal{X}^*$;
• feasible objective vectors $\mathcal{Y} = \{Cx : x \in \mathcal{X}\}$;
• undominated objective vectors $\mathcal{Y}^* = \{Cx : x \in \mathcal{X}^*\}$, with vertices $\operatorname{ext}\mathcal{Y}^*$;
• the ideal point $\hat{y}$, with $\hat{y}_i = \max\{y_i : y \in \mathcal{Y}\}$;
• the nadir point $\check{y}$, with $\check{y}_i = \min\{y_i : y \in \mathcal{Y}^*\}$.

3. Tasks
Main computational tasks, in non-decreasing order of complexity (a sketch for M1 follows this list):
M1. Finding $\hat{y}$.
M2. Finding $\check{y}$.
M3. Finding $\operatorname{ext}\mathcal{Y}^*$ and characterizing $\mathcal{Y}^*$.
M4. Finding $\operatorname{ext}\mathcal{X}^*$.
M5. Characterizing $\mathcal{X}^*$.
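M1 is the easy one: each component of the ideal point comes from a single-objective LP over the same feasible set. A sketch assuming NumPy/SciPy and a feasible, bounded problem (the nadir point of M2 is genuinely harder, since it minimizes over 𝒴* rather than 𝒴):

```python
import numpy as np
from scipy.optimize import linprog

def ideal_point(C, A, b):
    """M1: y-hat_i = max { (C x)_i : A x <= b, x >= 0 }, one LP per objective row.
    Assumes each LP is feasible and bounded."""
    n = np.asarray(C).shape[1]
    return np.array([
        -linprog(-c_i, A_ub=A, b_ub=b, bounds=[(0, None)] * n).fun
        for c_i in np.asarray(C, float)
    ])
```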

I WANT YOU to grok MULTI-OBJECTIVE LINEAR PROGRAMMING

Erik Quaeghebeur
SYSTeMS Research Group, Ghent University
Decision Support Systems Group, Utrecht University