Transcript of: Allocating Resources, in the Future

Page 1:

Allocating Resources, in the Future

Sid Banerjee

School of ORIE

May 3, 2018

Simons Workshop on Mathematical and Computational Challenges in Real-Time Decision Making

Page 2:

online resource allocation: basic model

[figure: agents θ(1), θ(2), θ(3), …, θ(t), …, θ(T) arrive in sequence; remaining capacity B1 = 3]

• single resource, initial capacity B; T agents arrive sequentially

• agent t has type θ(t) = reward earned if agent is allocated

online resource allocation problem

allocate resources to maximize sum of rewards

1/18

Page 3:

online resource allocation: basic model

[figure: agents θ(1), θ(2), θ(3), …, θ(t), …, θ(T) arrive in sequence; remaining capacity B2 = 3]

• single resource, initial capacity B; T agents arrive sequentially

• agent t has type θ(t) = reward earned if agent is allocated

• principal makes irrevocable decisions

online resource allocation problem

allocate resources to maximize sum of rewards

1/18

Page 4:

online resource allocation: basic model

[figure: agents θ(1), θ(2), θ(3), …, θ(t), …, θ(T) arrive in sequence; remaining capacity B3 = 2]

• single resource, initial capacity B; T agents arrive sequentially

• agent t has type θ(t) = reward earned if agent is allocated

• principal makes irrevocable decisions; resource is non-replenishable

online resource allocation problem

allocate resources to maximize sum of rewards

1/18

Page 5:

online resource allocation: basic model

[figure: agents θ(1), θ(2), θ(3), …, θ(t), …, θ(T) arrive in sequence; remaining capacity Bt = 1]

• single resource, initial capacity B; T agents arrive sequentially

• agent t has type θ(t) = reward earned if agent is allocated

• principal makes irrevocable decisions; resource is non-replenishable

• assumptions on agent types θ(t): finite set of values {vi : i = 1, …, n} (e.g. θ(t) = vi with prob pi, i.i.d.)

• in general: arrivals can be time-varying, correlated

online resource allocation problem

allocate resources to maximize sum of rewards

1/18
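The basic model above can be sketched as a small simulation harness (an illustrative sketch, not code from the talk; the greedy baseline below is only for comparison and is not one of the talk's policies):

```python
import random

def simulate(policy, values, probs, B, T, seed=0):
    """Basic model: a single resource of capacity B and T agents arriving
    sequentially; agent t's type (= reward) is drawn i.i.d. from `values`
    with probabilities `probs`. `policy(t, budget, v)` returns True to
    irrevocably allocate one unit to the arriving agent."""
    rng = random.Random(seed)
    budget, reward = B, 0.0
    for t in range(T):
        v = rng.choices(values, weights=probs)[0]  # agent t's type
        if budget > 0 and policy(t, budget, v):
            budget -= 1        # resource is non-replenishable
            reward += v
    return reward

# naive baseline: accept every agent while capacity remains
greedy = lambda t, budget, v: True
total = simulate(greedy, values=[1.0, 2.0], probs=[0.5, 0.5], B=10, T=100)
```

Any admission policy (fluid RAC, the Bayes selector, etc.) can be plugged in as `policy` for side-by-side comparison.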


Page 7:

online resource allocation: first generalization

[figure: agents θ(1), θ(2), θ(3), …, θ(t), …, θ(T) arrive in sequence; θ ~ (Ai, vi) w.p. pi]

• d resources, initial capacities (B1, B2, …, Bd)

• T agents; each has type θi = (Ai, vi)

• Ai ∈ {0, 1}^d: resource requirement, vi: value

• agent has type θi with prob pi

also known as: network revenue management; single-minded buyer

2/18

Page 8:

online resource allocation: second generalization

[figure: agents θ(1), θ(2), θ(3), …, θ(t), …, θ(T) arrive in sequence; θ ~ (vi1, vi2) w.p. pi]

• d resources, initial capacities (B1, B2, …, Bd)

• T agents arrive sequentially

• each has type θ = (vi1, vi2, …, vid), wants a single resource

also known as: online weighted matching; unit-demand buyer

3/18

Page 9:

online allocation across fields

• related problems studied in Markov decision processes, online algorithms, prophet inequalities, revenue management, etc.

• informational variants:

distributional knowledge ≺ bandit settings ≺ adversarial inputs

4/18

Page 10:

the technological zeitgeist

the ‘deep’ learning revolution

vast improvements in machine learning for data-driven prediction

5/18

Page 11:

axiomatizing the zeitgeist

the deep learning revolution

vast improvements in machine learning for data-driven prediction

• axiom: have access to black-box predictive algorithms

core question of this talk

how does having such an oracle affect online resource allocation?

• TL;DR - new online allocation policies with strong regret bounds

• re-examining old questions leads to surprising new insights

6/18


Page 13:

bridging online allocation and predictive models

The Bayesian Prophet: A Low-Regret Framework for Online Decision Making

Alberto Vera & S.B. (2018)

https://ssrn.com/abstract_id=3158062

7/18

Page 14:

focus of talk: allocation with single-minded agents

[figure: agents θ(1), θ(2), θ(3), …, θ(t), …, θ(T) arrive in sequence; θ ~ (Ai, vi) w.p. pi]

• d resources, initial capacities (B1, B2, …, Bd)

• T agents arrive sequentially; each has type θ = (A, v)

• A = resource requirement, v = value

• agent has type θi with prob pi, i.i.d.

online allocation problem

allocate resources to maximize sum of rewards

8/18

Page 15:

performance measure

[figure: agents θ(1), θ(2), θ(3), …, θ(t), …, θ(T) arrive in sequence; θ ~ (Ai, vi) w.p. pi]

optimal policy

can be computed via dynamic programming

– requires exact distributional knowledge

– ‘curse of dimensionality’: |state-space| = T × B1 × … × Bd

– does not quantify cost of uncertainty

‘prophet’ benchmark

V^off: OFFLINE optimal policy; has full knowledge of θ(1), θ(2), …, θ(T)

9/18


Page 17:

performance measure: regret

prophet benchmark: V^off

• OFFLINE knows the entire type sequence {θ(t) : t = 1, …, T}

• for the network revenue management setting, V^off is given by

max Σi xi vi   s.t.   Σi Ai xi ≤ B,   0 ≤ xi ≤ Ni[1:T]

– Ni[1:T] = # of arrivals of type θi = (Ai, vi) over 1, 2, …, T

regret

E[Regret] = E[V^off − V^alg]

10/18
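For intuition, in the single-resource, unit-demand special case (d = 1, Ai = 1) the offline packing LP collapses to a sort: the prophet simply allocates to the B highest-value arrivals. A minimal sketch (the general multi-resource case needs an LP solver):

```python
def v_off_single(values, counts, B):
    """'Prophet' benchmark for d = 1 with unit demands: given the realized
    arrival counts Ni[1:T], the LP
        max sum_i xi vi  s.t.  sum_i xi <= B,  0 <= xi <= Ni[1:T]
    is solved by allocating to the B highest-value arrivals."""
    arrivals = sorted((v for v, c in zip(values, counts) for _ in range(c)),
                      reverse=True)
    return sum(arrivals[:B])

# two types (v = 1, 2), realized counts (4, 3), capacity 5:
# the top 5 arrivals are 2, 2, 2, 1, 1, so V^off = 8
```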

Page 18:

online allocation with prediction oracle

given black-box predictive oracle about performance of OFFLINE

(specifically, for any t, B, have statistical info about V^off[t,T])

• let πt = P[V^off[t,T] decreases if OFFLINE accepts the tth arrival]

Bayes selector

accept the tth arrival iff πt > 0.5

theorem [Vera & B, 2018]

(under mild tail bounds on Ni[t:T])

Bayes selector has E[Regret] independent of T, B1, B2, …, Bd

• arrivals can be time-varying, correlated; discounted rewards

• works for general settings (single-minded, unit-demand, etc.)

• can use approx oracle (e.g., from samples)

11/18
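One way to build an approximate oracle from samples, as the last bullet suggests: estimate by Monte Carlo whether OFFLINE would allocate to the current arrival. The sketch below is a hypothetical implementation for a single unit-cost resource (names and tie-breaking are my assumptions, not from the talk); there, OFFLINE allocates to arrival t exactly when its value is among the top `budget` values remaining:

```python
import random

def bayes_select(v, budget, t, T, values, probs, n_samples=400, seed=0):
    """Monte-Carlo sketch of the Bayes selector (single unit-cost resource):
    sample futures from the type distribution, estimate the probability that
    OFFLINE would allocate to the current arrival of value v (i.e. that fewer
    than `budget` strictly larger values arrive later), accept iff > 0.5."""
    rng = random.Random(seed)
    votes = 0
    for _ in range(n_samples):
        future = rng.choices(values, weights=probs, k=T - t - 1)
        if sum(f > v for f in future) < budget:  # v is in OFFLINE's top set
            votes += 1
    return votes / n_samples > 0.5
```

With two equally likely values {1, 2}, one remaining unit, and a long horizon, the selector accepts a value-2 arrival and rejects a value-1 arrival, as expected.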


Page 23:

standard approach: randomized admission control (RAC)

offline optimum V^off:

max Σi xi vi   s.t.   Σi Ai xi ≤ B,   0 ≤ xi ≤ Ni[1:T]

(upfront) fluid LP V^fl:

max Σi xi vi   s.t.   Σi Ai xi ≤ B,   0 ≤ xi ≤ E[Ni[1:T]] = T pi

– E[V^off] ≤ V^fl (via Jensen's inequality and concavity of V^off w.r.t. Ni)

– fluid RAC: accept type θi with prob xi / (T pi)

proposition

fluid RAC has E[Regret] = Θ(√T)

– [Gallego & van Ryzin ’97], [Maglaras & Meissner ’06]

– N.B. this is a static policy!

12/18
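For the single unit-cost resource, the fluid LP has a closed-form greedy solution: fill capacity with the highest-value types first, each capped at its expected count T pi. A sketch of the resulting static acceptance probabilities (function name is mine; the general case again needs an LP solver):

```python
def fluid_rac_probs(values, probs, B, T):
    """Static fluid RAC sketch (single unit-cost resource): the fluid LP
        max sum_i xi vi  s.t.  sum_i xi <= B,  0 <= xi <= T pi
    is solved by greedily filling capacity in decreasing value order; the
    policy then accepts an arriving type i with probability xi / (T pi),
    fixed once upfront and never updated."""
    order = sorted(range(len(values)), key=lambda i: -values[i])
    x, cap = [0.0] * len(values), float(B)
    for i in order:
        x[i] = min(T * probs[i], cap)   # cap each type at E[Ni[1:T]]
        cap -= x[i]
    return [x[i] / (T * probs[i]) for i in range(len(values))]
```

E.g. with values (1, 2), pi = 0.5 each, B = 60, T = 100: the high type is fully accepted and the low type is admitted with probability 0.2.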


Page 26:

RAC with re-solving

offline optimum V^off:

max Σi xi vi   s.t.   Σi Ai xi ≤ B,   0 ≤ xi ≤ Ni

re-solved fluid LP V^fl(t):

max Σi xi[t] vi   s.t.   Σi Ai xi[t] ≤ B[t],   0 ≤ xi[t] ≤ E[Ni[t:T]] = (T − t) pi

AC with re-solving: at time t, accept type θi with prob xi[t] / ((T − t) pi)

– regret improves to o(√T) [Reiman & Wang ’08]

– O(1) regret under (dual) non-degeneracy [Jasin & Kumar ’12]

– most results use V^fl as benchmark (including the ‘prophet inequality’)

proposition [Vera & B ’18]

for degenerate instances, V^fl − E[V^off] = Ω(√T)

13/18
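The re-solved variant is the same greedy fill as the upfront fluid LP, just recomputed at every time t with the remaining capacity B[t] and the remaining expected arrivals (a sketch for the single unit-cost resource; at t = 0 it reduces to the static policy):

```python
def resolved_rac_probs(values, probs, budget, t, T):
    """Re-solved fluid LP sketch (single unit-cost resource): greedily fill
    the remaining capacity B[t] in decreasing value order, capping each type
    at its remaining expected count E[Ni[t:T]] = (T - t) pi; accept an
    arriving type i with probability xi[t] / ((T - t) pi)."""
    expected = [(T - t) * p for p in probs]
    order = sorted(range(len(values)), key=lambda i: -values[i])
    x, cap = [0.0] * len(values), float(budget)
    for i in order:
        x[i] = min(expected[i], cap)
        cap -= x[i]
    return [x[i] / expected[i] for i in range(len(values))]
```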


Page 29:

Bayes selector for i.i.d. arrivals

Bayes selector

πt = P[V^off[t,T] decreases if OFFLINE accepts the tth arrival]

– accept the tth arrival iff πt > 0.5

re-solved fluid LP

max Σi xi[t] vi   s.t.   A x[t] ≤ B[t],   0 ≤ xi[t] ≤ E[Ni[t:T]]

the re-solved LP gives an approximate admission oracle

fluid Bayes selector

accept type θi iff xi[t] / E[Ni[t:T]] > 0.5

proposition [Vera & B, 2018]

fluid Bayes selector has E[Regret] ≤ 2 vmax Σi pi^(−1)

– proposed for multi-secretary by [Gurvich & Arlotto, 2017]

– NRM via partial resolving [Bumpensanti & Wang, 2018]

14/18
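The fluid Bayes selector differs from RAC in a single step: instead of randomizing with the LP's acceptance rate, it rounds that rate at 0.5. A sketch for the single unit-cost resource (same greedy fill of the re-solved fluid LP as before; function name is mine):

```python
def fluid_bayes_accept(i, values, probs, budget, t, T):
    """Fluid Bayes selector sketch (single unit-cost resource): solve the
    re-solved fluid LP by a greedy fill, then accept an arriving type i iff
    xi[t] / E[Ni[t:T]] > 0.5 -- i.e. round the LP's acceptance rate instead
    of randomizing with it, as fluid RAC does."""
    expected = [(T - t) * p for p in probs]   # E[Ni[t:T]] = (T - t) pi
    order = sorted(range(len(values)), key=lambda j: -values[j])
    x, cap = [0.0] * len(values), float(budget)
    for j in order:
        x[j] = min(expected[j], cap)
        cap -= x[j]
    return x[i] / expected[i] > 0.5
```

In the earlier example (values (1, 2), pi = 0.5 each, B = 60, T = 100), RAC admits the low type with probability 0.2, while the fluid Bayes selector rejects it outright.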


Page 34:

proof outline

the proof comprises two parts

1. compensated coupling: regret bound for the Bayes selector in a generic online decision problem

2. bound the compensation for online packing problems via LP sensitivity and measure concentration

15/18

Page 35:

the compensated coupling: make OFFLINE follow ONLINE

for any time t, budget B[t]

• let V^off(t, B[t]) ≔ value of OFFLINE starting from the current state

• for any action a, disagreement set Qt(a) ≔ set of sample-paths ω where a is sub-optimal (given B[t])

• can compensate OFFLINE to follow the same action a as ONLINE:

V^off(t, B[t]) ≤ R^alg_t + vmax · 1{ω ∈ Qt(a)} + V^off(t+1, B[t+1])

• iterating, we get

E[V^off] ≤ E[V^alg] + vmax Σt P[Qt(at)]

note: the Bayes selector picks at = argmin_a P[Qt(a)]

16/18
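Written out, the iteration in the bullets above telescopes as follows (a sketch; the terminal term vanishes since no arrivals remain after time T):

```latex
V^{\mathrm{off}}(1, B)
  \;\le\; \sum_{t=1}^{T} R^{\mathrm{alg}}_t
  \;+\; v_{\max} \sum_{t=1}^{T} \mathbf{1}\{\omega \in Q_t(a_t)\}
  \;+\; \underbrace{V^{\mathrm{off}}(T+1, B[T+1])}_{=\,0}
\quad\Longrightarrow\quad
\mathbb{E}[V^{\mathrm{off}}]
  \;\le\; \mathbb{E}[V^{\mathrm{alg}}]
  \;+\; v_{\max} \sum_{t=1}^{T} \mathbb{P}[Q_t(a_t)]
```

So the total regret is at most vmax times the expected number of times OFFLINE must be "compensated" for disagreeing with ONLINE, which the Bayes selector minimizes step by step.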


Page 39:

compensated coupling for single resource allocation

for any time t, budget B[t]

• if Bayes selector rejects type θi , assume OFFLINE front-loads θi

– error only if OFFLINE rejects all future θi

• if Bayes selector accepts type θi , assume OFFLINE back-loads θi

– error only if OFFLINE accepts all future θi

• claim: the smaller of the two events has probability e^(−c(T−t))

17/18


Page 41:

summary

online allocation via the Bayes selector

• new online allocation policy with horizon-independent regret

• a way to use black-box predictive algorithms

• generic regret bounds for any online decision problem

18/18

Page 42:

Thanks!

18/18