Distributed Stochastic Optimization via Correlated Scheduling

Transcript of Distributed Stochastic Optimization via Correlated Scheduling

Distributed Stochastic Optimization via Correlated Scheduling
Michael J. Neely, University of Southern California
http://www-bcf.usc.edu/~mjneely

[Slide figure: sensors 1 and 2 send observations ω1(t) and ω2(t) to a fusion center.]

The title is "Distributed Stochastic Optimization via Correlated Scheduling." Mike Neely regrets not being able to be here; he had unexpected passport difficulties. Xiaojun Lin will present the talk in his place.

Distributed sensor reports
ωi(t) = 0/1 if sensor i observes the event on slot t
Pi(t) = 0/1 if sensor i reports on slot t
Utility: U(t) = min[P1(t)ω1(t) + (1/2)P2(t)ω2(t), 1]

[Slide figure: sensors 1 and 2 report ω1(t) and ω2(t) to the fusion center; redundant reports do not increase utility.]

This paper looks at a problem of many distributed sensor devices reporting observations to a fusion center.

The system operates over time slots t in {0, 1, 2, ...}. "Events" may or may not occur on each slot. Sensors may or may not observe the events. Sensors use power to report their observations to a fusion center.

This slide illustrates a simple 2-sensor example. Every slot, an event may or may not occur (independent over slots). ωi(t) = observation variable for sensor i on slot t: ωi(t) = 0 if sensor i did not observe an event, and ωi(t) = 1 if sensor i did observe an event.

Pi(t) = reporting power variable on slot t for sensor i: Pi(t) = 0 means sensor i does not report; Pi(t) = 1 means sensor i reports and uses 1 unit of power.

Problem statement:

Maximize:   Ū  (time average utility)
Subject to: P̄1 ≤ c
            P̄2 ≤ c

The fusion center gets utility U(t). U(t) is a general, possibly non-separable, function of the reports of each sensor.

An example function is shown here. In this example, the fusion center trusts sensor 1 more than sensor 2:
U(t) = 1 if sensor 1 observes an event and reports (no additional increase if sensor 2 redundantly reports).
U(t) = 1/2 if sensor 2 observes an event and reports, without sensor 1 reporting.
U(t) = 0 if nobody reports anything.
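For concreteness, this example utility can be written as a one-line function; this is just a sketch of the formula above, and the function name is ours:

```python
def example_utility(P1, w1, P2, w2):
    """U(t) = min[P1*w1 + (1/2)*P2*w2, 1]: a sensor-1 report is worth 1, a sensor-2 report is worth 1/2."""
    return min(P1 * w1 + 0.5 * P2 * w2, 1.0)
```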

Main idea: REDUNDANT REPORTS DO NOT BRING ADDED UTILITY.

The reporting decisions occur every slot. The goal is to design a distributed reporting algorithm to maximize time average utility subject to time average power constraints at each sensor.

Main ideas for this example:
The utility function is non-separable.
Redundant reports do not bring extra utility.
A centralized algorithm would never send redundant reports (it wastes power).
A distributed algorithm faces these challenges: sensor 2 does not know if sensor 1 observed an event, and sensor 2 does not know if sensor 1 reported anything.

Assumed structure
[Slide figure: a timeline t = 0, 1, 2, 3, 4, with "agree on plan" marked before time 0.]
Coordinate on a plan before time 0.

Distributively implement the plan after time 0.

How do we define optimality over the class of all distributed strategies? Let's assume the following structure: the sensors can coordinate on a plan before time 0. However, after time 0, they must distributively implement this plan.

Example plans
Example plan:
Sensor 1: if t is even, do not report; if t is odd, report if ω1(t) = 1.
Sensor 2: if t is even, report with probability p if ω2(t) = 1; if t is odd, do not report.

Here is a plan that ensures no redundant reports:
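A minimal sketch of this example plan in code (the function names and the default value of p are illustrative):

```python
import random

def sensor1_plan(t, w1):
    """Sensor 1: never report on even slots; on odd slots, report iff it observed the event."""
    return 1 if (t % 2 == 1 and w1 == 1) else 0

def sensor2_plan(t, w2, p=0.5):
    """Sensor 2: on even slots, report with probability p if it observed the event; never on odd slots."""
    return 1 if (t % 2 == 0 and w2 == 1 and random.random() < p) else 0
```

Because the two sensors are active on disjoint slot parities, they never report on the same slot, so the plan wastes no power on redundant reports.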

Common source of randomness
Example: 1 slot = 1 day. Each person looks at the Boston Globe every day:
If the first letter is a T, use Plan 1.
If the first letter is an S, use Plan 2.
Etc.

[Slide figure: Boston Globe front pages for Day 1 and Day 2.]

Here is another type of plan that uses an important concept: a common source of randomness. I illustrate the idea with a simple story. Two spies are in Boston. They want to coordinate, but they cannot directly talk. They both have access to the Boston Globe newspaper, and they use it as a common source of randomness: if the first letter is T, carry out Plan 1; if the first letter is S, carry out Plan 2.

Thus, the actions are correlated through a common source of randomness. It turns out that using a common source of randomness is fundamental: COMMON RANDOMNESS IS CRUCIAL TO ACHIEVE OPTIMALITY FOR THE DISTRIBUTED REPORTING PROBLEM.

Specific example
Assume:
Pr[ω1(t)=1] and Pr[ω2(t)=1] are given
ω1(t), ω2(t) are independent
Power constraint c = 1/3

Approach 1: Independent reporting
If ω1(t) = 1, sensor 1 reports with probability θ1.
If ω2(t) = 1, sensor 2 reports with probability θ2.

Optimizing θ1, θ2 gives Ū = 4/9 ≈ 0.44444.

Here is a specific 2-sensor example with specific numbers. It compares independent reporting (without a common source of randomness) to correlated reporting (with a common source of randomness).

Specific probabilities for the observations ωi(t) are given here. The power constraint at each sensor is 1/3.

Approach 1: Independent reporting.
- If ω1(t) = 1, sensor 1 reports with probability θ1.
- If ω2(t) = 1, sensor 2 reports with probability θ2.
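As a sanity check, here is a minimal sketch of the independent-reporting optimization. The observation probabilities below are illustrative stand-ins (the slide's exact values are not reproduced in this transcript); whenever both Pr[ωi(t)=1] are at least 1/3, each sensor simply spends its full power budget and the optimum works out to 4/9:

```python
q1, q2 = 0.5, 0.5   # illustrative Pr[w_i(t)=1]; any values >= 1/3 give the same optimum of 4/9
c = 1 / 3           # per-sensor time-average power budget

def independent_utility(theta1, theta2):
    """E[min(P1*w1 + 0.5*P2*w2, 1)] when sensor i reports w.p. theta_i whenever w_i = 1."""
    r1 = q1 * theta1    # probability sensor 1 reports (utility contribution 1)
    r2 = q2 * theta2    # probability sensor 2 reports (utility contribution 1/2)
    return r1 + (1 - r1) * 0.5 * r2

# The utility is increasing in theta1 and theta2, so the power constraints
# q_i * theta_i <= c are tight at the optimum: theta_i* = c / q_i.
print(independent_utility(c / q1, c / q2))   # 0.4444... = 4/9
```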

Approach 2: Correlated reporting
Pure strategy 1: Sensor 1 reports if and only if ω1(t) = 1. Sensor 2 does not report.

Pure strategy 2: Sensor 1 does not report. Sensor 2 reports if and only if ω2(t) = 1.

Pure strategy 3: Sensor 1 reports if and only if ω1(t) = 1. Sensor 2 reports if and only if ω2(t) = 1.

To introduce correlated reporting, let us first define three "pure strategies".

Approach 2: Correlated reporting
X(t) = iid random variable (commonly known):
Pr[X(t)=1] = θ1
Pr[X(t)=2] = θ2
Pr[X(t)=3] = θ3

On slot t: sensors observe X(t). If X(t) = k, sensors use pure strategy k.
Optimizing θ1, θ2, θ3 gives Ū = 23/48 ≈ 0.47917.

Now let X(t) be a common source of randomness on each slot t (such as a pseudo-random sequence pre-installed at each sensor). Suppose X(t) has iid outcomes in the set {1, 2, 3}, with probabilities θ1, θ2, θ3.

STRATEGY: On each slot t, sensors observe X(t). They use pure strategy k on slot t if and only if X(t) = k.

Optimizing θ1, θ2, θ3 for this class gives Ū = 23/48 ≈ 0.47917. THIS CLASS OF CORRELATED REPORTING STRATEGIES OUTPERFORMS INDEPENDENT REPORTING. (A simulation sketch of the correlated scheme follows the summary below.)

Summary of approaches

Strategy               | Time-average utility
Independent reporting  | 0.44444
Correlated reporting   | 0.47917
Centralized reporting  | 0.5

Summary for this simple example:
1) We saw that independent reporting gives utility 0.44444.
2) We saw that correlated reporting gives utility 0.47917.
3) It can be shown that the optimal centralized algorithm gives utility 0.5.
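The following simulation sketch shows the mechanics of the correlated scheme: an identical pseudo-random sequence X(t) is pre-installed at both sensors, so they pick the same pure strategy on every slot without communicating. The observation and mixing probabilities below are illustrative placeholders, not the optimized values behind the 23/48 figure:

```python
import random

q1, q2 = 0.5, 0.5        # illustrative Pr[w_i(t)=1]
theta = [1/3, 1/3, 1/3]  # illustrative Pr[X(t)=k] for k = 1, 2, 3 (not the talk's optimized values)
T = 100_000

# Common source of randomness: the same seed gives the same sequence X(0), X(1), ...
# at both sensors, so no communication is needed to agree on the pure strategy.
shared = random.Random(12345)
X = [shared.choices([1, 2, 3], weights=theta)[0] for _ in range(T)]

def report1(k, w1):  # sensor 1 reports iff w1 = 1 under pure strategies 1 and 3
    return int(k in (1, 3) and w1 == 1)

def report2(k, w2):  # sensor 2 reports iff w2 = 1 under pure strategies 2 and 3
    return int(k in (2, 3) and w2 == 1)

env = random.Random(0)
U = P1 = P2 = 0.0
for t in range(T):
    w1, w2 = int(env.random() < q1), int(env.random() < q2)
    p1, p2 = report1(X[t], w1), report2(X[t], w2)  # each sensor uses only X[t] and its own observation
    U += min(p1 * w1 + 0.5 * p2 * w2, 1.0)
    P1 += p1
    P2 += p2

print(U / T, P1 / T, P2 / T)  # time-average utility and per-sensor power
```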

The question remains: is the correlated reporting algorithm optimal, or can we do better with some other distributed algorithm?

Summary of approaches (revisited): the same table, with one addition. The correlated reporting value 0.47917 is optimal over all distributed strategies!

It can be shown that this correlated reporting algorithm is optimal over all possible distributed strategies. Thus, there is a fundamental performance gap between the best distributed algorithm and the best centralized algorithm.

General distributed optimization

Maximize:   Ū
Subject to: P̄k ≤ c for k in {1, ..., K}

ω(t) = (ω1(t), ..., ωN(t))
π(ω) = Pr[ω(t) = (ω1, ..., ωN)]
α(t) = (α1(t), ..., αN(t))
U(t) = u(ω(t), α(t))
Pk(t) = pk(ω(t), α(t))

That was a simple 2-sensor example with binary observations and binary reporting decisions. The general problem is this:

Maximize average utility, subject to an average power constraint at each sensor.
(ω1(t), ..., ωN(t)) = observation vector for slot t (iid over slots, possibly correlated entries on each slot).
π(ω) = known probabilities for these observation vectors.
(α1(t), ..., αN(t)) = action vector for slot t.
Utility U(t) and power Pk(t) are general functions of the observation vector ω(t) and the action vector α(t).

Pure strategies
A pure strategy is a deterministic vector-valued function:

g(ω) = (g1(ω1), g2(ω2), ..., gN(ωN))

Let M = # pure strategies:

M = |A1|^|Ω1| × |A2|^|Ω2| × ... × |AN|^|ΩN|
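For the earlier two-sensor example, where each sensor has a binary observation and a binary action, this count is small; here is a quick sketch of the enumeration:

```python
from itertools import product

Omega_i = [0, 1]   # each sensor's observation alphabet
A_i = [0, 1]       # each sensor's action alphabet (report or not)

# A pure strategy for one sensor is a function g_i: Omega_i -> A_i,
# i.e. one action choice for each possible observation.
per_sensor = list(product(A_i, repeat=len(Omega_i)))   # |A_i|^|Omega_i| = 2^2 = 4 functions
M = len(per_sensor) ** 2                               # two sensors: 4 * 4 = 16 joint pure strategies
print(len(per_sensor), M)                              # -> 4 16
```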

Optimality Theorem
There exist:
K+1 pure strategies g(m)(ω)
Probabilities θ1, θ2, ..., θK+1
such that the following distributed algorithm is optimal:

X(t) = iid, with Pr[X(t)=m] = θm.
Each user observes X(t).
If X(t) = m, use strategy g(m)(ω).

LP and complexity reduction
The probabilities can be found by an LP.

Unfortunately, the LP has M variables

If (ω1(t), ..., ωN(t)) are mutually independent and the utility function satisfies a preferred action property, the complexity can be reduced.

Example: N = 2 users, |A1| = |A2| = 2

-- Old complexity = 2^(|Ω1|+|Ω2|)
-- New complexity = (|Ω1|+1)(|Ω2|+1)
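A sketch of the LP for the two-sensor example, using scipy.optimize.linprog: the variables are the probabilities θm placed on the M = 16 joint pure strategies, the objective is the expected utility, and the constraints are the expected-power budgets. The observation probabilities are illustrative placeholders:

```python
from itertools import product
import numpy as np
from scipy.optimize import linprog

q = [0.5, 0.5]     # illustrative Pr[w_i(t)=1]
c_avg = 1 / 3      # per-sensor average power budget

# A pure strategy for sensor i is a pair (action if w_i=0, action if w_i=1).
per_sensor = list(product([0, 1], repeat=2))
strategies = list(product(per_sensor, repeat=2))   # M = 16 joint pure strategies

def averages(strategy):
    """Expected (utility, power1, power2) of one joint pure strategy."""
    g1, g2 = strategy
    u = p1 = p2 = 0.0
    for w1, w2 in product([0, 1], repeat=2):
        pr = (q[0] if w1 else 1 - q[0]) * (q[1] if w2 else 1 - q[1])
        a1, a2 = g1[w1], g2[w2]
        u += pr * min(a1 * w1 + 0.5 * a2 * w2, 1.0)
        p1 += pr * a1
        p2 += pr * a2
    return u, p1, p2

vals = np.array([averages(s) for s in strategies])

# max sum_m theta_m*u_m  s.t.  sum_m theta_m*p_{k,m} <= c_avg,  sum_m theta_m = 1,  theta_m >= 0
res = linprog(
    c=-vals[:, 0],                          # linprog minimizes, so negate the utility
    A_ub=vals[:, 1:].T, b_ub=[c_avg, c_avg],
    A_eq=np.ones((1, len(strategies))), b_eq=[1.0],
    bounds=(0, 1),
)
print(-res.fun, res.x.round(3))             # optimal value and the mixing probabilities theta
```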

Discussion of Theorem 1
Theorem 1 solves the problem for distributed scheduling, but:
It requires an offline LP to be solved before time 0.

It requires full knowledge of the π(ω) probabilities.

Online Dynamic Approach
We want an algorithm that:
Operates online.
Does not need the π(ω) probabilities.
Can adapt when these probabilities change.

Such an algorithm must use feedback. Assume the feedback has a fixed delay D, with D > 1. Such feedback cannot improve average utility beyond the distributed optimum.

Lyapunov optimization approach
Define K virtual queues Q1(t), ..., QK(t).

Every slot t, observe the queues and choose strategy m in {1, ..., M} to maximize a weighted sum of the queues.

Update queues with delayed feedback:

Qk(t+1) = max[Qk(t) + Pk(t-D) - c, 0]

Virtual queue interpretation: Pk(t-D) acts as the arrival and c as the service. If the queue is stable, then the time average power is ≤ c.
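A generic sketch of this online approach. The slide only says "maximize a weighted sum of the queues"; the standard drift-plus-penalty form used here scores each strategy m by V*u_m - sum_k Q_k*p_km, where u_m and p_km are that strategy's expected utility and power (in the online setting these would have to be estimated from the delayed feedback). The toy numbers are illustrative:

```python
import numpy as np

def lyapunov_schedule(u, p, c_avg, V=20.0, D=2, T=10_000):
    """Drift-plus-penalty sketch with virtual queues and delay-D feedback."""
    K = p.shape[1]
    Q = np.zeros(K)            # virtual queues Q_1(t), ..., Q_K(t)
    power_history = []         # realized powers, revealed with delay D
    for t in range(T):
        m = int(np.argmax(V * u - p @ Q))   # strategy maximizing V*u_m - sum_k Q_k*p_{k,m}
        power_history.append(p[m])          # (sketch: use the expected power as the feedback sample)
        if t >= D:                          # delayed feedback from slot t-D
            Q = np.maximum(Q + power_history[t - D] - c_avg, 0.0)   # Qk(t+1) = max[Qk + Pk(t-D) - c, 0]
    return Q

# Illustrative toy instance: 3 strategies, 2 power constraints with budget 1/3 each.
u = np.array([0.5, 0.25, 0.625])                     # expected utility of each strategy
p = np.array([[0.5, 0.0], [0.0, 0.5], [0.5, 0.5]])   # expected power per sensor for each strategy
print(lyapunov_schedule(u, p, c_avg=np.array([1/3, 1/3])))   # final virtual queue values
```

Stability of each virtual queue then forces the corresponding time average power to stay at or below c, which is exactly the virtual queue interpretation above.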

Separable problems
If the utility and penalty functions are a separable su