Energy-Aware Wireless Scheduling with Near Optimal Backlog and Convergence Time Tradeoffs (Michael J. Neely, INFOCOM 2015)



Slide 1: Title
Energy-Aware Wireless Scheduling with Near Optimal Backlog and Convergence Time Tradeoffs
Michael J. Neely, University of Southern California
INFOCOM 2015, Hong Kong
http://www-bcf.usc.edu/~mjneely
[Figure: arrivals A(t) feed a queue Q(t) that is served at rate μ(t).]

Slides 2-4: A Single Wireless Link
[Figure: arrivals A(t) enter queue Q(t), served at rate μ(t).]
Queue update: Q(t+1) = max[Q(t) + A(t) − μ(t), 0]
Uncontrolled: A(t) = random arrivals.
Controlled: μ(t) = bits served (depends on power use and channel state).

Slide 5: Random Channel States ω(t)
Observe ω(t) on slot t.
ω(t) ∈ {ω_0, ω_1, ω_2, ..., ω_M}, i.i.d. over slots.
π(ω_k) = Pr[ω(t) = ω_k]; the probabilities are unknown.

Slide 6: Opportunistic Power Allocation
p(t) = power decision on slot t, based on observation of ω(t).
Assume p(t) ∈ {0, 1} (on or off), so μ(t) = p(t)ω(t).
Time average expectation: p̄(t) = (1/t) Σ_{τ=0}^{t−1} E[p(τ)].

Slide 7: Stochastic Optimization Problem
Minimize: lim_{t→∞} p̄(t)
Subject to: lim_{t→∞} μ̄(t) ≥ λ, with p(t) ∈ {0, 1} for all slots t.
p* = ergodic optimal average power.
Definition: fix ε > 0. An ε-approximation holds on slot t if p̄(t) ≤ p* + ε and μ̄(t) ≥ λ − ε.
Challenge: the probabilities are unknown!

Slides 8-9: Prior Algorithms and Analysis
                                            E[Q]            T
Neely 03, 06 (DPP);
Georgiadis et al. 06;
Neely, Modiano, Li 05, 08:                  O(1/ε)          O(1/ε²)
Neely 07:                                   O(log(1/ε))     O(1/ε²)
Huang et al. 13 (DPP-LIFO):                 O(log²(1/ε))    O(1/ε²)
Li, Li, Eryilmaz 13, 15
(additional sample-path results):           O(1/ε)          O(1/ε²)
Huang et al. 14:                            O(1/ε^(2/3))    O(1/ε^(1+2/3))

Slide 10: Main Results
1. Lower bound: no algorithm can achieve convergence time better than Ω(1/ε).
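The queue model on Slides 2-6 can be sketched directly in code. The following is a minimal simulation of the recursion Q(t+1) = max[Q(t) + A(t) − μ(t), 0] with on/off power and i.i.d. channel states; the Bernoulli arrival process, the specific channel distribution, and all function names are illustrative assumptions, not part of the talk.

```python
import random

def simulate_queue(T, arrival_rate, channel_probs, policy, seed=0):
    """Simulate Q(t+1) = max[Q(t) + A(t) - mu(t), 0] with mu(t) = p(t)*omega(t).

    channel_probs: dict mapping channel state omega -> probability (i.i.d. over slots).
    policy: function (Q, omega) -> p in {0, 1} (on/off power decision).
    Returns the final queue size and the time-average power (1/T) * sum p(t).
    """
    rng = random.Random(seed)
    states = list(channel_probs)
    weights = [channel_probs[s] for s in states]
    Q, total_power = 0, 0
    for _ in range(T):
        A = 1 if rng.random() < arrival_rate else 0   # Bernoulli arrivals (assumed)
        omega = rng.choices(states, weights)[0]        # i.i.d. channel state
        p = policy(Q, omega)                           # power decision on this slot
        total_power += p
        Q = max(Q + A - p * omega, 0)                  # queue update from Slide 2
    return Q, total_power / T

# Example policy (not from the talk): always transmit when the queue is nonempty.
Q_final, avg_power = simulate_queue(
    T=10_000, arrival_rate=0.5,
    channel_probs={1: 0.5, 2: 0.3, 3: 0.2},
    policy=lambda Q, omega: 1 if Q > 0 else 0)
```

This naive policy stabilizes the queue but ignores the channel state; the point of the talk is how much power an opportunistic rule can save, and how long its time averages take to converge.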
2. Upper bound: a tighter analysis shows that the Drift-Plus-Penalty (DPP) algorithm achieves convergence time T = O(log(1/ε)/ε) and average queue size E[Q] ≤ O(log(1/ε)).

Slide 11: Part 1: Ω(1/ε) Lower Bound for All Algorithms
Example system: ω(t) ∈ {1, 2, 3}, with Pr[ω(t) = 3], Pr[ω(t) = 2], Pr[ω(t) = 1] unknown.
Proof methodology:
Case 1: Pr[transmit | ω(0) = 2] > 1/2.
  o Assume Pr[ω(t) = 3] = Pr[ω(t) = 2] = ε.
  o Optimally compensate for the mistake on slot 0.
Case 2: Pr[transmit | ω(0) = 2] ≤ 1/2.
  o Assume different probabilities.
  o Optimally compensate for the mistake on slot 0.

Slides 12-16: Case 1: fix λ = 1, ε > 0
[Figure, built across five slides: power E[p(t)] versus rate E[μ(t)] on [0,1] × [0,1], with the h(λ) curve and optimal operating point X. The slot-0 point (E[μ(0)], E[p(0)]) lies in a region A bounded away from X, and optimal compensation back to X requires time Ω(1/ε).]
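The compensation step in both cases can be made concrete with a short calculation: a constant-size expected mistake on slot 0 dilutes only linearly in the running time average, which is where the Ω(1/ε) bound comes from. This is a hedged sketch of the idea, not the talk's actual proof; the constant c is illustrative.

```latex
% Sketch: why a slot-0 mistake forces Omega(1/epsilon) convergence time.
% Suppose the decision on slot 0 incurs an expected power excess of at
% least a constant c > 0 relative to the optimal operating point:
%   E[p(0)] \ge p^* + c.
% Even if every later slot is exactly optimal, the time average obeys
\bar{p}(t) \;=\; \frac{1}{t}\sum_{\tau=0}^{t-1}\mathbb{E}[p(\tau)]
          \;\ge\; p^* + \frac{c}{t}.
% The epsilon-approximation \bar{p}(t) \le p^* + \epsilon therefore needs
%   c/t \le \epsilon, i.e. t \ge c/\epsilon = \Omega(1/\epsilon).
```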
Slides 17-18: Part 2: Upper Bound
Channel states ω_0 < ω_1 < ω_2 < ... < ω_M.
General h(λ) curve (piecewise linear).
[Figure: power E[p(t)] versus rate E[μ(t)] with the piecewise-linear h(λ) curve and optimal point p*. Each vertex separates the threshold rules "transmit iff ω(t) ≥ ω_{k−1}" and "transmit iff ω(t) ≥ ω_k".]

Slides 19-20: Drift-Plus-Penalty Algorithm (DPP)
Drift: Δ(t) = Q(t+1)² − Q(t)².
Observe ω(t), then choose p(t) to minimize the drift plus weighted penalty:
Δ(t) + V·p(t)
The algorithm reduces to:
p(t) = 1 if Q(t)ω(t) ≥ V
p(t) = 0 otherwise

Slide 21: Drift Analysis of DPP
[Figure: the Q(t) axis with thresholds 0 < ... < V/ω_{k+1} < V/ω_k < V/ω_{k−1} < ...; below V/ω_k the drift is positive and above it the drift is negative, so Q(t) is attracted to the threshold separating "transmit iff ω(t) ≥ ω_k" from "transmit iff ω(t) ≥ ω_{k−1}".]

Slide 22: Useful Drift Lemma (with transients)
Suppose Z(t) has negative drift bounded by −β < 0.
Lemma: E[e^{rZ(t)}] ≤ D + (e^{rZ(0)} − D)ρ^t for some ρ ∈ (0, 1), where D bounds the steady-state term and (e^{rZ(0)} − D)ρ^t is a geometrically decaying transient.
Apply 1: Z(t) = Q(t).
Apply 2: Z(t) = V/ω_k − Q(t).

Slides 23-24: After a transient time O(V):
[Figure: the same Q(t) axis with thresholds V/ω_{k+1} < V/ω_k < V/ω_{k−1}; "red" intervals mark excursions of Q(t) away from the attracting threshold.]
Pr[red intervals] = O(e^{−cV}).
Choose V = log(1/ε), so that Pr[red] = O(ε).

Slide 25: Analytical Result
But the queue is stable, so E[μ̄] = λ + O(ε). Timesharing appropriately then gives average power within O(ε) of p*, with:
E[Q(t)] ≤ O(log(1/ε))
T ≤ O(log(1/ε)/ε)

Slide 26: Simulation: E[p] versus queue size.
Slide 27: Simulation: E[p] versus time.
Slide 28: Non-ergodic simulation (adaptive to changes).

Slide 29: Conclusions
Fundamental lower bound on convergence time:
  o Unknown probabilities.
  o A Cramer-Rao-like bound for controlled queues.
Tighter drift analysis for the DPP algorithm:
  o ε-approximation to optimal power.
  o Queue size O(log(1/ε)) [optimal].
  o Convergence time O(log(1/ε)/ε) [near optimal].
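The threshold form of DPP on Slides 19-20 is simple enough to state in a few lines of code. The sketch below plugs the rule "p(t) = 1 iff Q(t)·ω(t) ≥ V" with V = log(1/ε) into a single-link simulation; the arrival process, channel distribution, horizon, and all names are illustrative assumptions, not the talk's experimental setup.

```python
import math
import random

def dpp_policy(V):
    """Drift-plus-penalty rule from Slide 20: transmit iff Q(t)*omega(t) >= V."""
    return lambda Q, omega: 1 if Q * omega >= V else 0

def run_dpp(T, arrival_rate, channel_probs, epsilon, seed=0):
    """Run DPP with V = log(1/epsilon); return time-average power and queue size."""
    V = math.log(1.0 / epsilon)
    rng = random.Random(seed)
    states = list(channel_probs)
    weights = [channel_probs[s] for s in states]
    policy = dpp_policy(V)
    Q = 0
    total_power = total_queue = 0
    for _ in range(T):
        A = 1 if rng.random() < arrival_rate else 0   # Bernoulli arrivals (assumed)
        omega = rng.choices(states, weights)[0]        # i.i.d. channel state
        p = policy(Q, omega)                           # threshold decision
        total_power += p
        total_queue += Q
        Q = max(Q + A - p * omega, 0)                  # Q(t+1) = max[Q + A - mu, 0]
    return total_power / T, total_queue / T

avg_power, avg_queue = run_dpp(
    T=50_000, arrival_rate=0.5,
    channel_probs={1: 0.5, 2: 0.3, 3: 0.2}, epsilon=0.05)
```

Note how the rule needs no knowledge of the channel probabilities: the queue backlog Q(t) itself supplies the "urgency" weight, and larger V trades a bigger backlog (which the analysis bounds by O(log(1/ε)) for V = log(1/ε)) for average power closer to p*.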