Advanced Algorithms (6311) Gautam Das
Notes: 04/28/2009 Ranganath M R.

Page 1: Advanced Algorithms (6311) Gautam Das

Advanced Algorithms (6311) Gautam Das

Notes: 04/28/2009 Ranganath M R.

Page 2: Advanced Algorithms (6311) Gautam  Das

Outline

• Tail Inequalities
• Markov's Inequality
• Chebyshev's Inequality
• Chernoff Bounds

Page 3: Advanced Algorithms (6311) Gautam  Das

Tail Inequalities

Markov's inequality says that for a nonnegative random variable X with mean µ, the probability of X being greater than t is

P(X > t) ≤ µ/t
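Markov's bound can be checked empirically. A minimal sketch (Python and the fair-die example are my additions, not from the lecture):

```python
import random

# Empirical check of Markov's inequality P(X > t) <= µ/t for a
# nonnegative random variable: here X is a fair six-sided die (µ = 3.5).
random.seed(0)
samples = [random.randint(1, 6) for _ in range(100_000)]
mu = sum(samples) / len(samples)

t = 5
empirical = sum(1 for x in samples if x > t) / len(samples)
markov_bound = mu / t

# The empirical tail (about 1/6) is well below the bound µ/t = 0.7;
# Markov's inequality is valid but often loose.
print(f"P(X > {t}) ≈ {empirical:.3f}, Markov bound µ/t = {markov_bound:.3f}")
```

The gap between the empirical tail and the bound illustrates why the sharper inequalities on the following slides are useful.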

Page 4: Advanced Algorithms (6311) Gautam  Das

• Chebyshev's inequality: P(|X − µ| ≥ t·σ) ≤ 1/t²
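Chebyshev's bound can be checked the same way. A small sketch (Python and the Exponential(1) example are my additions; for Exp(1), µ = σ = 1):

```python
import random

# Empirical check of Chebyshev's inequality P(|X - µ| >= t·σ) <= 1/t²
# using an Exponential(1) random variable, whose mean and standard
# deviation are both exactly 1.
random.seed(1)
samples = [random.expovariate(1.0) for _ in range(100_000)]

mu, sigma = 1.0, 1.0   # exact values for Exp(1)
t = 2.0
empirical = sum(1 for x in samples if abs(x - mu) >= t * sigma) / len(samples)
bound = 1 / t**2

# Here the true tail is about e^-3 ≈ 0.05, far below the bound 0.25.
print(f"P(|X-µ| ≥ {t}σ) ≈ {empirical:.4f}, Chebyshev bound 1/t² = {bound:.2f}")
```

Chebyshev only uses the variance, so it is distribution-free but still loose; this motivates the exponentially sharper Chernoff bounds that follow.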

Page 5: Advanced Algorithms (6311) Gautam  Das

Chernoff bounds

• This applies to sums of independent 0/1 random variables, where we can obtain much sharper tail inequalities (exponentially sharp). The more the trials are repeated, the better the chances of getting very accurate results. Let's see how this is possible.

• Example: imagine we have n coins (X1, …, Xn), and let the probability of heads for coin i be pi (p1, …, pn). Now the random variable X = ∑ni=1 Xi, and µ = E[X] = ∑ni=1 pi in general.

Page 6: Advanced Algorithms (6311) Gautam  Das

• Some special cases
– All coins are fair (unbiased), i.e. pi = ½.
– µ = n · pi = n · 1/2 = n/2, and σ = √n/2. For example, for n = 100,

σ = √100/2 = 5
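A quick simulation confirms the special case µ = n/2 = 50 and σ = √n/2 = 5 for n = 100 fair coins (Python is my addition, not from the lecture):

```python
import random
import statistics

# Simulate many runs of flipping n = 100 fair coins and check that the
# number of heads has mean n/2 = 50 and standard deviation sqrt(n)/2 = 5.
random.seed(2)
n, trials = 100, 20_000
totals = [sum(random.randint(0, 1) for _ in range(n)) for _ in range(trials)]

mean_val = statistics.mean(totals)
stdev_val = statistics.stdev(totals)
print(f"mean ≈ {mean_val:.2f} (expected 50)")
print(f"stdev ≈ {stdev_val:.2f} (expected 5)")
```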

Page 7: Advanced Algorithms (6311) Gautam  Das

• The Chernoff bounds are given by (for δ > 0):
– Upper tail:

• P(X − µ ≥ δµ) ≤ [e^δ/(1 + δ)^(1+δ)]^µ -----------------eqn 1
– Lower tail:

• P(µ − X ≥ δµ) ≤ e^(−µδ²/2)

– Here µ is the power of the right-hand-side expression, and e^δ/(1 + δ)^(1+δ) < 1 for δ > 0. Hence if more trials are taken, µ = n/2 increases and the bound shrinks exponentially, so we get more accurate results.

– Example problem to illustrate this:
• The probability of a team winning a game is 1/3.
• What is the probability that the team will win 50 out of 100

games?

Page 8: Advanced Algorithms (6311) Gautam  Das

• µ = n · p = 100 · 1/3 = 100/3
• δ = (no. of games to win − µ)/µ
• = (50 − 100/3)/(100/3) = ½
• Now to calculate the probability of winning, we

need to substitute all these into eqn 1.
– [e^(1/2)/(3/2)^(3/2)]^(100/3) ≈ 0.027

– Here if we increase the number of games (in general, the number of trials), µ increases, and since the expression e^δ/(1 + δ)^(1+δ)

evaluates to less than 1, the bound becomes smaller. Hence we get more accurate results when more trials are done.
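The worked example can be reproduced numerically, and compared against the exact binomial tail it bounds (Python and the exact-tail comparison are my additions):

```python
import math

# The slide's example: win probability p = 1/3 over n = 100 games,
# bounding P(X >= 50) with Chernoff eqn 1, where delta = 1/2.
n, p = 100, 1 / 3
mu = n * p                     # 100/3
delta = (50 - mu) / mu         # 1/2

# Chernoff upper-tail bound: [e^δ / (1+δ)^(1+δ)]^µ
chernoff = (math.e**delta / (1 + delta) ** (1 + delta)) ** mu
print(f"Chernoff bound: {chernoff:.4f}")   # ≈ 0.027, matching the slide

# Exact tail probability for comparison (binomial sum).
exact = sum(math.comb(n, k) * p**k * (1 - p) ** (n - k) for k in range(50, n + 1))
print(f"Exact P(X ≥ 50): {exact:.6f}")
```

The exact probability is far below the bound, which is expected: the Chernoff bound trades tightness for a simple closed form that decays exponentially in µ.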

Page 9: Advanced Algorithms (6311) Gautam  Das

Derivation

• Let X = ∑ni=1 Xi

• Let Y = e^(tX), for some t > 0

• P(X − µ ≥ δµ) = P(X ≥ (1 + δ)µ) = P(Y ≥ e^(t(1+δ)µ)) ≤ E[Y]/e^(t(1+δ)µ) (by Markov's inequality)

Now E[Y] = E[e^(tX)] = E[e^(tX1 + tX2 + … + tXn)]

= E[e^(tX1)] · E[e^(tX2)] · … · E[e^(tXn)] (by independence of the Xi)
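The factorization step E[e^(tX)] = ∏ E[e^(tXi)] can be verified numerically on a small instance by brute-force enumeration (Python, the choice n = 3, p = (0.2, 0.5, 0.7), and t = 0.3 are my additions):

```python
import itertools
import math

# Check E[e^{tX}] = product of E[e^{tXi}] for independent Bernoulli Xi.
p = (0.2, 0.5, 0.7)   # heads probabilities of 3 hypothetical coins
t = 0.3

# Left side: expectation computed directly over all 2^n coin outcomes.
lhs = 0.0
for outcome in itertools.product((0, 1), repeat=len(p)):
    prob = math.prod(pi if x == 1 else 1 - pi for pi, x in zip(p, outcome))
    lhs += prob * math.exp(t * sum(outcome))

# Right side: product of individual terms E[e^{tXi}] = pi·e^t + (1 - pi).
rhs = math.prod(pi * math.exp(t) + (1 - pi) for pi in p)

print(f"E[e^tX] = {lhs:.6f}, product form = {rhs:.6f}")
```

The two sides agree to machine precision, confirming that independence is exactly what licenses splitting the expectation into a product.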

Page 10: Advanced Algorithms (6311) Gautam  Das

• Now let's consider E[e^(tXi)]
• Xi is either 0 or 1
• Xi will be 0 with probability 1 − pi and 1 with probability pi.

Page 11: Advanced Algorithms (6311) Gautam  Das

E[e^(tXi)] = pi·e^t + (1 − pi)·1 [if Xi = 1 then e^(tXi) = e^t; if Xi = 0 then e^(tXi) = 1]

To be continued in the next class.