Randomness Extractors: Motivation, Applications and Constructions

Ronen Shaltiel, University of Haifa


Transcript of Randomness Extractors: Motivation, Applications and Constructions

Page 1

Randomness Extractors: Motivation, Applications and Constructions

Ronen Shaltiel, University of Haifa

Page 2

Outline of talk
1. Extractors as graphs with expansion properties
2. Extractors as functions which extract randomness
3. Applications
4. Explicit Constructions

Page 3

Extractor graphs: Definition [NZ]

An extractor is an (unbalanced) bipartite graph with M ≪ N (e.g. M = N^δ or M = exp((log N)^δ)).

Every vertex x on the left has D neighbors.

The extractor is better when D is small (e.g. D = polylog N).

Convention: N = 2^n, M = 2^m, D = 2^d, …

{1,…,N} ≈ {0,1}^n

[Diagram: a left vertex x in N ≈ {0,1}^n with D edges to E(x,1),…,E(x,D) in M ≈ {0,1}^m]

Page 4

Extractor graphs: expansion properties

(K,ε)-Extractor: for every set X of size K, the distribution E(X,U) is ε-close to uniform.

⇒ "expansion" property: for every set X of size K, |Γ(X)| ≥ (1-ε)M.

Distribution versus Set size

[Diagram: a set X of size K in N ≈ {0,1}^n maps to Γ(X) of size ≥ (1-ε)M in M ≈ {0,1}^m]

*A distribution P is ε-close to uniform if ||P-U||_1 ≤ 2ε; in particular the support of P covers at least a (1-ε) fraction of the elements.

Identify X with the uniform distribution on X.

Page 5

Extractors and Expander graphs

[Diagram: Extractor — an unbalanced bipartite graph with D = 2^d edges per left vertex; a set X ⊆ N ≈ {0,1}^n maps to Γ(X) of size ≥ (1-ε)M in M ≈ {0,1}^m. (1+δ)-Expander — a balanced graph on N ≈ {0,1}^n; a set X of size K maps to Γ(X) of size ≥ (1+δ)K]

Page 6

Extractors and Expander graphs

Extractor:
- Unbalanced bipartite graph.
- Requires degree ~ log N.
- Relative expansion: K -> (1-ε)M, i.e. K/N -> (1-ε).
- Expands sets larger than the threshold K.

(1+δ)-Expander:
- Balanced graph.
- Allows constant degree.
- Absolute expansion: K -> (1+δ)K.
- Expands sets smaller than the threshold K.

Page 7

Outline of talk
1. Extractors as graphs with expansion properties
2. Extractors as functions which extract randomness
3. Applications
4. Explicit Constructions

Page 8

Successful paradigm in CS: probabilistic algorithms. Probabilistic algorithms/protocols use an additional input stream of independent coin tosses, which is helpful in solving computational problems. Where can we get random bits?

The initial motivation: running probabilistic algorithms with “real-life” sources

We have access to distributions in nature: electric noise, key strokes of a user, timing of past events. These distributions are "somewhat random" but not "truly random".

Paradigm [SV,V,VV,CG,V,CW,Z]: randomness extractors.

Assumption for this talk: somewhat random = uniform over a subset of size K.

[Diagram: a somewhat-random source feeds a Randomness Extractor, which supplies the random coins of a probabilistic algorithm mapping input to output]

Page 9

Parameters (function view):
- Source length: n (= log N)
- Seed length: d ~ O(log n)
- Entropy threshold: k ~ n/100
- Output length: m ~ k
- Required error: ε ~ 1/100

We allow an extractor to also receive an additional input of (very few) random bits.

Extractors use few random bits to extract many random bits from arbitrary distributions which “contain” sufficient randomness.

Extractors as functions that use few bits to extract randomness

[Diagram: the extractor maps the source distribution X together with a short seed Y to a nearly random output]

Definition: A (K,ε)-extractor is a function E(x,y) s.t. for every set X of size K, E(X,U) is ε-close* to uniform.

Lower bounds [NZ,RT]: seed length (in bits) ≥ log n.
Probabilistic method [S,RT]: there exists an optimal extractor which matches the lower bound and extracts all k = log K random bits in the source distribution.
Explicit constructions: E(x,y) can be computed in poly-time.
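The (K,ε) definition above can be checked by brute force at toy scale. This sketch (not from the talk) computes the worst statistical distance of E(X, U_d) from uniform over all flat sets X of size K; the map E used here is a made-up toy, not a real extractor construction.

```python
from itertools import combinations

def stat_dist(p, q):
    # Total variation distance between two distributions given as dicts.
    keys = set(p) | set(q)
    return 0.5 * sum(abs(p.get(k, 0.0) - q.get(k, 0.0)) for k in keys)

def extractor_error(E, n, d, m, K):
    """Worst-case distance of E(X, U_d) from uniform over all flat sets X of size K."""
    N, D, M = 2**n, 2**d, 2**m
    uniform = {z: 1.0 / M for z in range(M)}
    worst = 0.0
    for X in combinations(range(N), K):
        out = {}
        for x in X:
            for y in range(D):
                z = E(x, y)
                out[z] = out.get(z, 0.0) + 1.0 / (K * D)
        worst = max(worst, stat_dist(out, uniform))
    return worst

# Hypothetical toy map, for illustration only.
E = lambda x, y: (3 * x + 5 * y + x * y) % 8   # n=4 source bits, d=2 seed bits, m=3 output bits

err = extractor_error(E, n=4, d=2, m=3, K=6)
```

With only K·D = 24 samples spread over 8 outputs, some flat set is always noticeably biased, so `err` is strictly positive for this toy map.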

Page 10

Simulating probabilistic algorithms using weak random sources

Goal: run a probabilistic algorithm using a somewhat-random distribution.

Where can we get a seed? Idea: go over all seeds. Given a source element x:
- For every y, compute z_y = E(x,y).
- Compute Alg(input, z_y).
- Answer with the majority vote.
Seed length O(log n) ⇒ poly-time. Requires explicit constructions.

[Diagram: the extractor stretches the somewhat-random source, seed by seed, to supply random coins for the probabilistic algorithm]

Page 11

Outline of talk
1. Extractors as graphs with expansion properties
2. Extractors as functions which extract randomness
3. Applications
4. Explicit Constructions

Page 12

Applications
- Simulating probabilistic algorithms using weak sources of randomness [vN,SV,V,VV,CG,V,CW,Z].
- Constructing graphs (expanders, super-concentrators) [WZ].
- Oblivious sampling [S,Z].
- Constructions of various pseudorandom generators [NZ,RR,STV,GW,MV].
- Distributed algorithms [WZ,Z,RZ].
- Cryptography [CDHK,L,V,DS,MST].
- Hardness of approximation [Z,U,MU].
- Error correcting codes [TZ].

Page 13

Expanders that beat the eigenvalue bound [WZ]

Goal: construct low-degree expanders with huge expansion.

Line up two low-degree extractors. For every set X of size K, |Γ(X)| ≥ (1-ε)M > M/2, so every two sets X, X' of size K have a common neighbour. Contract the middle layer. This gives a low-degree (ND²/K) bipartite graph in which every set of size K sees N-K vertices.

Better constructions for large K [CRVW].

[Diagram: two extractors glued at the middle layer; sets X and X' of size K on the two outer layers (each ≈ {0,1}^n) share a common neighbour]

Page 14


Randomness efficient (oblivious) sampling using expanders

Random-walk variables v_1,…,v_D behave like i.i.d. samples. For every A of size M/2:
- Hitting property: Pr[∀i: v_i ∈ A] ≤ δ = 2^(-Ω(D)).
- Chernoff-style property: Pr[#{i: v_i ∈ A} is far from its expectation] ≤ 2^(-Ω(D)).
Number of random bits used for the walk: m + O(D) = m + O(log(1/δ)).
Number of random bits for i.i.d. samples: m·D = m·O(log(1/δ)).

[Diagram: a random walk v_1,…,v_D on a constant-degree expander over M ≈ {0,1}^m]

Page 15

Randomness efficient (oblivious) sampling using extractors [S]

Given parameters m, δ: use E with K = M = 2^m, N = M/δ and small D.
Choose a random x: m + log(1/δ) random bits. Set v_i = E(x,i).
Extractor property ⇒ hitting property: for every A of size M/2, call x bad if all of E(x)'s edges land inside A. The number of bad x's is < K, so Pr[x is bad] < K/N = δ.

[Diagram: the bad x's are the left vertices whose D edges all fall inside A ⊆ M ≈ {0,1}^m]

Page 16

Every (oblivious) sampling scheme yields an extractor

An (oblivious) sampling scheme uses a random n-bit string x to generate D random variables with a Chernoff-style property.

Thm: [Z] The derived graph is an extractor.

Extractors ⇔ oblivious sampling

[Diagram: a left vertex x in N ≈ {0,1}^n with D = 2^d edges to v_1,…,v_D in M ≈ {0,1}^m]

Page 17

Outline of talk
1. Extractors as graphs with expansion properties
2. Extractors as functions which extract randomness
3. Applications
4. Explicit Constructions

Page 18

Constructions

Page 19

Extractors from error correcting codes

Can construct extractors from error-correcting codes [ILL,SZ,T]: short seed, extract one additional bit.
- Extractors that extract one additional bit ⇔ list-decodable error-correcting codes.
- Extractors that extract many bits ⇔ codes with strong list-recovering properties [TZ].

Page 20

List-decodable error-correcting codes [S]

[Diagram: unique decoding — x is encoded to EC(x), a noisy channel introduces 20% errors, and decoding recovers x. List decoding — the channel introduces 49% errors and the decoder outputs a short list x_1, x_2, x_3 that contains x]

• EC(x) is 20%-decodable if for every w there is at most one x s.t. EC(x) differs from w in at most 20% of the positions.
• EC(x) is (49%, t)-list-decodable if for every w there are at most t x's s.t. EC(x) differs from w in at most 49% of the positions.
• There are explicit constructions of such codes.

Page 21

Extractors from list-decodable error-correcting codes [ILL,T]

Thm: If EC(x) is (½-ε, εK)-list-decodable then E(x,y) = (y, EC(x)_y) is a (K, 2ε)-extractor.

Note: E outputs its seed y. Such an extractor is called “strong”.

E outputs only one additional bit, EC(x)_y. There are constructions of list-decodable error-correcting codes with |y| = O(log n).
- Strong extractors with one additional bit ⇔ list-decodable error-correcting codes.
- Strong extractors with many additional bits translate into very strong error-correcting codes [TZ].

Page 22

Extractors from list-decodable error-correcting codes: proof

Thm: If EC(x) is (½-ε, εK)-list-decodable then E(x,y) = (y, EC(x)_y) is a (K, 2ε)-extractor.

Proof by contradiction. Let X be a distribution/set of size K s.t. E(X,Y) = (Y, EC(X)_Y) is far from uniform.
Observation: Y and EC(X)_Y are each uniform on their own, so they must be correlated: there exists a predictor P s.t. P(Y) = EC(X)_Y with probability > ½ + 2ε.

Page 23

Extractors from list-decodable error-correcting codes: proof II

Thm: If EC(x) is (½-ε, εK)-list-decodable then E(x,y) = (y, EC(x)_y) is a (K, 2ε)-extractor.

There exists P s.t. Pr_{X,Y}[P(Y) = EC(X)_Y] > ½ + 2ε.

By a Markov argument: for at least εK x's in X, Pr_Y[P(Y) = EC(x)_Y] > ½ + ε.

Think of P as a string with P_y = P(y). Then for each such x, P and EC(x) differ in at most a (½ - ε) fraction of the coordinates.

Story so far: if E is bad then there is a string P s.t. for at least εK x's, P and EC(x) differ in few coordinates.

Page 24

Extractors from list-decodable error-correcting codes: proof III

Thm: If EC(x) is (½-ε, εK)-list-decodable then E(x,y) = (y, EC(x)_y) is a (K, 2ε)-extractor.

Story so far: if E is bad then there is a string P s.t. for at least εK x's, P and EC(x) differ in at most a (½-ε) fraction of the coordinates.

[Diagram: P plays the role of a received word with 49% errors; list decoding EC around P yields a short list x_1, x_2, x_3]

By the list-decoding properties of the code, the number of such x's is < εK. Contradiction!

Page 25

Roadmap: we can construct extractors from error-correcting codes, with a short seed and output length = seed length + 1. Next: how to extract more bits.
General paradigm: once you construct one extractor you can try to boost its quality.

Page 26


Starting point: an extractor E that extracts only a few bits.

Idea: (X | E(X,Y)) still contains randomness, so we can apply E again to extract randomness from (X | E(X,Y)). This needs a "fresh" seed: E'(X; (Y,Y')) = E(X,Y), E(X,Y'). We extract more randomness, at the price of a larger seed.

Extracting more bits [WZ]

[Diagram: the new extractor applies E to X twice, with independent seeds Y and Y', and concatenates the outputs Z]

Page 27

Trevisan’s extractor: reducing the seed length

Idea: use few random bits to generate (correlated) seeds Y_1, Y_2, Y_3, …
A walk on an expander? An extractor? These work but give small savings.
Trevisan: use the Nisan-Wigderson pseudorandom generator (based on combinatorial designs).
[TZS,SU]: use Y, Y+1, Y+2, … (based on the [STV] algorithm for list-decoding the Reed-Muller code).

[Diagram: a single short seed Y generates correlated seeds Y_1, Y_2, … that are fed together with X into the extractor]

Page 28

The extractor designer tool kit

Many ways to “compose” extractors with themselves and related objects.

Arguments use "entropy manipulations" and depend on the "function view" of extractors.

Impact on other graph-construction problems:
- Expander graphs (zig-zag product) [RVW,CRVW].
- Ramsey graphs that beat the Frankl-Wilson construction [BKSSW,BRSW].

Page 29

Entropy manipulations: composing two extractors [Z,NZ]

[Diagram: a small extractor applied to source X_2 produces a short output Z, which serves as the seed of a large extractor applied to source X_1]

Observation: one can compose a small extractor and a large extractor to obtain an extractor which inherits the small seed and the large output.
Paradigm: if given only one source, try to convert it into two sources that are "sufficiently independent".

two sources that are “sufficiently independent.”

Two independent

sources

Page 30

Summary: extractors can be viewed both as graphs and as functions.

[Diagram: graph view — a set X of size K = 2^k expands to Γ(X) of size ≥ (1-ε)M in M ≈ {0,1}^m; function view — the extractor maps the source distribution X and a seed Y to a nearly random output]

Page 31

Conclusion. Unifying role of extractors: expanders, oblivious samplers, error-correcting codes, pseudorandom generators, hash functions, …

Open problems:
- More applications/connections.
- The quest for explicitly constructing the optimal extractor (current record: [LRVW]).
- Direct and simple constructions.

Things I didn't talk about: seedless extractors for special families of sources.

Page 32

That’s it…

Page 33

Page 34

Extractor graphs

[Diagram: a left vertex x in N ≈ {0,1}^n with D = 2^d edges to E(x)_1,…,E(x)_D in M ≈ {0,1}^m]
Page 35

Extractor graphs: expansion

[Diagram: a set X of size K = 2^k in N ≈ {0,1}^n expands to Γ(X) of size ≥ (1-ε)M in M ≈ {0,1}^m]

Page 36

Issues in a formal definition: 2. One extractor for all sources

Goal: Design one extractor function E(x) that works on all sufficiently high entropy distributions.

Problem: it is impossible to extract even one bit from distributions with n-1 bits of entropy: for any candidate E, one of the sets {x: E(x)=0}, {x: E(x)=1} has size ≥ 2^(n-1), and the uniform distribution on it has entropy n-1 yet E is constant on it.

Have to settle for less!

[Diagram: {0,1}^n is split into {x: E(x)=0} and {x: E(x)=1}; the larger side supports a distribution X with entropy n-1 on which E(X) is fixed]

Page 37

Parameters:
- Source length: n
- Seed length: d ~ O(log n)
- Entropy threshold: k ~ n/100
- Output length: m ~ k
- Required error: ε ~ 1/100

Definition of extractors [NZ]

We allow an extractor to also receive an additional seed of (very few) random bits.

Extractors use few random bits to extract many random bits from arbitrary distributions with sufficiently high entropy.

source distribution X

Extractor seed Y

random output

Randomness

Definition: A (k,ε)-extractor is a function E(x,y) s.t. for every distribution X with min-entropy k, E(X,Y) is ε-close* to uniform.
Lower bounds [NZ,RT]: seed length ≥ log n + 2·log(1/ε).
Probabilistic method [S,RT]: there exists an optimal extractor which matches the lower bound and extracts k + d - 2·log(1/ε) bits.

*A distribution P is ε-close to uniform if ||P-U||_1 ≤ 2ε; in particular the support of P covers at least a (1-ε) fraction of the elements.
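The two quantities in this definition, min-entropy and ε-closeness (statistical distance), can be computed directly. A minimal sketch; the example distributions are illustrative:

```python
import math

def min_entropy(p):
    # H_min(X) = -log2(max_x Pr[X = x]), for a distribution given as a dict.
    return -math.log2(max(p.values()))

def tv_distance(p, q):
    # ||P - Q||_1 / 2: the statistical (total variation) distance.
    keys = set(p) | set(q)
    return 0.5 * sum(abs(p.get(k, 0.0) - q.get(k, 0.0)) for k in keys)

flat = {x: 0.25 for x in range(4)}       # flat on 4 points: min-entropy 2
uniform8 = {x: 0.125 for x in range(8)}  # uniform on 8 points
# tv_distance(flat, uniform8) = 0.5, so `flat` is only 0.5-close to uniform on 8 points.
```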

Page 38

Extractor graphs: Definition [NZ]

An extractor is an (unbalanced) bipartite graph with M ≪ N (e.g. M = N^δ or M = exp((log N)^δ)).

Every vertex x on the left has D neighbors.

E(x) = (E(x)_1, …, E(x)_D)

The extractor is better when D is small (e.g. D = polylog N).

Convention: E(x,y) = E(x)_y

[Diagram: a left vertex x in N ≈ {0,1}^n with D edges to E(x)_1,…,E(x)_D in M ≈ {0,1}^m]

Page 39

Issues in a formal definition: 1. Notion of entropy

The source distribution X must “contain randomness”

Necessary condition for extracting k bits: ∀x, Pr[X=x] ≤ 2^(-k).

Dfn: X has min-entropy k if ∀x, Pr[X=x] ≤ 2^(-k).

Example: flat distributions: X is uniformly distributed on a subset of size 2^k.

Every X with min-entropy k is a convex combination of flat distributions.

[Diagram: a flat source — the uniform distribution on a set S ⊆ {0,1}^n with |S| = 2^k]

Page 40

Noisy channels and error correction

[Diagram: x is transmitted through a noisy channel and received as x']

Goal: transmit messages over a noisy channel.
Guarantee: x' differs from x in at most (say) 20% of the positions.
Coding theory: encode x prior to transmission.