# Near-Optimal Erasure List-Decodable Codes

Posted: 21-Apr-2020

Near-Optimal Erasure List-Decodable Codes

Dean Doron (Tel-Aviv University)
Workshop on Coding and Information Theory, CMSA

Joint work with Avi Ben-Aroya and Amnon Ta-Shma


Erasure List-Decodable Codes

• A code C : {0,1}^n → {0,1}^n̄ is (1-ε, L) erasure list-decodable if, given a codeword where all but an ε fraction of its coordinates were (adversarially) erased, there exists a list of size at most L that contains the original codeword.

Received word (length n̄):

? 1 ? ? ? 0 1 ? ? 1

Decoded list (size at most L):

1 1 1 0 1 0 1 0 0 1
0 1 1 0 0 0 1 0 1 1
0 1 0 0 0 0 1 1 1 1
⋮
0 1 1 1 1 0 1 1 0 1
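The definition can be made concrete with a toy brute-force decoder. This is an illustrative sketch only, not the paper's construction: the code is a random codebook, and the decoder simply outputs every codeword consistent with the surviving coordinates (erasures are marked with `None`).

```python
import random

def list_decode_erasures(codebook, received):
    """All codewords agreeing with `received` on the non-erased coordinates."""
    return [c for c in codebook
            if all(r is None or r == b for r, b in zip(received, c))]

random.seed(0)
n, n_bar = 4, 10                       # message length n, block length n_bar
# A toy random code C : {0,1}^n -> {0,1}^n_bar, one codeword per message.
codebook = [tuple(random.randint(0, 1) for _ in range(n_bar))
            for _ in range(2 ** n)]

original = codebook[5]
eps = 0.3                              # fraction of coordinates that survive
kept = set(random.sample(range(n_bar), int(eps * n_bar)))
received = [original[i] if i in kept else None for i in range(n_bar)]

candidates = list_decode_erasures(codebook, received)
assert original in candidates          # the list always contains the original
print("list size L =", len(candidates))
```

The decoded list is never empty, since the transmitted codeword is always consistent with its own surviving coordinates; the code is erasure list-decodable exactly when this list is small for every pattern of erasures.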

• We work over the binary field.

• The two key parameters:

• The code’s rate R = n/n̄.

• The list-size L.

Bounds for the binary case

Bounds on the rate

• The code’s rate — R.


• Can be as large as ε [Guruswami 03] (recall that ε is the fraction of coordinates we keep).

• Compare with the errors model, where rate ~ ε² is the best possible for a code of distance ½ - ε.


Bounds on the list-size

• The list-size — L.


• Can be as tiny as log(1/ε) [Guruswami 03].

• Compare with the errors model, where poly(1/ε) is necessary.


• For linear codes, the list-size is at least 1/ε [Guruswami 03].


Previous Results

• Any binary code of distance ½ - ε can handle 1-2ε erasures with polynomial list size.


• Drawback: known upper bounds on the rate of codes of distance ½ - ε [MRRW,…] mean we cannot break the rate-ε² bound that way.


• Few explicit constructions (especially for binary codes).


Previous Results

|                                  | Rate R        | List-size L     |
|----------------------------------|---------------|-----------------|
| Non-explicit                     | ε             | log(1/ε)        |
| Optimal codes with distance ½-ε  | ε²            | poly(1/ε)       |
| Guruswami 01 (constant ε)        | ε²/log(1/ε)   | 1/ε             |
| Guruswami-Indyk 03 (constant ε)  | ε²/log(1/ε)   | log(1/ε)        |

Our Result

• In the small-ε regime, we break the ε² barrier, and manage to do so with nearly optimal parameters.

For every small enough ε and every constant γ > 0, there exists an explicit binary erasure list-decodable code with rate

R = ε^(1+γ)

and list-size

L = poly(log(1/ε)).
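A quick numeric illustration of what breaking the ε² barrier buys. The values of ε and γ below are arbitrary placeholders, and the cubic polylog is just one stand-in for the unspecified poly(log(1/ε)) list size:

```python
import math

# Hypothetical parameter choices, for illustration only.
eps, gamma = 1e-6, 0.1

rate_new = eps ** (1 + gamma)        # this work: eps^(1+gamma)
rate_old = eps ** 2                  # via codes of distance 1/2 - eps
list_old = 1 / eps                   # linear-code list-size lower bound
list_new = math.log(1 / eps) ** 3    # a stand-in for poly(log(1/eps))

# The new rate is larger by a factor of eps^(gamma - 1), about 2.5e5 here,
# while the list size drops from 1e6 to a few thousand.
print(rate_new / rate_old, list_new, list_old)
```

Both parameters improve simultaneously: the rate gain grows without bound as ε shrinks, and the list size is exponentially smaller than 1/ε.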

Previous Results

|                                  | Rate R        | List-size L       |
|----------------------------------|---------------|-------------------|
| Non-explicit                     | ε             | log(1/ε)          |
| Optimal codes with distance ½-ε  | ε²            | poly(1/ε)         |
| Guruswami 01 (constant ε)        | ε²/log(1/ε)   | 1/ε               |
| Guruswami-Indyk 03 (constant ε)  | ε²/log(1/ε)   | log(1/ε)          |
| Our Result (small ε)             | ε^(1+γ)       | poly(log(1/ε))    |

Different Perspectives

Erasure List-Decodable Codes ↔ Ramsey Graphs ↔ Strong 1-bit Dispersers

[Guruswami 04], [Gradwohl, Kindler, Reingold, Ta-Shma 05]



1-bit Strong Dispersers

Disp : {0,1}^n × {0,1}^d → {0,1}

• For every (n,k)-source X, i.e., a set X ⊆ {0,1}^n of size at least K = 2^k (so that for every x, Pr[X = x] ≤ 2^(-k)),

• for all but an ε-fraction of the D = 2^d seeds y ∈ {0,1}^d,

Disp(X,y) = {0,1}.

[Figure: for a bad seed y_bad, all of Disp(x_1,y_bad), …, Disp(x_K,y_bad) equal 1; for a good seed y, both outputs are hit, e.g. Disp(x_1,y) = 0 and Disp(x_K,y) = 1.]
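For tiny parameters the strong-disperser property can be checked by brute force over every size-K set and every seed. The sketch below is illustrative and exponential-time by design; the instantiation Disp(x,y) = (y-th bit of x) is a hypothetical toy, chosen only because it can be verified by hand.

```python
import itertools

def is_strong_disperser(disp, n, d, k, eps):
    """Check: for every set X of size K = 2^k, all but an eps-fraction of
    the D = 2^d seeds y satisfy Disp(X, y) = {0, 1}."""
    K, D = 2 ** k, 2 ** d
    for X in itertools.combinations(range(2 ** n), K):
        bad = sum(1 for y in range(D)
                  if {disp(x, y) for x in X} != {0, 1})
        if bad > eps * D:
            return False
    return True

# Hypothetical toy instantiation: output the y-th bit of x.
disp = lambda x, y: (x >> y) & 1
# Any two distinct 2-bit strings differ in some bit, so for each pair
# at most one of the two seeds is "bad" (sees only one output value).
print(is_strong_disperser(disp, n=2, d=1, k=1, eps=0.5))   # True
print(is_strong_disperser(disp, n=2, d=1, k=1, eps=0.0))   # False
```

The second call fails because some pairs, e.g. {00, 01}, collide under the seed that reads their common high bit, so requiring every seed to be good (ε = 0) is too strong even for this toy function.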

Bounds for 1-bit Strong Dispersers

• Dispersers are an important tool in derandomization.


• The two key parameters:


• The disperser’s seed length — d.

• The disperser’s entropy requirement — k (or, how small can X be).


Bounds on the seed length

• The disperser’s seed length — d.


• Can be as small as log(n) + log(1/ε) [Radhakrishnan, Ta-Shma 00; Meka, Reingold, Zhou 14].


Bounds on the entropy

• The disperser’s entropy requirement — k (or, how small can X be).

• Can be as tiny as loglog(1/ε). That is, an optimal disperser works for extremely small sets, of size O(log(1/ε)) [Radhakrishnan, Ta-Shma 00; Meka, Reingold, Zhou 14].


Comparison with extractors

• Dispersers’ parameters outperform those of seeded extractors, in which we don…
