Approximate List-Decoding and
Uniform Hardness Amplification
Russell Impagliazzo (UCSD), Ragesh Jaiswal (UCSD)
Valentine Kabanets (SFU)
Hardness Amplification
Given a hard function f, we can construct an even harder function F.
• A function f: {0, 1}^n → {0, 1} is called δ-hard for circuits of size s (algorithms with running time t) if every circuit of size s (algorithm with running time t) errs in predicting f on at least a δ fraction of the inputs, i.e., on at least δ·2^n inputs.
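As a toy illustration of the definition (ours, not from the talk), the sketch below measures the error fraction of a candidate predictor against a function's truth table; `delta_hardness` and the parity example are our own names.

```python
def delta_hardness(truth_table, predictor):
    # Fraction of n-bit inputs on which `predictor` disagrees with f.
    # f is delta-hard for a class if EVERY member of the class errs
    # on at least a delta fraction of inputs.
    N = len(truth_table)
    return sum(predictor(x) != truth_table[x] for x in range(N)) / N

# Example: 2-bit parity (truth table over {0,1}^2) vs. the constant-0 predictor.
parity_tt = [0, 1, 1, 0]
print(delta_hardness(parity_tt, lambda x: 0))  # 0.5
```

A single predictor erring on half the inputs does not make parity 1/2-hard, of course; hardness quantifies over the whole class.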
XOR Lemma
Define f⊕k: {0, 1}^{nk} → {0, 1} by f⊕k(x1, …, xk) = f(x1) ⊕ … ⊕ f(xk).
• XOR Lemma: If f is δ-hard for size-s circuits, then f⊕k is (1/2 − ε)-hard for size-s′ circuits, where ε = e^{−Ω(δk)} and s′ = s·poly(δ, ε).
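A quick way to see why XOR-ing amplifies hardness (a heuristic, not the lemma's proof): if a predictor's errors on the k coordinates were independent, each occurring with probability δ, the standard piling-up calculation says its XOR guess is wrong with probability (1 − (1 − 2δ)^k)/2, which tends to 1/2. A sketch under that independence assumption:

```python
def xor_error(delta, k):
    # Probability that the XOR of k independent guesses, each wrong with
    # probability delta, is itself wrong (i.e., an odd number of the k
    # guesses are wrong): (1 - (1 - 2*delta)**k) / 2, tending to 1/2.
    return (1 - (1 - 2 * delta) ** k) / 2
```

For example, xor_error(0.1, 1) is 0.1, while xor_error(0.1, 50) is already within 10^{-4} of 1/2, mirroring the ε = e^{−Ω(δk)} decay in the lemma.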
XOR Lemma Proof: Ideal case
Ideally, an algorithm A would take a circuit C′ that computes f⊕k on at least a (1/2 + ε) fraction of inputs and output, with high probability, a circuit C that computes f on at least a (1 − δ) fraction of inputs.
XOR Lemma Proof
In the actual proof, A also needs advice of length poly(1/ε). Equivalently: given a circuit C′ that computes f⊕k on at least a (1/2 + ε) fraction of inputs, A outputs (w.h.p.) a list of circuits C1, …, Cl with l = 2^{|Advice|} = 2^{poly(1/ε)}, at least one of which computes f on at least a (1 − δ) fraction of inputs.
This is a "lesser" nonuniform reduction.
Optimal List Size
Question: What is the smallest list size we should target? Error-correcting codes give a good combinatorial answer.
XOR-based Code [T03]
Think of a binary message msg of M = 2^n bits as the truth table of a Boolean function f on n-bit inputs x. The code of msg has length M^k, with code(x1, …, xk) = f(x1) ⊕ … ⊕ f(xk).
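The encoding just described can be written out directly; a minimal brute-force sketch (exponential in nk, function name our own):

```python
from itertools import product

def xor_encode(msg, n, k):
    # msg is the truth table of f: {0,1}^n -> {0,1}, so len(msg) == M == 2**n.
    # The codeword has length M**k; the symbol at position (x1,...,xk) is
    # f(x1) XOR ... XOR f(xk).
    assert len(msg) == 2 ** n
    code = []
    for tup in product(range(2 ** n), repeat=k):
        bit = 0
        for x in tup:
            bit ^= msg[x]
        code.append(bit)
    return code

print(xor_encode([0, 1], 1, 2))  # [0, 1, 1, 0]
```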
List Decoder
A message m is XOR-encoded into a codeword c; the channel corrupts c into a received word w that agrees with c on ≈ (1/2 + ε) of its positions. The decoder is local, approximate, and list-decoding: it outputs a list m1, …, ml of candidates, at least one of which agrees with m on ≈ (1 − δ) of its positions.
Information-theoretically, l should be O(1/ε²).
The List Size
• The proof of Yao's XOR Lemma yields an approximate local list-decoding algorithm for the XOR-code defined above.
• But the list size is 2^{poly(1/ε)} rather than the optimal poly(1/ε).
• Goal: Match the information-theoretic bound on list-decoding, i.e., get advice of length log(1/ε).
The Main Result
Given advice of length log(1/ε), algorithm A converts a circuit C′ that (1/2 + ε)-computes f⊕k into (w.h.p.) a circuit C that (1 − δ)-computes f.
• ε = poly(1/k), δ = O(k^{−0.1})
• Running time of A and size of C are at most poly(|C′|, 1/ε)
The Main Result
With no advice, A converts a circuit C′ that (1/2 + ε)-computes f⊕k into a circuit C that (1 − δ)-computes f, now with probability only poly(ε).
• ε = poly(1/k), δ = O(k^{−0.1})
• Running time of A and size of C are at most poly(|C′|, 1/ε)
The Main Result
We get a list size of poly(1/ε), which is optimal, but ε is large: ε = poly(1/k).
Running A poly(1/ε) times gives an algorithm A′ that converts a circuit C′ that (1/2 + ε)-computes f⊕k into (w.h.p.) a list of circuits C1, …, Cl with l = poly(1/ε), at least one of which (1 − ρ)-computes f. The advice needed has length log(1/ε).
This is an advice-efficient XOR Lemma.
Uniform Hardness Amplification
What we want: f hard w.r.t. BPP ⇒ g harder w.r.t. BPP.
What we get (via the advice-efficient XOR Lemma): f hard w.r.t. BPP/log ⇒ g harder w.r.t. BPP.
Uniform Hardness Amplification
What we can do:
• f ∈ NP hard w.r.t. BPP ⇒ f′ ∈ NP hard w.r.t. BPP/log [BDCGL92]
• The advice-efficient XOR Lemma then gives a harder g w.r.t. BPP, but g is not necessarily in NP; rather g ∈ P^{NP||}.
• P^{NP||}: poly-time TM which can make polynomially many parallel oracle queries to an NP oracle.
• A simple average-case reduction takes h ∈ P^{NP||} that is 1/n^c-hard w.r.t. BPP to g ∈ P^{NP||} that is (1/2 − 1/n^d)-hard w.r.t. BPP.
Trevisan gives a weaker reduction (from 1/n^c to (1/2 − 1/(log n)^α) hardness) but within NP.
Techniques
• Advice-efficient Direct Product Theorem
• A Sampling Lemma
• Learning without advice: self-generated advice; fault-tolerant learning using faulty advice
Direct Product Theorem
Define fk: {0, 1}^{nk} → {0, 1}^k by fk(x1, …, xk) = f(x1) | … | f(xk) (concatenation).
• Direct Product Theorem: If f is δ-hard for size-s circuits, then fk is (1 − ε)-hard for size-s′ circuits, where ε = e^{−Ω(δk)} and s′ = s·poly(δ, ε).
• Goldreich-Levin Theorem: the XOR Lemma and the Direct Product Theorem are essentially saying the same thing.
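One concrete facet of this connection: XOR-ing the k bits of a direct-product symbol (a Goldreich-Levin inner product with the all-ones vector) yields the corresponding XOR-code symbol. A sketch, with our own function names:

```python
def dp_encode(msg, xs):
    # Direct-product symbol at position (x1,...,xk): the concatenation
    # f(x1) | ... | f(xk), where msg is the truth table of f.
    return [msg[x] for x in xs]

def parity(bits):
    # XOR of the k output bits: turns a direct-product symbol into
    # the corresponding XOR-code symbol.
    b = 0
    for bit in bits:
        b ^= bit
    return b

msg = [0, 1, 1, 0]                        # truth table of 2-bit parity
print(dp_encode(msg, (1, 2, 3)))          # [1, 1, 0]
print(parity(dp_encode(msg, (1, 2, 3))))  # 0
```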
XOR Lemma from Direct Product Theorem
Given a circuit C′ that (1/2 + ε)-computes f⊕k, algorithm A1 uses the Goldreich-Levin Theorem to produce (w.h.p.) a circuit CDP that poly(ε)-computes the direct product fk; algorithm A2 then produces, with probability poly(ε), a circuit C that (1 − δ)-computes f.
• ε = poly(1/k), δ = O(k^{−0.1})
LEARN from [IW97]
LEARN takes a circuit CDP that ε-computes fk, plus advice consisting of n/ε² pairs (x, f(x)) for independent uniform x's, and outputs (w.h.p.) a circuit C that (1 − δ)-computes f.
• ε = e^{−Ω(δk)}
Goal
Turn LEARN [IW97] into LEARN′, which uses no advice and outputs a (1 − δ)-computing circuit for f with probability poly(ε), for ε = poly(1/k) and δ = O(k^{−0.1}).
• We want to eliminate the advice (the (x, f(x)) pairs). In exchange, we are ready to make some compromise on the success probability of the randomized algorithm.
Self-generated advice
Imperfect samples: we want to use the circuit CDP itself to generate n/ε² pairs (x, f(x)) for independent uniform x's.
We will settle for n/ε² pairs (x, b_x) where the distribution on the x's is statistically close to uniform, and for most x's we have b_x = f(x).
Then run a fault-tolerant version of LEARN on CDP and the generated pairs (x, b_x).
How to generate imperfect samples
A Sampling Lemma
Consider k-tuples x = (x1, …, xk) of n-bit strings, i.e., strings of length nk.
• If x is drawn uniformly from all 2^{nk} such strings, a random subtuple of x is exactly uniformly distributed.
• Sampling Lemma: if x is instead drawn uniformly from a subset G with |G| ≥ ε·2^{nk}, the distribution D of a random subtuple satisfies Stat-Dist(D, U) ≤ ((log 1/ε)/k)^{1/2}.
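The statement can be sanity-checked exactly for tiny parameters. The sketch below (our own, simplified to a single random coordinate rather than a subtuple) computes the statistical distance from uniform of a random coordinate of a tuple drawn from a set G:

```python
from itertools import product

def coord_stat_dist(G, n, k):
    # D: draw a k-tuple uniformly from G, output a uniformly random coordinate.
    # Returns the statistical distance of D from uniform over {0,1}^n.
    probs = [0.0] * (2 ** n)
    for tup in G:
        for x in tup:
            probs[x] += 1.0 / (len(G) * k)
    u = 1.0 / (2 ** n)
    return 0.5 * sum(abs(p - u) for p in probs)

full = list(product(range(2), repeat=4))      # G = all of ({0,1}^1)^4
print(coord_stat_dist(full, 1, 4))            # 0.0: exactly uniform
print(coord_stat_dist([(0, 0, 0, 0)], 1, 4))  # 0.5: maximally skewed G
```

The two extremes bracket the lemma: a dense G cannot skew a random coordinate much, while a tiny G can skew it completely.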
Getting Imperfect Samples
Let G be the subset of inputs on which CDP(x) = fk(x); |G| ≥ ε·2^{nk}.
Pick a random k-tuple x, then pick a random subtuple x′ of size k^{1/2}. With probability ε, x lands in the "good" set G. Conditioned on this, the Sampling Lemma says that x′ is close to being uniformly distributed.
If k^{1/2} > the number of samples required by LEARN, then done!
Else…
Direct Product Amplification
We would like to turn CDP into a circuit CDP′ which poly(ε)-computes fk′, where (k′)^{1/2} > n/ε². What we can actually get is a CDP′ such that, for at least a poly(ε) fraction of k′-tuples x, CDP′(x) and fk′(x) agree on most bits.
Putting Everything Together
Start with CDP for fk. DP Amplification gives CDP′ for fk′; Sampling then yields pairs (x, b_x); running fault-tolerant LEARN on these produces, with probability > poly(ε), a circuit C that (1 − δ)-computes f.
Repeat poly(1/ε) times to get a list containing a good circuit for f, w.h.p.
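The final repetition step is just independent trials of a poly(ε)-success decoder; a minimal sketch, where `decode_once` and the repetition exponent `c` are placeholders for the talk's actual subroutine and polynomial:

```python
def list_decode(decode_once, eps, c=4):
    # decode_once() is one run of the whole pipeline (DP amplification ->
    # sampling -> fault-tolerant LEARN); it yields a good circuit for f
    # with probability >= poly(eps).  Repeating poly(1/eps) times makes
    # the returned list contain a good circuit w.h.p.
    trials = int(1 / eps) ** c
    return [decode_once() for _ in range(trials)]

print(len(list_decode(lambda: "candidate-circuit", 0.5)))  # 16
```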
Open Questions
• Advice-efficient XOR Lemma for smaller ε: for ε > exp(−k^α) we get a quasi-polynomial list size.
• Can we get an advice-efficient hardness amplification result using a monotone combination function m (instead of ⊕)? Some results: [Buresh-Oppenheim, Kabanets, Santhanam] use monotone list-decodable codes to re-prove Trevisan's results for amplification within NP.
Thank You