Modern Cryptography
1. Precisely define the security goal (e.g., secure encryption)
2. Precisely stipulate a computational intractability assumption (e.g., hardness of factoring)
3. Security reduction: prove that any attacker A that breaks the security of scheme π can be used to violate the intractability assumption.
A Celebrated Example: Commitments from OWFs [Naor,HILL]
• Task: Commitment Scheme
  – Binding + Hiding
  – Non-interactive
• Intractability Assumption: existence of a OWF f
  – f is easy to compute but hard to invert
• Security reduction [Naor,HILL]: there exist Com_f and a PPT R s.t. for every algorithm A that breaks hiding of Com_f, R^A inverts f
  – The reduction R only uses the attacker A as a black box; i.e., R is a Turing reduction.
[Diagram: challenger C picks a random r and sends f(r); R^A responds with an inversion attempt. The security reduction guarantees that R^A breaks C (inverts f) whenever A breaks hiding.]
Reduction R may rewind and restart A.
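The commitment task above can be made concrete with a small sketch of Naor's bit commitment from a pseudorandom generator. SHA-256 in counter mode stands in for the PRG G here, which is a heuristic assumption for the demo, not a proven PRG; parameter sizes are illustrative.

```python
# Illustrative sketch of Naor's bit commitment from a PRG:
# receiver sends random r in {0,1}^{3n}; committer sends G(s) for
# bit 0, or G(s) XOR r for bit 1. (SHA-256 counter mode stands in
# for the PRG -- a heuristic assumption for this demo.)
import hashlib
import secrets

N = 16          # seed length in bytes (toy parameter)
OUT = 3 * N     # Naor's scheme stretches an n-bit seed to 3n bits

def prg(seed: bytes) -> bytes:
    """Expand `seed` to OUT pseudorandom bytes (heuristic PRG)."""
    out = b""
    counter = 0
    while len(out) < OUT:
        out += hashlib.sha256(seed + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:OUT]

def receiver_first_message() -> bytes:
    """Receiver's round: a random 3n-bit string r."""
    return secrets.token_bytes(OUT)

def commit(bit: int, r: bytes) -> tuple[bytes, bytes]:
    """Committer: send G(s) for bit 0, or G(s) XOR r for bit 1."""
    s = secrets.token_bytes(N)
    g = prg(s)
    c = g if bit == 0 else bytes(x ^ y for x, y in zip(g, r))
    return c, s  # c is the commitment; s is kept for the opening

def verify_open(c: bytes, r: bytes, bit: int, s: bytes) -> bool:
    g = prg(s)
    expected = g if bit == 0 else bytes(x ^ y for x, y in zip(g, r))
    return c == expected

r = receiver_first_message()
c, s = commit(1, r)
assert verify_open(c, r, 1, s)         # honest opening verifies
assert not verify_open(c, r, 0, s)     # the same opening fails for bit 0
```

Hiding follows from the pseudorandomness of G; binding is statistical because a seed pair explaining both bits would force G(s0) XOR G(s1) = r, which a random 3n-bit r hits only with negligible probability.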
Turing Reductions
Provable Security
• In the last three decades, lots of amazing tasks have been securely realized under well-studied intractability assumptions
– Key Exchange, Public-Key Encryption, Secure Computation, Zero-Knowledge, PIR, Secure Voting, Identity-Based Encryption, Fully Homomorphic Encryption, Leakage-Resilient Encryption…
• But: several tasks/schemes have resisted security reductions under well-studied intractability assumptions.
Schnorr’s Identification Scheme [Sch’89]
• One of the most famous and widely employed identification schemes (e.g., Blackberry router protocol)
• Secure under a passive “eavesdropper” attack, based on the discrete logarithm assumption
• What about active attacks?
  – [BP’02] proved it secure under a new type of “one-more” inversion assumption
  – Can we base security on more standard assumptions?
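One honest run of Schnorr's identification protocol can be sketched in a few lines. The group parameters below are tiny (the order-11 subgroup of Z_23* generated by 2), purely for illustration; real instantiations use cryptographically large prime-order groups.

```python
# Toy sketch of one honest run of Schnorr's identification scheme.
# Parameters p = 23, q = 11, g = 2 are for illustration only.
import secrets

p, q, g = 23, 11, 2          # g has order q modulo p

# Key generation: secret x, public key y = g^x mod p
x = secrets.randbelow(q)
y = pow(g, x, p)

# Prover's first message: commitment a = g^r
r = secrets.randbelow(q)
a = pow(g, r, p)

# Verifier's challenge
c = secrets.randbelow(q)

# Prover's response
z = (r + c * x) % q

# Verification equation: g^z == a * y^c (mod p)
assert pow(g, z, p) == (a * pow(y, c, p)) % p
```

Passive security follows from the discrete logarithm assumption plus honest-verifier zero knowledge; the open question on the slide concerns an attacker that interacts with the prover actively.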
Commitment Schemes under Selective Opening [DNRS’99]
• A commits to n values v1, …, vn
• B adaptively asks A to open up, say, half of them.
• Security: unopened commitments remain hidden
– Problem originated in the distributed computing literature over 25 years ago
• Can we base selective opening security of non-interactive commitments on any standard assumption?
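The selective-opening game itself is simple to state in code. The hash-based commitment Com(v, r) = SHA-256(r || v) below is a stand-in scheme chosen for the demo; the open question is precisely which schemes remain hidden in this game.

```python
# Sketch of the selective-opening game: A commits to n values,
# B adaptively picks half to be opened, and security asks whether
# the unopened commitments still hide their values.
# Com(v, r) = SHA-256(r || v) is a stand-in scheme for this demo.
import hashlib
import secrets

def com(value: bytes, rand: bytes) -> bytes:
    return hashlib.sha256(rand + value).digest()

n = 8
values = [secrets.token_bytes(4) for _ in range(n)]
rands = [secrets.token_bytes(16) for _ in range(n)]

# A commits to n values
commitments = [com(v, r) for v, r in zip(values, rands)]

# B adaptively picks half the indices to be opened
opened = sorted(secrets.SystemRandom().sample(range(n), n // 2))

# A opens the requested commitments; B verifies each opening
for i in opened:
    assert com(values[i], rands[i]) == commitments[i]
# Security question: do the unopened commitments remain hidden?
```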
One-More Inversion Assumptions [BNPS’02]
• You get n target points y_1, …, y_n in a group G with generator g.
• Can you find the discrete logarithms of all n of them if you may make n−1 queries to a discrete-logarithm oracle (for G and g)?
• The one-more DLOG assumption states that no PPT algorithm can succeed with non-negligible probability
  – [BNPS] and follow-up work: very useful for proving security of practical schemes
• Can the one-more DLOG assumption be based on more standard assumptions?
  – What if we weaken the assumption and only give the attacker n^ε queries?
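The game structure can be sketched in a toy group small enough that the "attacker" below wins by brute-forcing one target and spending its n−1 oracle queries on the rest. In a cryptographic group, the assumption says no PPT attacker can do this.

```python
# Sketch of the one-more DLOG game in the order-11 subgroup of Z_23*
# generated by 2 (toy parameters; brute force works only because the
# group is tiny).
import secrets

p, q, g = 23, 11, 2
n = 4

# Challenger: n random targets y_i = g^{x_i}
xs = [secrets.randbelow(q) for _ in range(n)]
targets = [pow(g, x, p) for x in xs]

queries_left = n - 1
def dlog_oracle(y: int) -> int:
    """Discrete-log oracle; the challenger allows only n-1 calls."""
    global queries_left
    assert queries_left > 0, "query budget exhausted"
    queries_left -= 1
    return next(e for e in range(q) if pow(g, e, p) == y)

# Attacker: query the oracle on n-1 targets, brute-force the last one
answers = [dlog_oracle(y) for y in targets[:-1]]
answers.append(next(e for e in range(q) if pow(g, e, p) == targets[-1]))

# Challenger checks: all n discrete logs recovered with only n-1 queries
assert all(pow(g, a, p) == y for a, y in zip(answers, targets))
```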
Unique Non-interactive Blind Signatures [Chaum’82]
• Signature scheme where a user U may ask the signer S to sign a message m, while keeping m hidden from S
  – Furthermore, there exists only a single valid signature per message
  – Chaum provided a first implementation in 1982; very useful in, e.g., E-cash
  – [BNPS] give a proof of security in the Random Oracle Model based on a one-more RSA assumption.
• Can we base security of Chaum’s scheme, or any other unique blind signature scheme, on any standard assumption?
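Chaum's RSA blind-signature protocol is short enough to walk through. The textbook key N = 61·53 = 3233, e = 17, d = 2753 is used for illustration only; a real scheme hashes the message and uses large keys.

```python
# Toy walkthrough of Chaum's RSA blind signature: U blinds m with a
# random factor, S signs the blinded value without learning m, and
# U unblinds to obtain the unique signature m^d mod N.
import math
import secrets

N, e, d = 3233, 17, 2753      # textbook RSA key: e*d = 1 mod phi(N)

m = 42                         # the message U wants signed, hidden from S

# User U: pick a blinding factor r coprime to N, send m * r^e mod N
while True:
    r = secrets.randbelow(N - 2) + 2
    if math.gcd(r, N) == 1:
        break
blinded = (m * pow(r, e, N)) % N

# Signer S: signs the blinded message without learning m
blinded_sig = pow(blinded, d, N)     # = m^d * r mod N

# User U: unblind by dividing out r
sig = (blinded_sig * pow(r, -1, N)) % N

# Anyone can verify: sig^e == m (mod N). Uniqueness: m^d mod N is the
# only valid signature on m, as the task above requires.
assert pow(sig, e, N) == m
```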
Sequential Witness Hiding of O(1)-round public-coin protocols
• Take any of the classic O(1)-round public-coin ZK protocols (e.g., GMW, Blum)
• Repeat them in parallel to get negligible soundness error.
• Do they suddenly leak the witness to the statement proved? [Feige’90]
  – Sequential WH: no verifier can recover the witness after sequentially participating in polynomially many proofs.
• Can sequential WH of those protocols be based on any standard assumption?
Main Result
• For a general class of intractability assumptions, there do NOT exist Turing security reductions demonstrating the security of any of those schemes/tasks/assumptions
• Any security reduction R must itself constitute an attack on the assumption
Intractability Assumptions
• Following [Naor’03], we model an intractability assumption as an interaction between a challenger C and an attacker A.
  – The goal of A is to make C accept
  – C may be computationally unbounded (different from [Naor’03], [GW’11])
  – The only restriction is that the number of communication rounds is an a-priori bounded polynomial.
[Diagram: C sends a challenge, e.g., f(r) for a random r; A responds with a message]
Intractability assumption (C,t): “no PPT can make C output 1 w.p. significantly above t”
E.g., 2-round: f is a OWF, Factoring, G is a PRG, DDH, …
O(1)-round: Enc is semantically secure (FHE), (P,V) is WH
O(1)-round with unbounded C: (P,V) is sound
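The 2-round case of this challenger/attacker framing can be sketched for the assumption "f is a OWF": C samples r, sends f(r), and outputs 1 iff the attacker returns a preimage. SHA-256 stands in for f as a heuristic choice, and here t is negligible, so no PPT attacker should win.

```python
# Sketch of a 2-round intractability assumption (C,t): the challenger
# C samples r, sends f(r), and accepts iff the attacker returns a
# preimage. SHA-256 stands in for the OWF f (a heuristic choice).
import hashlib
import secrets

def f(x: bytes) -> bytes:
    return hashlib.sha256(x).digest()

def challenger(attacker) -> int:
    """One run of the 2-round game; returns C's output bit."""
    r = secrets.token_bytes(16)
    response = attacker(f(r))      # round 1: C -> A; round 2: A -> C
    return 1 if f(response) == f(r) else 0

# A trivial guessing attacker wins only with negligible probability.
guessing_attacker = lambda y: secrets.token_bytes(16)
print(challenger(guessing_attacker))   # almost certainly 0
```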
Main Theorem
Let (C,t) be a k(·)-round intractability assumption where k is a polynomial. If there exists a PPT reduction R for basing the security of any of the previously mentioned schemes on the hardness of (C,t), then there exists a PPT attacker B that breaks (C,t).
Note: the restriction on C being bounded-round is necessary; otherwise we would include the assumptions that the schemes themselves are secure!
Related Work
• Several earlier lower bounds:
  – One-more inversion assumptions [BMV’08]
  – Selective opening [BHY’09]
  – Witness Hiding [P’06,HRS’09,PTV’10]
  – Blind Signatures [FS’10]
• But they only consider restricted types of reductions (à la [FF’93,BT’04]), or (restricted types of) black-box constructions (à la [IR’88])
  – Only exceptions: [P’06,PTV’10] provide conditional lower bounds on constructions of certain types of WH proofs based on OWFs
• Our result applies to ANY Turing security reduction and also non-black-box constructions.
Proof Outline
1. Sequential Witness Hiding is “complete”
   – A positive answer to any of the questions implies the existence of a “special” O(1)-round sequential WH proof/argument for a language with unique witnesses.
2. Sequential WH of “special” O(1)-round proofs/arguments for languages with unique witnesses cannot be based on poly-round intractability assumptions using a Turing reduction.
Special-sound proofs [CDS,Bl]
[Diagram: to prove “X is true”, the prover sends a first message a; the verifier sends a challenge b_i ←R {0,1}^n and receives a response c_i. From two accepting transcripts (a, b_0, c_0) and (a, b_1, c_1) with b_0 ≠ b_1, one can extract a witness w.]
Relaxations:
• multiple rounds
• computationally sound protocols (a.k.a. arguments)
• need p(n) transcripts (instead of just 2) to extract w
Generalized special-sound
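Special-soundness extraction is concrete for the Schnorr Sigma protocol: two accepting transcripts sharing the first message but with different challenges yield the witness. The toy group (p = 23, q = 11, g = 2) is for illustration only.

```python
# Special-soundness extractor for the Schnorr Sigma protocol (toy
# group): given transcripts (a, b0, c0) and (a, b1, c1) with b0 != b1,
# recover the unique witness w = x with y = g^x.
import secrets

p, q, g = 23, 11, 2
x = secrets.randbelow(q)            # witness: discrete log of y
y = pow(g, x, p)                    # statement (unique witness mod q)

# One first message, two distinct challenges, two responses
r = secrets.randbelow(q)
a = pow(g, r, p)
b0, b1 = 3, 7                       # two distinct challenges in Z_q
c0 = (r + b0 * x) % q
c1 = (r + b1 * x) % q

# Both transcripts accept:
assert pow(g, c0, p) == (a * pow(y, b0, p)) % p
assert pow(g, c1, p) == (a * pow(y, b1, p)) % p

# Extractor: w = (c0 - c1) / (b0 - b1) mod q
w = ((c0 - c1) * pow(b0 - b1, -1, q)) % q
assert w == x
```

The unique-witness condition used later in the proof means this w is the only value the emulated oracle A' could ever return, so extraction pins it down exactly.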
Main Lemma
• Let (C,t) be a k(·)-round intractability assumption where k is a polynomial. Let (P,V) be an O(1)-round generalized special-sound proof of a language L with unique witnesses.
• If there exists a PPT reduction R for basing sequential WH of (P,V) on the hardness of (C,t), then there exists a PPT attacker B that breaks (C,t)
Proof Idea
[Diagram: challenger C sends f(r) for a random r; the combined machine R^A responds]
Assume R^A breaks C whenever A completely recovers the witness of any statement x it hears sufficiently many sequential proofs of.
Goal: emulate in PPT a successful A’ for R, and thus break C in PPT (the idea goes back to the [BV’99] “meta-reduction”, and even earlier to [Bra’79])
Proof Idea
[Diagram: challenger C interacts with R^A; R gives A a proof of a statement x and A returns the witness w]
Assume R^A breaks C whenever A breaks sequential WH of some special-sound proof for a language with unique witnesses.
Assume the reduction R is “nice” [BMV’08,HRS’09,FS’10]:
• R only asks a single query to its oracle (or asks its queries sequentially)
• Then, simply “rewind” R, feeding it a new “challenge”, and extract the witness
The unique-witness requirement is crucial to ensure we emulate a good oracle A’.
General Reductions: Problem I
[Diagram: R nests its oracle calls for statements x1, x2, x3 (receiving witnesses w1, w2, w3); rewinding one session means redoing the work of all sessions nested inside it]
Problem: R might nest its oracle calls, so “naïve extraction” requires exponential time (cf. Concurrent ZK [DNS’99])
Solution: If we require R to provide many sequential proofs, then we can (recursively) find one proof where the nesting depth is “small”
Use techniques reminiscent of Concurrent ZK à la [RK’99], [CPS’10]
General Reductions: Problem II
Problem: R might not only nest its oracle calls, but may also rewind its oracle. Special-soundness might no longer hold under such rewindings.
Solution: Pick the oracle’s messages using a hash function.
Use Techniques reminiscent of Black-box ZK lower-bound of [GK’90],[P’06]
The O(1)-round restriction on (P,V) is crucial here.
General Reductions: Problem III
[Diagram: R interacts with C while simultaneously proving a statement x to its oracle and receiving the witness w]
Problem: Oracle calls may be intertwined with interaction with C
Solution: If we require R to provide many sequential proofs, then at least one proof is guaranteed not to intertwine
1. Security of several “classic” cryptographic tasks/schemes---which are believed to be secure---cannot be proven (using a Turing reduction) based on “standard” intractability assumptions.
2. We establish a connection between lower bounds for security reductions and concurrent security.
In Sum
The GOOD: provably secure under standard assumptions
The BAD: broken
The ANNOYING: not broken, not provably secure (…but very efficient)
Ways Around It?
Super-polynomial security reductions:
• Basing security on “super-poly” intractability assumptions
• Possible to overcome some, but not all, of the lower bounds
Full characterization in the paper.
Non-black-box security reductions:
• Allow R to look at the code of A
• Our lower bounds do NOT apply
Possible to overcome the Main Lemma [B’01,PR’06]
PPT Turing security reductions provide stronger security guarantees: any attacker---even if I don’t know the description of his brain---with reproducible behavior can be efficiently used to break the challenge
New types of assumptions? Instead of intractability, tractability [W’10]? “Knowledge” assumptions? Hard to “falsify” [Naor’03]