YALE UNIVERSITY
DEPARTMENT OF COMPUTER SCIENCE

CPSC 461b: Foundations of Cryptography Notes 9 (rev. 2)
Professor M. J. Fischer February 10, 2009



Lecture Notes 9

22 Proving a Predicate is Hard-Core

We continue the proof of Lemma 3 of section 21. Recall that f is a strongly one-way and length preserving function, and that

\[ g(x,r) \stackrel{\mathrm{df}}{=} (f(x),\, r) \]
\[ b(x,r) \stackrel{\mathrm{df}}{=} x \cdot r \bmod 2. \]
We showed in Lemma 2 that g is strongly one-way. We want to show that b is a hard core for g.

Assume to the contrary that b is not a hard core for g. Then there exists a p.p.t. algorithm G that predicts b with non-negligible advantage ε_G(n). Thus, there is a polynomial p(·) such that

\[ \Pr[G(f(U_n), X_n) = b(U_n, X_n)] \;\ge\; \frac{1}{2} + \frac{1}{p(n)} \qquad (1) \]

for all n in an infinite set N.

To complete the proof of Lemma 3, we will do two things:

  1. Construct a p.p.t. algorithm A that uses G to invert f.
  2. Show that A succeeds with non-negligible probability, contradicting the assumption that f is strongly one-way.

We construct A now and defer the analysis of its success probability to the next lecture.

22.1 Using G to invert f

How can G help us to invert f? A priori it doesn’t seem to be much help to have a predictor for only a single predicate when our algorithm A is supposed to correctly output a length-n string in f^{-1}(y). It’s not at all obvious how to use G, and several tricks are involved.

Using b to extract bits of x: Let e_i be the unit vector with a 1-bit in position i and 0’s elsewhere. As before, we treat strings in {0,1}^* as bit-vectors, so x_i is the ith bit of x. It follows from the definition of Boolean dot product that b(x, e_i) = x ⋅ e_i mod 2 = x_i.

Fact

\[ b(x,r) \oplus b(x, r \oplus e_i) = x_i. \qquad (2) \]

This follows because

\[
b(x,r) \oplus b(x,s)
  = \Big( \bigoplus_{j=1}^{n} x_j \cdot r_j \Big) \oplus \Big( \bigoplus_{j=1}^{n} x_j \cdot s_j \Big)
  = \bigoplus_{j=1}^{n} x_j \cdot (r_j \oplus s_j)
  = b(x, r \oplus s). \qquad (3)
\]
Taking s = r ⊕ e_i, equation (3) gives
\[ b(x,r) \oplus b(x, r \oplus e_i) = b(x, r \oplus (r \oplus e_i)) = b(x, e_i) = x_i \qquad (4) \]

as desired.
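To see the identity in action, here is a minimal Python check (not part of the original notes; the helper names dot_b, unit, and xor are ad hoc) that computes b as a mod-2 dot product and verifies equations (2) and (4) on random inputs.

```python
import random

def dot_b(x, r):
    """b(x, r) = x . r mod 2, with x and r given as lists of bits."""
    return sum(xi & ri for xi, ri in zip(x, r)) % 2

def unit(n, i):
    """e_i: the length-n unit vector with a 1 in position i (0-indexed here)."""
    return [1 if j == i else 0 for j in range(n)]

def xor(u, v):
    """Bitwise XOR of two bit-vectors."""
    return [a ^ b for a, b in zip(u, v)]

n = 8
x = [random.randint(0, 1) for _ in range(n)]
r = [random.randint(0, 1) for _ in range(n)]
for i in range(n):
    # Equations (2)/(4): b(x, r) xor b(x, r xor e_i) equals x_i.
    assert dot_b(x, r) ^ dot_b(x, xor(r, unit(n, i))) == x[i]
```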

First idea for A: For each i = 1, …, n, use G to guess b(x,r) and b(x, r ⊕ e_i) for random r, then use equation (4) to guess x_i, i.e., guess x_i = G(y,r) ⊕ G(y, r ⊕ e_i). Repeat this procedure to obtain polynomially many guesses of x_i and choose the majority value. Return x = x_1 ⋯ x_n as the guess for f^{-1}(y).

If G is correct about b on both calls, then x_i is correct. As long as the probability that both calls on G are correct is greater than 1/2 + 1/poly, repetition will amplify this probability to be sufficiently close to 1 so that the probability of all n bits of x being correct is also close to 1. Unfortunately, this is only the case when G’s success probability is 3/4 + 1/poly, that is, its advantage is a little bit greater than 1/4 (by the union bound, if each call errs with probability less than 1/4 − 1/poly, then both calls are correct with probability greater than 1/2 + 2/poly). All we know about our G is that it satisfies inequality (1), so its advantage is only 1/p(n).
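The following Python sketch (not part of the original notes) makes the first idea concrete; G is assumed to be available as a callable oracle G(y, r) returning a guessed bit, and the name first_idea_invert and the repetition count trials are ad hoc. As argued above, this only succeeds when G’s advantage exceeds 1/4.

```python
import random

def first_idea_invert(y, n, G, trials=100):
    """Naive inverter: majority-vote each bit using independent random r's.
    Only works if G guesses b correctly with probability > 3/4."""
    z = []
    for i in range(n):
        votes = 0
        for _ in range(trials):
            r = [random.randint(0, 1) for _ in range(n)]
            r_ei = r[:]                     # r xor e_i: flip bit i
            r_ei[i] ^= 1
            votes += G(y, r) ^ G(y, r_ei)   # one guess for x_i
        z.append(1 if 2 * votes > trials else 0)
    return z
```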

Second idea for A: Same as the first idea, except instead of using G to guess b(x,r), we just use a random bit b̂(r). Thus, we guess that x_i = b̂(r) ⊕ G(y, r ⊕ e_i). This of course doesn’t work since the probability of being correct is exactly 1/2 and we get no advantage. But if we somehow had an oracle that would give us the correct value for b̂(r) = b(x,r), then this idea would indeed work, since then the advantage at guessing x_i would be the same as G’s advantage at guessing b.

Third idea for A: We generate a small (O(log n)-sized) set of random strings R_0 and corresponding guesses for b̂(r) for each r ∈ R_0. From R_0, we deterministically generate a polynomial-sized set of strings R and corresponding values for b̂(r), r ∈ R. By the way the construction will work, if b̂(r) is correct for all r ∈ R_0, then b̂(r) is correct for all r ∈ R. Since the number of strings in R_0 for which we require correct guesses is only O(log n), it follows that the probability of all O(log n) guesses being correct is at least 1/poly(n), since 2^{O(log n)} = poly(n).

The strings in R are uniformly distributed over {0,1}^n, but they are only pairwise independent. However, this turns out to be sufficient for the construction to work.

How to generate R_0 and R: Let ℓ = log_2(2n ⋅ p(n)^2 + 1).

  1. Select s_1, …, s_ℓ ∈ {0,1}^n uniformly and independently. Select σ_1, …, σ_ℓ ∈ {0,1} uniformly and independently. Let R_0 = {s_1, …, s_ℓ} and b̂(s_i) = σ_i, i = 1, …, ℓ.
  2. Let 𝒥 be the family of non-empty subsets of {1, 2, …, ℓ}. For all sets J ∈ 𝒥, compute
\[ r_J = \bigoplus_{j \in J} s_j \qquad\text{and}\qquad \rho_J = \bigoplus_{j \in J} \sigma_j. \]

     Let R = {r_J ∣ J ∈ 𝒥}, and b̂(r_J) = ρ_J for all J ∈ 𝒥.

Note that r_{{i}} = s_i and ρ_{{i}} = σ_i, so indeed R ⊇ R_0.
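The construction of R_0 and R is easy to express in code. The following Python sketch (not part of the original notes; the helper names xor_vecs and build_R are ad hoc) chooses the ℓ seeds and then XORs them over every non-empty subset J, exactly as in steps 1 and 2 above.

```python
import random
from itertools import combinations

def xor_vecs(vecs, n):
    """XOR together a list of length-n bit-vectors."""
    out = [0] * n
    for v in vecs:
        out = [a ^ b for a, b in zip(out, v)]
    return out

def build_R(n, ell):
    """Step 1: choose s_1..s_ell and sigma_1..sigma_ell uniformly.
    Step 2: for every non-empty J, r_J = XOR of s_j and rho_J = XOR of sigma_j."""
    s = [[random.randint(0, 1) for _ in range(n)] for _ in range(ell)]
    sigma = [random.randint(0, 1) for _ in range(ell)]
    R = {}
    for size in range(1, ell + 1):
        for J in combinations(range(ell), size):
            r_J = xor_vecs([s[j] for j in J], n)
            rho_J = 0
            for j in J:
                rho_J ^= sigma[j]
            R[J] = (r_J, rho_J)
    return s, sigma, R
```

If the guesses σ_j happen to equal b(x, s_j) for all j, then (by the Fact below) every ρ_J computed this way equals b(x, r_J).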

Fact If b(x, s_j) = σ_j for all j ∈ {1, …, ℓ}, then b(x, r_J) = ρ_J for all J ∈ 𝒥.

This follows because

\[ b(x, r_J) = b\Big(x, \bigoplus_{j \in J} s_j\Big) = \bigoplus_{j \in J} b(x, s_j) = \bigoplus_{j \in J} \sigma_j = \rho_J. \]
The first equality is from the definition of r_J, the second comes from repeated use of equation (3), the third uses the assumption that σ_j is correct for all j ∈ {1, …, ℓ}, and the final equality is from the definition of ρ_J.

22.2 The complete algorithm for A

Here is the complete algorithm for A. On input y, let n = |y| and ℓ = log_2(2n ⋅ p(n)^2 + 1).

  1. Uniformly and independently, select s_1, …, s_ℓ ∈ {0,1}^n and σ_1, …, σ_ℓ ∈ {0,1}.
  2. For all non-empty J ⊆ {1, 2, …, ℓ}, compute
\[ r_J = \bigoplus_{j \in J} s_j \qquad\text{and}\qquad \rho_J = \bigoplus_{j \in J} \sigma_j. \]

  3. For all i ∈ {1, …, n}, for all non-empty J ⊆ {1, …, ℓ}, compute
     z_i^J = ρ_J ⊕ G(y, r_J ⊕ e_i).

  4. For all i ∈ {1, …, n}, let z_i = majority_J {z_i^J}.
  5. Output z = z_1 ⋯ z_n.
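Putting the steps together, here is a minimal Python sketch of A (not part of the original notes). It assumes G is given as a callable oracle G(y, r) returning a bit, and that the caller supplies ℓ computed as above; the name invert_A is ad hoc, and majority ties are broken toward 0.

```python
import random
from itertools import combinations

def invert_A(y, n, ell, G):
    """Sketch of algorithm A: attempt to output z with f(z) = y, using predictor G."""
    # Step 1: random seeds and guessed predicate values.
    s = [[random.randint(0, 1) for _ in range(n)] for _ in range(ell)]
    sigma = [random.randint(0, 1) for _ in range(ell)]

    # Step 2: r_J and rho_J for every non-empty subset J of {1,...,ell}.
    pairs = []
    for size in range(1, ell + 1):
        for J in combinations(range(ell), size):
            r_J = [0] * n
            rho_J = 0
            for j in J:
                r_J = [a ^ b for a, b in zip(r_J, s[j])]
                rho_J ^= sigma[j]
            pairs.append((r_J, rho_J))

    # Steps 3-4: guess each bit by majority vote over all J.
    z = []
    for i in range(n):
        votes = 0
        for r_J, rho_J in pairs:
            r_flip = r_J[:]
            r_flip[i] ^= 1                     # r_J xor e_i
            votes += rho_J ^ G(y, r_flip)      # z_i^J
        z.append(1 if 2 * votes > len(pairs) else 0)

    # Step 5: output the assembled guess.
    return z
```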