YALE UNIVERSITY
DEPARTMENT OF COMPUTER SCIENCE

CPSC 461b: Foundations of Cryptography Notes 4 (rev. 2)
Professor M. J. Fischer January 22, 2009




13 A Careful Proof of BPP ⊆ P/poly

We supply the missing piece to the proof of Theorem 4 in section 11 that BPP ⊆ P/poly. Namely, if L ∈ BPP, there exists a ppTM M′ recognizing L with error probability less than 2^{-n} for all inputs x of length n.

Lemma 1 Let L ∈ BPP. There is a probabilistic polynomial-time Turing machine M′ such that

  1. ∀x ∈ L, Pr[M′(x) = 1] > 1 - 2^{-|x|};
  2. ∀x ∉ L, Pr[M′(x) = 0] > 1 - 2^{-|x|}.

Proof: By definition of BPP, there exists a ppTM M that recognizes L with error probability at most 1/3 for each input x. We can assume without loss of generality that M always outputs either 0 or 1.

Let r be an odd positive integer to be determined later. The ppTM M′ on input x of length n runs M(x) a total of r times and gives as its output the majority value among the multiset of r outputs from M. We use the Chernoff bound (see lecture 1)

\[
\Pr\left[\,\left|\frac{\sum X_i}{r} - p\right| \ge \varepsilon\right] \le 2e^{-\frac{\varepsilon^2 r}{2p(1-p)}} \tag{1}
\]

plus additional probabilistic reasoning to bound the error probability of M′.

For 1 ≤ i ≤ r, define the random variable

\[
X_i =
\begin{cases}
1 & \text{if the $i$th run of $M(x)$ gives the wrong answer} \\
0 & \text{if the $i$th run of $M(x)$ gives the right answer.}
\end{cases}
\]

The number of wrong answers in computing M′(x) is ∑X_i, so M′(x) gives the wrong answer iff ∑X_i > r/2.

We now calculate an upper bound on the error probability for M′(x). Let p be the probability that M(x) gives the wrong answer.¹ By assumption, p ≤ 1/3.

\begin{align}
\Pr[M'(x)\text{ gives wrong answer}] &= \Pr\left[\sum X_i > \frac{r}{2}\right] \tag{2}\\
&= \Pr\left[\frac{\sum X_i}{r} - p > \frac{1}{2} - p\right] \tag{3}\\
&\le \Pr\left[\frac{\sum X_i}{r} - p > \frac{1}{6}\right] \tag{4}\\
&\le \Pr\left[\left|\frac{\sum X_i}{r} - p\right| > \frac{1}{6}\right]. \tag{5}
\end{align}
Equation 4 follows since 1/2 − p ≥ 1/6, and equation 5 follows since adding the absolute value brackets can only enlarge the event.

Taking ε = 1/6 and using the fact that p(1 − p) ≤ 2/9, equation 1 yields

\[
\Pr\left[\left|\frac{\sum X_i}{r} - p\right| > \frac{1}{6}\right] \le 2e^{-r/16}. \tag{6}
\]

It is easily verified that

\[
2e^{-r/16} < 2^{-n} \tag{7}
\]

for all r > 16(n + 1) ln 2.
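For completeness, the verification of equation 7 is a one-line calculation with logarithms:

```latex
2e^{-r/16} < 2^{-n}
\iff e^{-r/16} < 2^{-(n+1)}
\iff \frac{r}{16} > (n+1)\ln 2
\iff r > 16(n+1)\ln 2.
```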

We now fix r to be the least odd positive integer satisfying equation 7. Combining equations 2–7 gives the desired error bound:

\[
\Pr[M'(x)\text{ gives wrong answer}] < 2^{-n}. \tag{8}
\]

Moreover, M′ runs in polynomial time since M does and r = O(n). This establishes the lemma. □
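As a sanity check on the argument, the majority-vote amplification is easy to simulate. The sketch below is illustrative only: the base error probability 1/3 and the trial counts are arbitrary choices, and the function names are ours.

```python
import random

random.seed(0)  # for reproducibility of this illustration

def M(correct_bit, error_prob=1/3):
    # Simulated base machine: outputs the wrong bit with probability 1/3.
    return 1 - correct_bit if random.random() < error_prob else correct_bit

def M_prime(correct_bit, r):
    # Amplified machine M': run M r times (r odd) and output the majority bit.
    votes = sum(M(correct_bit) for _ in range(r))
    return 1 if votes > r / 2 else 0

def empirical_error(r, trials=20000):
    # Fraction of trials on which M' answers incorrectly.
    return sum(M_prime(1, r) != 1 for _ in range(trials)) / trials

for r in (1, 9, 17, 33):
    print(r, empirical_error(r))
```

The observed error falls rapidly with r, consistent with the 2e^{-r/16} bound (which is loose; the true decay is somewhat faster).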

14 One-Way Functions

One-way functions are at the heart of cryptography. Intuitively, a function f : {0,1}*→{0,1}* is one-way if it is “easy” to compute and “hard” to “invert”. We next give precise technical definitions of these concepts.

14.1 Strongly one-way functions

Definition: The function f is strongly one-way if it satisfies the following conditions:

  1. f is computable in polynomial time.
  2. For every probabilistic polynomial-time algorithm A′, every positive polynomial p(·), and all sufficiently large n,

     \[
     \Pr\left[A'(f(U_n), 1^n) \in f^{-1}(f(U_n))\right] < \frac{1}{p(n)}. \tag{9}
     \]

Intuitively, condition 2 says that any algorithm A′ that attempts to invert f has success probability less than one over any positive polynomial for all but finitely many n. Moreover, this fact remains true even when A′ is given additional information about the inverse that it is trying to find.

The notation of equation 9 is very compressed and deserves some explanation. First of all, U_n is the uniformly distributed random variable on binary strings of length n. Each occurrence of U_n refers to the same length-n random string x, so each occurrence of f(U_n) refers to the same random string y = f(x). While x is uniformly distributed among all strings of length n, y is not so distributed. We are not assuming at this point that f is length-preserving or that all strings x of length n map onto y's of the same length. Moreover, f is not assumed to be one-to-one, so Pr[f(U_n) = y] = m·2^{-n}, where m is the number of length-n strings that f maps to y.

Because we don't assume f is one-to-one, the notation f^{-1}(y) refers to a set rather than to a particular string. By definition,

\[
f^{-1}(y) = \{ z \mid f(z) = y \}.
\]

This set is called the pre-image of y. Note that if y = f(x), then x ∈ f^{-1}(y), but there may be other strings in the pre-image as well.

With this background, we see that the event within the square brackets of equation 9 is the event that A′ succeeds in outputting some member of the pre-image f^{-1}(y) for a randomly chosen string y as described above. Condition 2 then says that the probability that this event occurs is negligible (according to the definition of "negligible" in section 10). Note that A′ is not required to output the string x that was used to generate y; any string in the pre-image will do.

Note also that A′ is told the length of x as its second input. Even with this additional help, A′ has only negligible success probability. This is done to prevent functions from being considered one-way only because their inverses are too long to output in a polynomial amount of time, e.g., the function g that maps each string x to |x| written as a binary numeral. For example, g(111010111) = 1001 since |111010111| = 9, and (1001)₂ = 9.
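To see why the second input matters, consider the function g just described: a preimage of y under g has length (y)₂, which is exponential in |y|, so an inverter given only y cannot even write its answer down in time polynomial in |y|, while with 1^n available inversion is trivial. A minimal sketch (function names are ours):

```python
def g(x: str) -> str:
    # g maps a bit string to the binary numeral for its length.
    return format(len(x), 'b')

def invert_g(y: str, unary_n: str) -> str:
    # With the auxiliary input 1^n available, inverting g is trivial:
    # any string of length n = |1^n| is a preimage of y = bin(n).
    return '0' * len(unary_n)

print(g('111010111'))            # '1001', since |x| = 9 and 9 is 1001 in binary
x = invert_g('1001', '1' * 9)
print(g(x) == '1001')            # True: 0^9 is a preimage of '1001'
```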

14.2 Two obvious algorithms

Here are two obvious algorithms for inverting a function f.

  1. Random guess. A1(y, 1^n) outputs U_n, a uniformly distributed random string of length n. This is correct with probability at least 2^{-n} since there is a string x of length n such that f(x) = y, and A1 has one chance in 2^n of finding it. It will be correct with probability strictly greater than 2^{-n} if there is more than one string of length n that f maps to y. In particular, for the function f that maps every string to a constant c, A1 will be correct with probability 1.
  2. Constant output. A2(y, 1^n) outputs the string x₀ = 0^n for every input of length n. This also is correct with probability at least 2^{-n}, but for a different reason than A1. Let y₀ = f(x₀). Then A2(y, 1^n) is correct when y = y₀. Since x is chosen uniformly at random from length-n strings, the probability that y = y₀ is at least 2^{-n}. As before, the probability will be greater if there is more than one string of length n that f maps to y₀. In particular, for the constant function f that maps every string to y₀, A2 will be correct with probability 1.

Of course, for f to be strongly one-way, every p.p.t. algorithm A′ that attempts to invert f must have only a negligible probability of success, not just these two algorithms. But these examples provide a lower bound on what we can expect from a one-way function. In particular, no function can have an inversion success probability smaller than 2^{-n}.
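The two algorithms are easy to test empirically. The sketch below uses our own toy choice of f (the sum of the bits mod 4, written as two bits; any polynomial-time f would serve) and estimates each algorithm's success probability at n = 6, checking that it is at least 2^{-n}:

```python
import random

random.seed(0)

def f(x: str) -> str:
    # Toy polynomial-time function: sum of the bits of x, mod 4, as two bits.
    return format(sum(map(int, x)) % 4, '02b')

def A1(y: str, n: int) -> str:
    # Random guess: output a uniformly random string of length n.
    return ''.join(random.choice('01') for _ in range(n))

def A2(y: str, n: int) -> str:
    # Constant output: always output 0^n.
    return '0' * n

def success_prob(A, n, trials=20000):
    # Empirical probability that A outputs *some* preimage of y = f(x).
    wins = 0
    for _ in range(trials):
        x = ''.join(random.choice('01') for _ in range(n))
        wins += f(A(f(x), n)) == f(x)
    return wins / trials

n = 6
for A in (A1, A2):
    print(A.__name__, success_prob(A, n))  # both comfortably above 2**-6
```

For this particular f, both algorithms succeed with probability near 1/4, far above the 2^{-6} floor, illustrating that the bound is a floor rather than a typical value.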

14.3 Weakly one-way functions

We now define a much weaker version of a one-way function. While weakly one-way functions are themselves uninteresting for cryptographic purposes, we will show that their existence implies the existence of strongly one-way functions.

Definition: The function f is weakly one-way if it satisfies the following conditions:

  1. f is computable in polynomial time.
  2. There exists a positive polynomial p(·) such that for every probabilistic polynomial-time algorithm A′ and all sufficiently large n,

     \[
     \Pr\left[A'(f(U_n), 1^n) \notin f^{-1}(f(U_n))\right] > \frac{1}{p(n)}. \tag{10}
     \]

Note the subtle change in the order of quantifiers and the change in equation 10. This says that any algorithm A′ that attempts to invert f fails a noticeable fraction of the time on inputs of length n, for all sufficiently large n.

A function μ(n) is noticeable if μ(n) > 1/p(n) for some fixed positive polynomial p(·) and all sufficiently large n. A function cannot be both noticeable and negligible, but it is possible for a function to be neither. For example, the function μ(n) = n mod 2, which equals 1 on all odd numbers and 0 on all even numbers, is neither negligible nor noticeable. It is not negligible because it fails to be smaller than 1/p(n) for infinitely many n, where p(n) = 2. It is not noticeable because, for every positive polynomial q(·), it fails to be larger than 1/q(n) for infinitely many n (namely, the even ones).
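The claims about μ(n) = n mod 2 can be spot-checked numerically (a finite-range illustration, not a proof; the witness polynomials p and q below are our own choices):

```python
def mu(n: int) -> int:
    # The example function: 1 on odd n, 0 on even n.
    return n % 2

p = lambda n: 2       # witnesses "not negligible": mu(n) > 1/p(n) on every odd n
q = lambda n: n ** 2  # a sample polynomial; "not noticeable" holds for every q

# Not negligible: infinitely many n (the odd ones) have mu(n) = 1 > 1/2.
assert all(mu(n) > 1 / p(n) for n in range(1, 1000, 2))
# Not noticeable: infinitely many n (the even ones) have mu(n) = 0 < 1/q(n).
assert all(mu(n) < 1 / q(n) for n in range(2, 1000, 2))
```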