YALE UNIVERSITY

DEPARTMENT OF COMPUTER SCIENCE

CPSC 461b: Foundations of Cryptography, Notes 6 (rev. 1)

Professor M. J. Fischer, January 29, 2009

Lecture Notes 6

The goal of the next two lectures is to prove:

Theorem 1 (Theorem 2.3.2 in text) Weak one-way functions exist if and only if strong one-way functions exist.

Proof: [Sketch] The backwards direction is trivial since every strongly one-way function is also weakly one-way.

For the forwards direction, we construct a strongly one-way function g from a weakly one-way
function f. The construction is straightforward — g just evaluates f on a large number t(n)
of arguments and concatenates together the results. That is, g(x) = (f(x_{1}),…,f(x_{t(n)})), where
x = x_{1},…,x_{t(n)} is viewed as consisting of t(n) length-n strings, and t(n) is a suitably-chosen
polynomial. Intuitively, g(x) is hard to invert if f(x_{i}) is hard to invert for at least one i. __

Proving that this intuition is correct requires careful attention to detail and a substantial amount of probabilistic reasoning. The full proof will be presented in the next lecture.

Because of the complexity of the careful proof of Theorem 1, it is useful to look first at a careful proof of a simpler theorem that illustrates many, but not all, of the ideas that go into the full proof.

We introduce some terminology to simplify discussions of the difficulty of inverting functions. We
write A′ inverts f(x) to denote the event A′(f(x),1^{|x|}) ∈ f^{-1}(f(x)), and we write A′ fails on f(x) to denote
its complement. With this shorthand, it follows that

Pr[A′ inverts f(U_{n})] + Pr[A′ fails on f(U_{n})] = 1.

Definition: Let ρ ∈ [0,1]. A function f is deterministic ρ-one-way if for all deterministic polynomial time algorithms A′ and all sufficiently large n,

Pr[A′ fails on f(U_{n})] ≥ ρ.

Theorem 2 (Proposition 2.3.3 in text) Suppose f is deterministic 1/3-one-way. Define

g(x_{1},x_{2}) = (f(x_{1}),f(x_{2})), where |x_{1}| = |x_{2}|.

Then g is deterministic 0.55-one-way.

(Note that 0.55 < 1 - (2∕3)^{2}.)

Proof: Suppose g is not 0.55-one-way. Then there exists a polynomial time algorithm B′ that
inverts g(U_{2n}) with success probability > 1 - 0.55 = 0.45 for infinitely many n. Consider such an n and
let N = 2^{n}.

Define the N ×N matrix μ by μ(x_{1},x_{2}) = 1 if B′ succeeds on input g(x_{1},x_{2}) = (f(x_{1}),f(x_{2})) and
μ(x_{1},x_{2}) = 0 otherwise, where |x_{1}| = |x_{2}| = n. Then

Pr[B′ inverts g(U_{2n})] = (1/N^{2}) Σ_{x_{1},x_{2}} μ(x_{1},x_{2}) > 0.45.    (1)

If B′ operated independently on each component of the pair (f(U_{n}),f(U_{n})), then μ would have a
simple structure. Namely, one could define 0-1 valued functions r and c such that μ(x_{1},x_{2}) = r(x_{1})·c(x_{2}), so

Pr[B′ inverts g(U_{2n})] = Pr[r(U_{n}) = 1]·Pr[c(U_{n}) = 1] ≤ (2/3)·(2/3) = 4/9 < 0.45,

since each factor is the success probability of a polynomial time inverter of f, which is at most 2/3 because f is deterministic 1/3-one-way. This would contradict inequality (1).
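The arithmetic behind this independence argument can be sanity-checked numerically. A small sketch (the 2/3 bound is the maximum success probability of any deterministic polynomial time inverter of a deterministic 1/3-one-way f):

```python
# If B' handled the two components independently, its success probability on
# g(U_2n) would factor as a product of two per-component success probabilities,
# each at most 2/3 (since f is deterministic 1/3-one-way).
independent_bound = (2 / 3) ** 2         # = 4/9, about 0.444
assert independent_bound < 0.45          # contradicts inequality (1)
assert 0.55 < 1 - independent_bound      # matches the note: 0.55 < 1 - (2/3)^2 = 5/9
```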

However, B′ does not necessarily operate in such a way. Perhaps B′ can somehow “solve” the simultaneous equations

y_{1} = f(x_{1}),  y_{2} = f(x_{2})

jointly for (x_{1},x_{2}), exploiting the correlation between the two components.

Call a row (column) of μ good if at least 0.1% of the entries in it are 1 and bad otherwise.

Claim 1: At most a 0.668 fraction of the rows of μ are good, and similarly for the columns.

Proof: [of claim] By symmetry it suffices to consider the rows. Suppose the fraction of good rows > 66.8%. Consider the algorithm I′ for inverting f on input y:

- Choose x_{2} uniformly at random in {0,1}^{n}.
- Run B′(y,f(x_{2})) to obtain (x′,x′′).
- If f(x′) = y, halt with output x′, and announce “success”.

Now, algorithm A′(y) runs I′(y) 10,000 times. If any of the runs of I′(y) succeeds, then A′ succeeds and gives the same output as that run; otherwise, A′ fails.
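The effect of the 10,000-fold repetition can be checked numerically. A sketch; the 0.001 success bound for a single run of I′ comes from the definition of a good row:

```python
import math

p_single = 0.001        # I' succeeds w.p. >= 0.001 when x_1 is good
trials = 10_000
# A' fails only if all 10,000 independent runs of I' fail
p_fail_all = (1 - p_single) ** trials
assert p_fail_all < math.exp(-10) < 1e-4
p_success = 1 - p_fail_all
assert p_success > 0.9999
# combined with > 66.8% good rows, A' inverts f(U_n) w.p. > 0.668 * 0.9999 > 2/3
assert 0.668 * p_success > 2 / 3
```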

For every good x_{1},

Pr[I′ inverts f(x_{1})] ≥ 0.001,

since at least a 0.001 fraction of the choices of x_{2} satisfy μ(x_{1},x_{2}) = 1, and whenever B′ inverts g(x_{1},x_{2}), the first component x′ of its output satisfies f(x′) = y.

Hence,

Pr[A′ fails on f(x_{1})] ≤ (1 - 0.001)^{10000} < e^{-10} < 0.0001,

so

Pr[A′ inverts f(x_{1})] > 0.9999.

Thus, the overall success probability of A′(f(U_{n})) is at least

Pr[x_{1} is good] × 0.9999 > 0.668 × 0.9999 > 2/3,

contradicting the fact that f is deterministic 1/3-one-way, which requires that every deterministic polynomial time algorithm succeed on f(U_{n}) with probability at most 2/3 for all sufficiently large n. This proves the claim. □

To finish the proof of the theorem, we conclude from Claim 1 that there are at most (0.668N)^{2} 1’s in
the intersection of the good rows and good columns. The number of 1’s in each bad row and bad column is
less than 0.001N, and there are at most N bad rows and at most N bad columns. Hence, the total number of 1’s in μ is less than

(0.668N)^{2} + 2N(0.001N) = (0.668^{2} + 0.002)N^{2} < 0.45N^{2},

so Pr[B′ inverts g(U_{2n})] < 0.45. This contradicts inequality (1) and completes the proof. □
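The final counting argument reduces to simple arithmetic, which can be verified directly. A sketch of the bound on the fraction of 1's in μ under Claim 1:

```python
# Bound the fraction of 1's in the N x N matrix mu, using Claim 1:
# at most a 0.668 fraction of rows (and of columns) are good, and each bad
# row or column contains fewer than a 0.001 fraction of 1's.
good_block = 0.668 ** 2        # entries lying in a good row AND a good column
bad_rows_cols = 2 * 0.001      # 1's contributed by bad rows plus bad columns
upper_bound = good_block + bad_rows_cols   # fraction of 1's in mu is below this
assert upper_bound < 0.45      # contradicts Pr[B' inverts g(U_2n)] > 0.45
```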