YALE UNIVERSITY

DEPARTMENT OF COMPUTER SCIENCE

CPSC 461b: Foundations of Cryptography | Handout #5

Yitong Yin | February 26, 2009

Solutions to Problem Set 1

Originally due Thursday, February 19, 2009.

Problem 1 Chernoff bound – application 1

Consider a biased coin with probability p = 1∕3 of landing heads and probability 2∕3 of landing tails.
Suppose the coin is flipped some number n of times, and let X_{i} be a random variable denoting the i^{th} flip,
where X_{i} = 1 means heads, and X_{i} = 0 means tails. Use the Chernoff bound to determine a value for n
so that the probability that more than half of the coin flips come out heads is less than 0.001.

Let A denote the event that ∑_{i=1}^{n}X_{i} > n∕2, i.e. the event that more than half of the coin flips come out
heads.

Note that E[X_{i}] = p = 1∕3, and the deviation needed is ϵ = 1∕2 − 1∕3 = 1∕6. According to the Chernoff bound, the probability of the event is no greater than

2exp(−ϵ^{2}n∕(2p(1 − p))) = 2exp(−(1∕36)n∕(4∕9)) = 2exp(−n∕16).

Solving the inequality 2exp(−n∕16) < 0.001, we get that any n ≥ ⌈16ln(2000)⌉ = 122 suffices.
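As a numerical sanity check (a sketch, not part of the original handout), both the bound 2exp(−n∕16) and the exact binomial tail can be evaluated at n = 122:

```python
import math

n, p = 122, 1 / 3
bound = 2 * math.exp(-n / 16)        # the Chernoff bound derived above
# exact probability that more than half of the 122 flips are heads
tail = sum(math.comb(n, k) * p**k * (1 - p)**(n - k)
           for k in range(n // 2 + 1, n + 1))
print(bound < 0.001, tail < 0.001)   # True True
```

The exact tail is in fact far smaller than the Chernoff estimate, as expected for a bound of this type.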

Problem 2 Chernoff bound – application 2

At the Yale-Harvard hockey game on February 6, Yale made 50 shots on the Harvard goal and scored 5 times, whereas Harvard made 15 shots on the Yale goal and scored only once. Assume that both teams had equally good goalies and that for each team, the probability is p = 0.1 of a shot scoring a goal. Clearly, the expected number of goals is np, where n is the number of shots. For Yale, the actual number of goals g exactly matched the expectation, but for Harvard with 15 shots, the actual number of goals (1) fell considerably short of the expected number (1.5).

The purpose of this problem is to assess how unusual it is that Harvard scored fewer than the expected number of goals given the number of shots on the goal and the assumed success probability of each shot. In the following, let n = 15, g = 1, p = 0.1, and q = Pr[# goals after n shots ≤ g]. We wish to find a “good” upper bound on q.

- Happy Hacker remembered about the Chernoff bound presented in lecture 1, so he decided to use it to bound q. What values should he use for the parameters ε, n, and p that appear in the formula on the right hand side of the bound? Using these values, compute the value b of the right hand side. What does b tell Happy about q that he didn’t already know?
- Clever Clara knew right away that it was a waste of time to compute the Chernoff bound in this case and didn’t bother. How did Clara know that?
- Stolid Sean didn’t see the need to think hard about this problem and instead just plunged in and computed q to 4 decimal places using standard probability theory. How could he do this, and what is the answer?

Let X_{i} ∈ {0,1} be the random variable that indicates whether the i^{th} shot of Harvard scores.

- Let ϵ = p − g∕n. It is easy to verify that when g∕n < p, the event that ∑_{i=1}^{n}X_{i} ≤ g implies that |(1∕n)∑_{i=1}^{n}X_{i} − p| ≥ ϵ. When n = 15, g = 1, and p = 0.1, it holds that g∕n = 1∕15 < p and ϵ = 1∕30. Applying the Chernoff bound with these parameters,

  b = 2exp(−ϵ^{2}n∕(2p(1 − p))) = 2exp(−(1∕900)·15∕0.18) = 2exp(−1∕10.8) ≈ 1.82.

  Since b > 1 and every probability is at most 1, b tells Happy nothing about q that he didn't already know.

- The expected fraction of successful shots is p = 0.1, which would be matched exactly only by scoring 1.5 (fractional) goals; the closest integral goal counts, 1 and 2, each deviate from the mean by only 1∕30. With a deviation this small and only n = 15 trials, the exponent ϵ^{2}n∕(2p(1 − p)) is far below 1, so the Chernoff bound 2exp(−ϵ^{2}n∕(2p(1 − p))) necessarily exceeds 1 and carries no information about q. Clara saw this without computing anything.
- Note that ∑_{i=1}^{n}X_{i} follows the binomial distribution B(15, 0.1). Directly computing the probability,

  q = Pr[∑_{i=1}^{n}X_{i} ≤ 1] = (0.9)^{15} + 15·(0.1)·(0.9)^{14} = 2.4·(0.9)^{14} ≈ 0.5490.
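Both calculations above can be checked numerically; a small sketch (the Chernoff form used is the one assumed from lecture):

```python
import math

n, g, p = 15, 1, 0.1

# Happy's Chernoff bound (assumed lecture form):
#   Pr[|sum(X_i)/n - p| >= eps] <= 2*exp(-eps^2 * n / (2*p*(1-p)))
eps = p - g / n                                    # 1/30
b = 2 * math.exp(-eps**2 * n / (2 * p * (1 - p)))
print(f"{b:.2f}")                                  # 1.82 -- vacuous, since q <= 1

# Sean's exact answer: sum(X_i) ~ Binomial(15, 0.1)
q = sum(math.comb(n, k) * p**k * (1 - p)**(n - k) for k in range(g + 1))
print(f"{q:.4f}")                                  # 0.5490
```

So scoring at most one goal in 15 shots is in fact more likely than not, despite the shortfall from the expectation.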

Problem 3

Let M be a ppTM that accepts a language L and runs in time p(n) for some polynomial p(⋅). Let x be
an input string of length n and r a random choice string of length p(n) ≫ n. Let δ(x,r) = 1 if M(x,r),
the output of M with coin toss sequence r, gives the correct answer about x’s membership in L, and let
δ(x,r) = 0 otherwise. Suppose Pr[M(U_{n},U_{p(n)}) is correct] = 1 - 2∕2^{n}.

How large can we make the success probability of M(U_{n},r) by setting the second input of M to a
fixed string r? That is, what is the best lower bound on

max_{r} Pr[M(U_{n},r) is correct]

that is implied by the given information, where max_{r} is taken over all binary strings of length
p(n)?

[Note: This problem generalizes a fact used in the proof of Theorem 4, section 11, lecture notes 3.]

The naïve lower bound is 1 - 2∕2^{n}, since the maximum cannot be smaller than the average. We will show that
this is the best general lower bound by constructing a case where max_{r} Pr[M(U_{n},r) is correct] = 1 - 2∕2^{n}.

View δ(x,r) as a 2^{n} × 2^{p(n)} 0-1 matrix whose rows are indexed by the inputs x and whose columns are indexed by the random strings r. Construct δ so that each column contains exactly 2 zeroes. Then for every
r, it holds that Pr[δ(U_{n},r) = 1] = (2^{n} - 2)∕2^{n} = 1 - 2∕2^{n}.

Therefore the given constraint Pr[δ(U_{n},U_{p(n)}) = 1] = 1 - 2∕2^{n} is satisfied, yet the maximum is only
max_{r} Pr[δ(U_{n},r) = 1] = 1 - 2∕2^{n}, matching the naïve bound.
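The construction can be illustrated on small parameters; the sizes and the placement of the zeroes below are arbitrary choices for the sketch:

```python
# Toy instance of the matrix construction above: delta has 2^n rows (inputs)
# and 2^m columns (random strings), with exactly 2 zeroes in every column.
# Every column then averages to 1 - 2/2^n, so no fixed r beats the average.
n, m = 3, 4                                   # illustrative sizes only
rows, cols = 2**n, 2**m
# place column j's two zeroes at rows j % rows and (j + 1) % rows
delta = [[0 if i in (j % rows, (j + 1) % rows) else 1 for j in range(cols)]
         for i in range(rows)]
col_avg = [sum(delta[i][j] for i in range(rows)) / rows for j in range(cols)]
overall = sum(map(sum, delta)) / (rows * cols)
print(max(col_avg) == overall == 1 - 2 / rows)   # True
```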

Problem 4 One-way functions and the P-versus-NP question

[Textbook, Chapter 2, Exercise 3.]

The function f(π,ϕ,τ), where π represents the first n∕2 bits of the input and (ϕ,τ) is the input of f_{sat}, is defined as

f(π,ϕ,τ) = (0, f_{sat}(ϕ,τ)) if π = 0^{n∕2}, and f(π,ϕ,τ) = (1, π,ϕ,τ) otherwise.
It is obvious that f is polynomial-time-computable because both branches in the definition are polynomial-time-computable and the condition can be verified in linear time. Claim 1 holds.

Suppose that A is a polynomial-time algorithm that always inverts f. Let B be the algorithm,
with input domain the range of f_{sat}, defined by B(y) = A(0,y). Clearly B is also
polynomial-time. For every y in the range of f_{sat}, according to the definition of f, B(y) outputs some
(π,ϕ,τ) such that (0,f_{sat}(ϕ,τ)) = (0,y), i.e. f_{sat}(ϕ,τ) = y. Thus B is a polynomial-time
algorithm that always inverts f_{sat}, which contradicts the assumption that NP ≠ P. Claim 2
holds.

Let C be the program that, on input y, outputs all but the first bit of y if the first bit of y is 1, and outputs an arbitrary string otherwise. It is easy to see that C is a polynomial-time algorithm that inverts f whenever the first bit of y is 1. The probability that C fails to invert f is therefore at most Pr[π = 0^{n∕2}] = 2^{−n∕2}, which is negligible; hence f can be inverted with overwhelming probability and so is not one-way, even though by Claim 2 no polynomial-time algorithm inverts it everywhere.
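A toy sketch of f and C (the tuple encoding, the names, and the stand-in for f_{sat} are illustrative assumptions, not the textbook's exact definitions):

```python
import itertools

def f(pi, phi, tau):
    # pi = 0^{n/2}: the branch that hides an f_sat instance
    if not any(pi):
        return (0, ("fsat_output", phi, tau))   # stands in for (0, f_sat(phi, tau))
    return (1, pi, phi, tau)                    # otherwise just expose the input

def C(y):
    # invert f by stripping the leading bit when it is 1
    if y[0] == 1:
        return y[1:]
    return None                                 # arbitrary answer; may fail

# C succeeds on every input whose pi-part is not all-zero, i.e. with
# probability 1 - 2^{-n/2} over a uniform choice of pi.
ok = all(C(f(pi, "phi", "tau")) == (pi, "phi", "tau")
         for pi in itertools.product((0, 1), repeat=6) if any(pi))
print(ok)   # True
```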