YALE UNIVERSITY
DEPARTMENT OF COMPUTER SCIENCE
CPSC 467b: Cryptography and Computer Security
 Week 12 (rev. 3)

 Professor M. J. Fischer April 12 & 14, 2005

Lecture Notes, Week 12

## 1  Coin-Flipping

Alice and Bob are in the process of getting divorced and are trying to decide who gets custody of their pet cat, Fluffy. They both want the cat, so they agree to decide by flipping a coin: heads Alice wins; tails Bob wins. Bob has already moved out and does not wish to be in the same room with Alice. The feeling is mutual, so Alice proposes that she flip the coin and telephone Bob with the result.
This proposal of course is not acceptable to Bob since he has no way of knowing whether Alice is telling the truth when she says that the coin landed heads. "Look Alice," he says, "to be fair, we both have to be involved in flipping the coin. We'll each flip a private coin and XOR our two coins together to determine who gets Fluffy. You should be happy with this arrangement since even if you don't trust me to flip fairly, your own fair coin is sufficient to ensure that the XOR is unbiased." This sounds reasonable to Alice, so she lets him go on to propose the protocol of Figure 1. In this protocol, 1 means "heads" and 0 means "tails".
1. Alice chooses a random bit bA ∈ {0,1} and sends bA to Bob.
2. Bob chooses a random bit bB ∈ {0,1} and sends bB to Alice.
3. Both parties take the coin outcome to be b = bA ⊕ bB.
Figure 1: Distributed coin flip protocol requiring honest parties.
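Bob's proposal, and Alice's objection to it, are easy to see in code. This sketch (Python, for illustration only) simulates an honest run of the Figure 1 protocol and shows how a Bob who sees bA before choosing bB can force any outcome he likes.

```python
import secrets

def xor_coin_flip():
    """Honest run of the Figure 1 protocol."""
    b_a = secrets.randbelow(2)   # step 1: Alice's private coin
    b_b = secrets.randbelow(2)   # step 2: Bob's private coin
    return b_a ^ b_b             # step 3: 1 = heads (Alice), 0 = tails (Bob)

def dishonest_bob(b_a, desired=0):
    """Alice's objection: Bob already knows b_a, so he can pick b_b
    to force b = b_a XOR b_b to any value he wants."""
    return b_a ^ desired
```

With `dishonest_bob`, the "coin" always lands tails: `b_a ^ dishonest_bob(b_a)` is 0 for either value of bA.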
After Alice considers Figure 1 for a while, she objects. "This isn't fair. You get to see my coin flip before I see yours, so now you have complete control over the value of b." She suggests that she would be happy if the first two steps were reversed, so that Bob flipped his coin first, but Bob balks at that suggestion.
They then both remember last week's lecture and decide to use blobs to prevent either party from controlling the outcome. They agree on the protocol of Figure 2. At the completion of step 2, both Alice and Bob have each other's commitment (something they failed to achieve in the past, which is why they're in the middle of a divorce now), but neither knows the other's private bit. Each learns the other's bit when the corresponding open protocol completes.
1. Alice chooses a random bit bA ∈ {0,1}; Bob chooses a random bit bB ∈ {0,1}.
2. Alice and Bob exchange commitments commit(bA) and commit(bB).
3. Alice and Bob open their commitments with open(cA) and open(cB).
4. Both parties take the coin outcome to be b = bA ⊕ bB.
Figure 2: Distributed coin flip protocol using blobs.
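The protocol of Figure 2 can be sketched with a salted-hash commitment standing in for the blobs of the lecture. The hash construction here is an illustrative assumption, not the lecture's blob scheme.

```python
import hashlib
import secrets

def commit(bit):
    """Commit to a bit: publish a salted hash, keep (salt, bit) to open later."""
    salt = secrets.token_bytes(16)
    return hashlib.sha256(salt + bytes([bit])).hexdigest(), (salt, bit)

def open_ok(commitment, salt, bit):
    """Check that (salt, bit) opens the earlier commitment."""
    return hashlib.sha256(salt + bytes([bit])).hexdigest() == commitment

# Protocol of Figure 2
b_a, b_b = secrets.randbelow(2), secrets.randbelow(2)          # step 1
c_a, opening_a = commit(b_a)                                   # step 2:
c_b, opening_b = commit(b_b)                                   #   exchange commitments
assert open_ok(c_a, *opening_a) and open_ok(c_b, *opening_b)   # step 3: open
b = b_a ^ b_b                                                  # step 4: outcome
```

A real blob needs both hiding and binding; the salted hash is the usual practical stand-in for both properties.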
While this protocol appears to be completely symmetric, it really isn't quite, for one of the parties completes step 3 before the other one does. Say Bob completes opening cB first. At that point, Alice knows bB and hence the coin outcome b. If it turns out that she lost, she might decide to stop the protocol and refuse to complete her part of step 3.
We haven't really addressed the question for any of these protocols about what happens if one party quits in the middle or one party detects the other party cheating. We have only been concerned until now with the possibility of undetected cheating. But in any real situation, one party might feel that he or she stands to gain by cheating, even if the cheating is detected. That in turn raises complicated questions as to what happens next. Does a third party Carol become involved? If so, can Bob prove to Carol that Alice cheated? What if Alice refuses to talk to Carol? It may be instructive to think about the recourse that Bob has in similar real-life situations and to consider the reasons why such situations rarely arise. For example, what happens if someone fails to follow the provisions of a contract or if someone ignores a summons to appear in court?

## 2  Locked Boxes

Protocols for coin flipping and for dealing a poker hand from a deck of cards can be based on the intuitive notion of locked boxes. This idea in turn can be implemented using commutative cryptosystems.

### 2.1  Coin-flipping using locked boxes

We discussed the coin-flipping problem in section 1 and presented a protocol based on bit-commitment. Here we present a coin-flipping protocol based on the idea of locked boxes.
• Imagine two sturdy boxes with hinged lids that can be locked with a padlock. Alice writes "heads" on a slip of paper and "tails" on another and places one of these slips in each box. She puts a padlock on each box for which she holds the only key. She then gives both locked boxes to Bob, in some random order.
• Bob cannot open the boxes and does not know which box contains "heads" and which contains "tails". He chooses one of the boxes and locks it with his own padlock, for which he has the only key. Now the box has two locks on it, one belonging to Alice and one to Bob. He gives the doubly-locked box back to Alice.
• Alice removes her lock and returns the box to Bob.
• Bob removes his lock, opens the box, and learns the outcome of the coin toss. He gives the slip of paper from the unlocked box back to Alice.
• Alice verifies that it is her slip of paper, with her handwriting on it, that she prepared at the beginning. She sends her key to Bob.
• Bob removes Alice's lock from the other box and verifies that she carried out her protocol correctly. (In particular, he checks that the slip of paper in the other box contains the other coin value.)

### 2.2  Commutative cryptosystems

Alice and Bob can carry out this protocol electronically using any commutative cryptosystem, that is, one in which EA(EB(m)) = EB(EA(m)) for all messages m. RSA is commutative for keys with a common modulus n, so we can use RSA in an unconventional way. Rather than making the encryption exponent public and keeping the factorization of n private, we turn things around. Alice and Bob jointly choose primes p and q, and both compute n = pq. Alice then chooses an RSA key pair A = ((eA, n), (dA, n)), which she can do since she knows the factorization of n. Similarly, Bob chooses an RSA key pair B = ((eB, n), (dB, n)) using the same n. Alice and Bob both keep their key pairs private (until the end of the protocol, when they reveal them to each other to verify that there was no cheating).
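A toy run (small primes for illustration only; real moduli must be large) shows the commutativity EA(EB(m)) = EB(EA(m)) and that the two encryption layers can be stripped in either order:

```python
from math import gcd

# Toy shared modulus (illustration only)
p, q = 1009, 1013
n, phi = p * q, (p - 1) * (q - 1)

# Alice's and Bob's private exponent pairs on the common modulus
eA = 17; dA = pow(eA, -1, phi)
eB = 29; dB = pow(eB, -1, phi)
assert gcd(eA, phi) == 1 and gcd(eB, phi) == 1

m = 123456
# Commutativity: both orders give m^(eA*eB) mod n
assert pow(pow(m, eA, n), eB, n) == pow(pow(m, eB, n), eA, n)
# The layers can also be removed in either order
c = pow(pow(m, eA, n), eB, n)
assert pow(pow(c, dA, n), dB, n) == m
assert pow(pow(c, dB, n), dA, n) == m
```

Commutativity holds because both composite encryptions compute m^(eA·eB) mod n; which exponent is applied first does not matter.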
We note that this scheme may have completely different security properties from usual RSA. In RSA, there are three different secrets involved with the key: the factorization of n, the encryption exponent e, and the decryption exponent d. We have seen previously that knowing n and any two of these pieces of information allows the third to be reconstructed. Thus, knowing the factorization of n and e lets one compute d (easy). We also showed in section 1.3 of lecture notes week 6 how to factor n given both e and d.
The way RSA is usually used, only e is public, and it is believed to be hard to find the other quantities. Here we propose making the factorization of n public but keeping e and d private. It may indeed be hard to find e and d, even knowing the factorization of n, but if it is, that fact is not going to follow from the difficulty of factoring n. Of course, for security, we need more than just that it is hard to find e and d. We also need it to be hard to find m given c = me mod n. This is reminiscent of the discrete log problem, but of course n is not prime in this case.

### 2.3  Coin-flipping using commutative cryptosystems

Assuming RSA used in this new way is secure, we can implement the locked box protocol as shown in Figure 3. Here we assume that Alice and Bob initially know large primes p and q. In step (2), Alice chooses a random number r such that r < (n−1)/2. This ensures that m0 and m1 are both in Zn. Note that i and r can be efficiently recovered from mi since i is just the low-order bit of mi and r = (mi − i)/2.
1. Alice chooses an RSA key pair A with modulus n = pq; Bob chooses an RSA key pair B with the same modulus.
2. Alice chooses random r ∈ Z(n−1)/2, lets mi = 2r + i and ci = EA(mi) for i ∈ {0,1}, and sends C = {c0, c1} to Bob.
3. Bob chooses ca ∈ C, computes cab = EB(ca), and sends cab to Alice.
4. Alice computes cb = DA(cab) and sends cb to Bob.
5. Bob computes m = DB(cb), i = m mod 2, and r = (m − i)/2. If i = 0 the outcome is "tails"; if i = 1 the outcome is "heads". Bob sends his key B to Alice.
6. Alice computes m = DB(cb) and checks that m ∈ {m0, m1}. If m = m0 the outcome is "tails"; if m = m1 the outcome is "heads". Alice sends her key A to Bob.
7. Bob lets c′a be the element of C − {ca}, computes m′ = DA(c′a), i′ = m′ mod 2, and r′ = (m′ − i′)/2, and checks that i′ ≠ i and r′ = r.
Figure 3: Distributed coin flip protocol using locked boxes.
To see that the protocol works when both Alice and Bob are honest, observe that in step 3, cab = EB(EA(mj)) for some j. Then in step 4, cb = DA(EB(EA(mj))) = EB(mj) by the commutativity of EA and EB. Hence, in step 5, m = mj is one of Alice's strings from step 2.
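An honest run of Figure 3 can be checked end to end in code. This is a minimal sketch with toy parameters; the `keypair` helper and the prime sizes are assumptions for illustration.

```python
import secrets
from math import gcd

# Toy shared modulus; in practice p and q are large and chosen jointly
p, q = 1009, 1013
n, phi = p * q, (p - 1) * (q - 1)

def keypair():
    """A random private RSA exponent pair on the shared modulus n."""
    while True:
        e = secrets.randbelow(phi - 3) + 3
        if gcd(e, phi) == 1:
            return e, pow(e, -1, phi)

def locked_box_flip():
    eA, dA = keypair()                    # step 1: Alice's key
    eB, dB = keypair()                    # step 1: Bob's key
    r = secrets.randbelow((n - 1) // 2)   # step 2: Alice's random r
    m = [2 * r + i for i in (0, 1)]       # m0 = "tails", m1 = "heads"
    C = [pow(mi, eA, n) for mi in m]      # the two locked boxes
    a = secrets.randbelow(2)              # Bob picks a box at random
    c_ab = pow(C[a], eB, n)               # step 3: Bob adds his lock
    c_b = pow(c_ab, dA, n)                # step 4: Alice removes hers
    m_bob = pow(c_b, dB, n)               # step 5: Bob removes his
    i = m_bob % 2
    assert m_bob in m                     # step 6: Alice's check (keys now revealed)
    m_other = pow(C[1 - a], dA, n)        # step 7: Bob opens the other box
    assert m_other % 2 == 1 - i and (m_other - (1 - i)) // 2 == r
    return "heads" if i == 1 else "tails"

print(locked_box_flip())
```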
A dishonest Bob can control the outcome of the coin toss if he can find two keys B and B′ such that EB(ca) = EB′(c′a), where C = {ca, c′a} is the set received from Alice in step 2. In this case, cab = EB(EA(mj)) = EB′(EA(m1−j)) for some j. Then in step 4, cb = EB(mj) = EB′(m1−j). Hence, mj = DB(cb) and m1−j = DB′(cb), so Bob can obtain both of Alice's messages and then send B or B′ in step 5 to force the outcome to be as he pleases.

### 2.4  Card dealing using locked boxes

The same locked box paradigm can be used for dealing a 5-card poker hand from a deck of cards. Alice takes a deck of cards, places each card in a separate box, and locks each box with her lock. She arranges the boxes in random order and ships them off to Bob. Bob picks five boxes, locks each with his lock, and sends them back. Alice removes her locks from those five boxes and returns them to Bob. Bob unlocks them and obtains the five cards of his poker hand. Further details are left to the reader.

## 3  Oblivious Transfer

In the locked box coin-flipping protocol, Alice has two messages m0 and m1. Bob gets one of them. Alice doesn't know which (until Bob tells her). Bob can't cheat to get both messages. Alice can't cheat to learn which message Bob got. The oblivious transfer problem abstracts these properties from particular applications such as coin flipping and card dealing.

### 3.1  Oblivious transfer of a secret

Alice has a secret s. In an oblivious transfer protocol, half of the time Bob learns s and half of the time he learns nothing. Afterwards, Alice doesn't know whether or not Bob learned s. Bob can do nothing to increase his chances of getting s, and Alice can do nothing to learn whether or not Bob got her secret. Rabin proposed an oblivious transfer protocol based on quadratic residuosity, shown in Figure 4, in the early 1980s.
1. Alice has secret s. She chooses primes p and q, lets n = pq, generates an RSA public key (e, n), computes c = E(e,n)(s), and sends (e, n, c) to Bob.
2. Bob chooses a random x ∈ Z∗n, computes a = x² mod n, and sends a to Alice.
3. Alice checks that a ∈ QRn, chooses y at random from among the 4 square roots of a mod n, and sends y to Bob.
4. Bob checks that y² ≡ a (mod n). If y ≢ ±x (mod n), he uses x and y to factor n and decrypts c to obtain s.
Figure 4: Rabin's oblivious transfer protocol.
Alice can carry out step 3 since she knows the factorization of n and can find all of the square roots of a. However, she has no idea which x Bob used to generate a. Hence, with probability 1/2, y ≡ ±x (mod n), and with probability 1/2, y ≢ ±x (mod n). If y ≢ ±x (mod n), then the two factors of n are gcd(x + y, n) and n/gcd(x + y, n), so Bob factors n and decrypts c in step 4. However, if y ≡ ±x (mod n), he learns nothing, and Alice's secret is as secure as RSA itself.
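The factoring step can be sketched directly: given two essentially different square roots of the same value mod n, a single gcd computation splits n.

```python
from math import gcd

def factor_from_roots(x, y, n):
    """Given x^2 ≡ y^2 (mod n) with y ≢ ±x (mod n), split n = pq.

    Since n divides (x - y)(x + y) but neither factor alone, gcd(x + y, n)
    must be one of the two prime factors of n.
    """
    assert (x * x - y * y) % n == 0
    assert y % n not in (x % n, -x % n)
    f = gcd(x + y, n)
    assert 1 < f < n                 # guaranteed by the hypotheses when n = pq
    return f, n // f
```

For example, with n = 77 = 7 · 11, both 4 and 18 square to 16 mod 77, and `factor_from_roots(4, 18, 77)` returns (11, 7).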
There is one potential problem with this protocol. A cheating Bob in step 2 might send a number a which he generated by some other means than squaring a random x. In this case, he learns something new no matter which square root Alice sends him in step 3. Perhaps that information, together with what he already learned in the course of generating a, is enough for him to factor n. We don't know of any method by which Bob could find a quadratic residue without also knowing one of its square roots. We certainly don't know of any method that would produce a quadratic residue a and some other information Ξ that, combined with y, would allow Bob to factor n. But we also cannot prove that no such method exists.
We can fix this protocol by inserting between steps 2 and 3 a zero knowledge proof that Bob knows a square root of a. This is essentially what the simplified Feige-Fiat-Shamir protocol does, but we have to reverse the roles of Alice and Bob. Now Bob is the one with the secret square root x. He wants to prove to Alice that he knows x, but he does not want Alice to get any information about x, since if she learns x, she could choose y = x and reduce his chances of learning s while still appearing to play honestly. Again, details are left to the reader.

### 3.2  One-of-two oblivious transfer

In the one-of-two oblivious transfer, Alice has two secrets, s0 and s1. Bob always gets exactly one of the secrets. He gets each with probability 1/2, and Alice does not know which he got.
The locked box protocol is one way to implement one-of-two oblivious transfer. Another is based on a public key cryptosystem (such as RSA) and a symmetric cryptosystem (such as AES). This protocol, given in Figure 5, does not rely on the cryptosystems being commutative.
1. Alice chooses two PKS key pairs (e0, d0) and (e1, d1) and sends e0, e1 to Bob.
2. Bob generates a random key k for a symmetric cryptosystem (Ê, D̂), chooses e ∈ {e0, e1}, computes c = Ee(k), and sends c to Alice.
3. Alice computes ki = Ddi(c) and ci = Êki(si) for i = 0, 1, and sends c0, c1 to Bob.
4. Bob computes si = D̂k(ci) for i = 0, 1. One of the si is correct; the other is garbage.
Figure 5: One-of-two oblivious transfer using two PKS key pairs.
In step 2, Bob encrypts a randomly chosen key k for the symmetric cryptosystem using one of the encryption keys that Alice sent him in step 1. One of the ki Alice computes in step 3 is Bob's key k, but because k is random, she can't tell which it is. However, the key that is different from k cannot be computed by Bob since Bob doesn't have the corresponding decryption key di.
Note that this protocol depends on Bob being able to distinguish good secrets from random-looking garbage. To make it work for arbitrary secrets, Alice can add some known redundancy to the secrets that Bob can recognize. Another possibility would be for her to encrypt c·si in step 3 instead of just si. Since Bob knows c, he could then check the two possible decryptions to see which one began with c.
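A sketch of the Figure 5 protocol, using toy RSA key pairs and a SHA-256 XOR pad as the symmetric cryptosystem. Both are illustrative assumptions; a real implementation would use proper key sizes and a real cipher such as AES.

```python
import hashlib
import secrets

def rsa_pair(p, q, e):
    """Toy RSA key pair (illustration only; real primes must be large)."""
    n, phi = p * q, (p - 1) * (q - 1)
    return (e, n), (pow(e, -1, phi), n)

def pad(k, data):
    """Stand-in symmetric cipher: XOR with a SHA-256 pad derived from key k."""
    stream = hashlib.sha256(str(k).encode()).digest()
    return bytes(a ^ b for a, b in zip(data, stream))

# Step 1: Alice's two PKS key pairs; she sends the public halves to Bob
(e0, n0), (d0, _) = rsa_pair(1009, 1013, 17)
(e1, n1), (d1, _) = rsa_pair(1019, 1021, 19)

# Step 2: Bob picks a random symmetric key and encrypts it under one of them
choice = secrets.randbelow(2)
e, npub = (e0, n0) if choice == 0 else (e1, n1)
k = secrets.randbelow(npub)
c = pow(k, e, npub)

# Step 3: Alice decrypts c under both keys; one k_i is Bob's k, but she
# cannot tell which, since a wrong decryption is just another random number
k0, k1 = pow(c, d0, n0), pow(c, d1, n1)
s = [b"secret number 0!", b"secret number 1!"]
c0, c1 = pad(k0, s[0]), pad(k1, s[1])

# Step 4: Bob decrypts both with his k; only one comes out meaningful
s0, s1 = pad(k, c0), pad(k, c1)
assert (s0 if choice == 0 else s1) == s[choice]
```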

## 4  Pseudorandom Sequence Generation

We mentioned pseudorandom sequence generation in section 3.3 of lecture notes week 11. In a little more detail, a pseudorandom sequence generator G is a function from a domain of seeds S to a domain of strings X. We will generally find it convenient to assume that all of the seeds in S have the same length m and that all of the strings in X have the same length n, where m << n and n is polynomially related to m.
Intuitively, we want the strings G(s) to "look random". But what does that mean? Chaitin and Kolmogorov proposed ways of defining what it means for an individual sequence to be considered random. While philosophically very interesting, these notions are somewhat different than the statistical notions that most people mean by randomness.
We take a different tack. We assume that the seeds are chosen uniformly at random from S; that is, we consider a uniformly distributed random variable S over S. Then X = G(S) is a random variable over X. For x ∈ X,

 prob[X = x] = |{ s ∈ S | G(s) = x }| / |S|.
That is, the probability is the fraction of seeds that give rise to x. Because m << n, |S| = 2^m, and |X| = 2^n, most strings in X are not in the range of G and hence have probability 0. If G happens to be one-to-one, then the remaining strings each have probability 1/2^m.
We also consider the uniform random variable U over X, where prob[U = x] = 1/2^n for every x ∈ X. U is what we usually mean by a "truly random" variable on n-bit strings.
We will say that G is a cryptographically strong pseudorandom sequence generator if X and U are indistinguishable to all probabilistic polynomial-time Turing machines. We have already seen that the probability distributions of X and U are quite different. Nevertheless, they are indistinguishable if there is no feasible algorithm to determine whether random samples come from X or from U.
Before going further, let me describe some functions G for which G(S) is readily distinguished from U. Suppose every string x = G(s) has the form b1 b1 b2 b2 b3 b3 …, for example 0011111100001100110000.... An algorithm that guesses that x came from G(S) if x is of that form, and guesses that x came from U otherwise, will be right almost all of the time. True, it is possible to get a string like this from U, but it is extremely unlikely.
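A judge for this particular bad generator is simple to write. The sketch below (names are illustrative) always outputs 1 on outputs of G and outputs 1 on a truly random string only with probability 2^−(n/2).

```python
import secrets

def G(seed_bits):
    """A bad generator: output each seed bit twice (b1 b1 b2 b2 ...)."""
    return [b for b in seed_bits for _ in (0, 1)]

def judge(x):
    """Output 1 ("came from G") iff every bit of x appears doubled."""
    return int(all(x[i] == x[i + 1] for i in range(0, len(x) - 1, 2)))

n = 40
sample_G = G([secrets.randbelow(2) for _ in range(n // 2)])
sample_U = [secrets.randbelow(2) for _ in range(n)]

assert judge(sample_G) == 1   # prob[J(G(S)) = 1] = 1
# prob[J(U) = 1] = 2^-(n/2), so this judge's advantage is nearly 1
```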
Formally speaking, a judge is a probabilistic Turing machine J that takes an n-bit string as input and produces a single bit b as output. Because it is probabilistic, it actually defines a random function from X to {0,1}. This means that for every input x, there is some probability px that the output is 1. If the input string is itself a random variable X, then the probability that the output is 1 is the weighted sum over all possible inputs that the judge outputs 1, where the weights are the probabilities of the corresponding inputs occurring. Thus, the output value is itself a random variable which we denote by J(X).
Now, we say that two random variables X and Y are ε-indistinguishable by judge J if
 |prob[J(X) = 1] − prob[J(Y) = 1]| < ε.
Intuitively, we say that G is cryptographically strong if G(S) and U are ε-indistinguishable for suitably small ε by all judges that do not run for too long. A careful mathematical treatment of the concept of indistinguishability must relate the length parameters m and n, the error parameter ε, and the allowed running time of the judges, all of which is beyond the scope of this course.
[Note: The topics covered by these lecture notes are presented in more detail and with greater mathematical rigor in handout 18.]

## 5  BBS Pseudorandom Sequence Generator

We present a cryptographically strong PRSG due to Blum, Blum, and Shub. The generator is based on the difficulty of determining, for a given a ∈ Z∗n with Jacobi symbol jacobi(a, n) = 1, whether or not a is a quadratic residue, i.e., whether or not a ∈ QRn. Recall from lecture notes week 7 that this is the property upon which the security of the Goldwasser-Micali probabilistic encryption system relies. The BBS generator further requires n to be a certain kind of composite number called a Blum integer. Blum primes and Blum integers were introduced in lecture notes week 11. We review their properties here.

### 5.1  Blum integers

A Blum prime is a prime number p such that p ≡ 3 mod 4. A Blum integer is a number n=pq, where p and q are Blum primes. Blum primes and Blum integers have the important property that every quadratic residue a has a square root y which is itself a quadratic residue. We call such a y a principal square root of a and denote it by √a mod n or simply by √a when it is clear that mod n is intended.
We need two other facts about Blum integers n.

Claim 1 Let a ∈ QRn. Then jacobi(a, n) = jacobi(−a, n) = 1.

Claim 2 Let lsb(x) be the least significant bit of integer x, that is, lsb(x) = x mod 2, and let x ∈ Z∗n. Then lsb(x) ⊕ lsb(−x mod n) = 1.

The first equality in Claim 1 (that jacobi(a, n) = 1 for a ∈ QRn) actually holds for all n which are the product of two distinct odd primes; the second also uses the fact that jacobi(−1, n) = 1 for Blum integers. Claim 2 holds whenever n is odd.
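Both claims can be checked numerically for a small Blum integer. The Jacobi-symbol routine below is the standard binary algorithm.

```python
from math import gcd

def jacobi(a, n):
    """Jacobi symbol (a/n) for odd n > 0, by the standard binary algorithm."""
    a %= n
    t = 1
    while a != 0:
        while a % 2 == 0:
            a //= 2
            if n % 8 in (3, 5):
                t = -t
        a, n = n, a
        if a % 4 == 3 and n % 4 == 3:
            t = -t
        a %= n
    return t if n == 1 else 0

n = 19 * 23                      # a small Blum integer (19 ≡ 23 ≡ 3 mod 4)
for x in range(1, n):
    if gcd(x, n) == 1:
        a = x * x % n            # a is a quadratic residue mod n
        assert jacobi(a, n) == jacobi(-a, n) == 1      # Claim 1
        assert (x % 2) ^ ((n - x) % 2) == 1            # Claim 2
```

Claim 2 holds simply because x + (n − x) = n is odd, so x and −x mod n have opposite parities.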

### 5.2  BBS algorithm

The Blum-Blum-Shub generator BBS is defined by a Blum integer n = pq and an integer l. It maps seeds in Z∗n to strings in {0,1}^l. Given a seed s0 ∈ Z∗n, we define a sequence s1, s2, s3, …, sl, where si = (si−1)² mod n for i = 1, …, l. The l-bit output sequence is b1, b2, b3, …, bl, where bi = lsb(si).
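The definition transcribes directly into code (toy modulus for illustration; a real n must be a large Blum integer):

```python
def bbs(s0, l, n):
    """BBS generator: s_i = s_{i-1}^2 mod n; output bit b_i = lsb(s_i)."""
    s, bits = s0, []
    for _ in range(l):
        s = s * s % n        # square the state
        bits.append(s & 1)   # emit its least significant bit
    return bits

n = 7 * 11           # toy Blum integer (7 ≡ 11 ≡ 3 mod 4)
print(bbs(2, 8, n))  # → [0, 0, 1, 1, 0, 0, 1, 1]
```

With so small a modulus the squaring sequence cycles almost immediately, which is exactly why real parameters must be hundreds of digits long.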

### 5.3  Bit-prediction

One important property of the uniform distribution U on bit-strings b1, …, bl is that the individual bits are statistically independent from each other. This means that the probability that a particular bit bi = 1 is unaffected by the values of the other bits in the sequence. This implies that any algorithm that attempts to predict bi, even knowing other bits of the sequence, will be correct only 1/2 of the time. We now translate this property of unpredictability to pseudorandom sequences.

#### 5.3.1  Next-bit prediction

One property we would like a pseudorandom sequence to have is that it be difficult to predict the next bit given the bits that came before.
We say that an algorithm A is an ε-next-bit predictor for bit i of a PRSG G if
 prob[ A(b1, …, bi−1) = bi ] ≥ 1/2 + ε
where (b1, …, bi) = Gi(S). To explain this notation, S is a uniformly distributed random variable ranging over the possible seeds for G. G(S) is a random variable (i.e., probability distribution) over the output strings of G, and Gi(S) is the corresponding probability distribution on the length-i prefixes of G(S).
Next-bit prediction is closely related to indistinguishability, introduced in section 4. We will show later that if G(S) has a next-bit predictor for some bit i, then G(S) is distinguishable from the uniform distribution U on strings of the same length, and conversely, if G(S) is distinguishable from U, then there is a next-bit predictor for some bit i of G(S). The precise definitions under which this theorem is true are subtle, for one must quantify both the amount of time the judge and next-bit predictor algorithms are permitted to run and how much better than chance the judgments or predictions must be in order to count as a successful judge or next-bit predictor. We defer the mathematics for now and focus instead on the intuitive concepts that underlie this theorem.

#### 5.3.2  Building a judge from a next-bit predictor

Suppose a PRSG G has an ε-next-bit predictor A for some bit i. Here's how to build a judge J that distinguishes G(S) from U. The judge J, given a sample string drawn from either G(S) or from U, runs algorithm A to guess bit bi from bits b1, …, bi−1. If the guess agrees with the real bi, then J outputs 1 (meaning that he guesses the sequence came from G(S)). Otherwise, J outputs 0. For sequences drawn from G(S), J will output 1 with the same probability that A successfully predicts bit bi, which is at least 1/2 + ε. For sequences drawn from U, the judge will output 1 with probability exactly 1/2. Hence, the judge distinguishes G(S) from U with advantage ε.
It follows that no cryptographically strong PRSG can have an ε-next-bit predictor. In other words, no algorithm that attempts to predict the next bit can have more than a "small" advantage ε over chance.

### 5.4  Previous-bit prediction

Previous-bit prediction, while perhaps less natural, is analogous to next-bit prediction. An ε-previous-bit predictor for bit i is an algorithm that, given bits bi+1, …, bl, correctly predicts bi with probability at least 1/2 + ε.
As with next-bit predictors, it is the case that if G(S) has a previous-bit predictor for some bit bj, then some judge distinguishes G(S) from U. Again, I am being vague with the exact conditions under which this is true. The somewhat surprising fact follows that G(S) has an ε-next-bit predictor for some bit i if and only if it has an ε′-previous-bit predictor for some bit j (where ε and ε′ are related but not necessarily equal).
To give some intuition into why such a fact might be true, we look at the special case of l = 2, that is, of 2-bit sequences. The probability distribution G(S) can be described by four probabilities
 pu,v = prob[ b1 = u ∧b2 = v ], where u, v ∈ {0, 1}.
Written in tabular form, we have

|        | b2 = 0 | b2 = 1 |
| ------ | ------ | ------ |
| b1 = 0 | p0,0   | p0,1   |
| b1 = 1 | p1,0   | p1,1   |
We describe an algorithm A(v) for predicting b1 given b2 = v. A(v) predicts b1 = 0 if p0,v > p1,v, and it predicts b1 = 1 if p0,v ≤ p1,v. In other words, the algorithm chooses the value for b1 that is most likely given that b2 = v. Let a(v) be the value predicted by A(v).
Claim If A is an ε-previous-bit predictor for b1, then A is an ε-next-bit predictor for either b1 or b2.
Assume A is an ε-previous-bit predictor for b1. Then A correctly predicts b1 given b2 with probability at least 1/2 + ε. We show that A is an ε-next-bit predictor for either b1 or b2.
We have two cases:
Case 1: a(0) = a(1). Then algorithm A does not depend on v, so A itself is also an ε-next-bit predictor for b1.
Case 2: a(0) ≠ a(1). The probability that A(v) correctly predicts b1 when b2 = v is given by the conditional probability
 prob[b1 = a(v) | b2 = v] = prob[b1 = a(v) ∧ b2 = v] / prob[b2 = v] = pa(v),v / prob[b2 = v].
The overall probability that A(b2) is correct for b1 is the weighted average of the conditional probabilities for v=0 and v=1, weighted by the probability that b2 = v. Thus,
 prob[A(b2) is correct for b1] = ∑v ∈ {0,1} prob[b1 = a(v) | b2 = v] · prob[b2 = v] = ∑v ∈ {0,1} pa(v),v = pa(0),0 + pa(1),1.
Now, since a(0) ≠ a(1), the function a is one-to-one and onto. A simple case analysis shows that either a(v) = v for v ∈ {0,1}, or a(v) = ¬v for v ∈ {0,1}. That is, a is either the identity or the complement function. In either case, a is its own inverse. Hence, we may also use algorithm A(u) as a predictor for b2 given b1 = u. By a similar analysis to that used above, we get
 prob[A(b1) is correct for b2] = ∑u ∈ {0,1} prob[b2 = a(u) | b1 = u] · prob[b1 = u] = ∑u ∈ {0,1} pu,a(u) = p0,a(0) + p1,a(1).
But
 pa(0),0 + pa(1),1 = p0,a(0) + p1,a(1)
since either a is the identity function or the complement function. Hence, A(b1) is correct for b2 with the same probability that A(b2) is correct for b1. Therefore, A is an ε-next-bit predictor for b2.
In both cases, we conclude that A is an ε-next-bit predictor for either b1 or b2.

### 5.5  Security of BBS generator

We now show that if BBS has a previous-bit predictor, then there is an algorithm for testing quadratic residues whose running time and accuracy are related to the running time and accuracy of the BBS predictor. Thus, if quadratic-residue-testing is "hard", then so is previous-bit prediction for BBS. See handout 18 for further results on the security of BBS.
Let A be an ε-previous-bit predictor for BBS(S). Then we can find an algorithm Q for testing whether a number x with Jacobi symbol 1 is a quadratic residue, and Q will be correct with probability at least 1/2 + ε.
Assume that A predicts bj given the k bits bj+1, …,bj+k. Then A also predicts b1 given b2, …, bk with the same accuracy. This follows from the fact that the mapping xx2 mod n is a permutation on QRn. Hence, s0 and sj−1 are identically distributed, so the k-bit prefixes of BBS(s0) and BBS(sj−1) are likewise identically distributed.
Algorithm Q(x), shown in Figure 6, tests whether or not a number x with Jacobi symbol 1 is a quadratic residue modulo n. It outputs 1 to mean x ∈ QRn and 0 to mean x ∉ QRn.
To compute Q(x):
1. Let s2 = x² mod n.
2. Let si = (si−1)² mod n, for i = 3, …, k.
3. Let b1 = lsb(x).
4. Let bi = lsb(si), for i = 2, …, k.
5. Let c = A(b2, …, bk).
6. If c = b1, output 1; otherwise output 0.
Figure 6: Algorithm Q for testing xQRn.
Since jacobi(x, n) = 1, then either x or −x is a quadratic residue. Let s0 be the principal square root of x or −x and consider the first k bits of BBS(s0). We have two cases:
If x ∈ QRn, the sequence of seeds is s0, x, s2, …, sk and the corresponding sequence of output bits is
 b1, b2, …, bk.
If −x ∈ QRn, the sequence of seeds is s0, −x, s2, …, sk and the corresponding sequence of output bits is
 ¬b1, b2, …, bk.
Hence, if the predicted first bit c is correct, then
 c=b1     iff    x ∈ QRn.
Since the predicted bit is correct with probability at least 1/2 + ε, algorithm Q is correct with probability at least 1/2 + ε.
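Algorithm Q itself is a few lines once the predictor A is supplied as a parameter. Any concrete A here is hypothetical; the lambda in the test is just a placeholder predictor, not one with genuine advantage.

```python
def Q(x, n, k, A):
    """Output 1 to mean x is in QR_n, given jacobi(x, n) = 1 and an assumed
    previous-bit predictor A for the BBS generator mod n."""
    s = x % n
    bits = [s & 1]               # b1 = lsb(x)
    for _ in range(2, k + 1):
        s = s * s % n            # s_i = s_{i-1}^2 mod n, starting from s2 = x^2
        bits.append(s & 1)       # b_i = lsb(s_i)
    c = A(bits[1:])              # predict b1 from b2, ..., bk
    return 1 if c == bits[0] else 0
```

A predictor with advantage ε over chance turns Q into a quadratic-residuosity test with the same advantage, which is the reduction described above.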
