"Decimal CRC" 10/2/2017
Goal: send a sequence of decimal digits with a check that detects any burst error
spanning up to b consecutive decimal digits.
Sender:
(1) Choose an integer p between 10^b and 10^(b+1) that is not divisible
by 2 or 5, and communicate it to Receiver.
(2) Given message M, let r be the remainder of 10^(b+1) * M divided by p.
(3) Send encoded message: S = 10^(b+1) * M - r.
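Sender's two arithmetic steps can be sketched directly in Python (the function
name `encode` is illustrative, not from the note):

```python
# Sketch of the Sender side, assuming b and p are agreed on in advance.

def encode(M, p, b):
    """Append b+1 check digits so the result is a multiple of p."""
    shifted = M * 10 ** (b + 1)   # shift M left by b+1 decimal digits
    r = shifted % p               # step (2): remainder of 10^(b+1) * M mod p
    return shifted - r            # step (3): S = 10^(b+1) * M - r
```

With the example values used later, encode(348722, 1009, 3) returns 3487219026.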
Receiver:
(1) On receipt of encoding R, test whether p divides R.
(2) If not, there has been an error. Ask Sender to re-transmit.
(3) If so, round R up to the nearest multiple of 10^(b+1) (R itself, if it
is already one) and extract M by dividing by 10^(b+1).
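The Receiver's side is equally short; `decode` is again an illustrative name,
returning None to signal "ask Sender to re-transmit":

```python
# Sketch of the Receiver side, assuming b and p are agreed on in advance.

def decode(R, p, b):
    """Return M if R passes the divisibility test, else None (re-transmit)."""
    if R % p != 0:
        return None               # step (2): error detected
    base = 10 ** (b + 1)
    return -(-R // base)          # step (3): ceiling division rounds R up to
                                  # 10^(b+1) * M and strips the check digits
```

Ceiling division `-(-R // base)` rounds up to the nearest multiple of 10^(b+1)
and divides by it in one step.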
Concretely, suppose b = 3, p = 1009 and M = 348722.
Sender computes r = 974, the remainder of 3487220000 divided by 1009, and sends
3487219026, the difference of 3487220000 and 974.
If there is no error, Receiver receives 3487219026, tests to see if it is divisible
by 1009, and discovers it is. Receiver rounds 3487219026 up to the nearest multiple
of 10000, which is 3487220000, and extracts the original message by dividing by
10000, to get 348722.
Suppose now 3 consecutive digits of the encoding are changed, so that Receiver
receives 3487645026. On dividing this by 1009, the remainder is 202, so the Receiver
requests a re-transmit.
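Both halves of the example can be replayed with Python's integer arithmetic:

```python
# Replaying the worked example (b = 3, p = 1009, M = 348722).
b, p, M = 3, 1009, 348722
shifted = M * 10 ** (b + 1)
assert shifted == 3487220000
r = shifted % p
assert r == 974                   # the remainder Sender computes
S = shifted - r
assert S == 3487219026            # the encoded message, a multiple of 1009
assert S % p == 0
R = 3487645026                    # three consecutive digits corrupted
assert R % p == 202               # nonzero remainder: Receiver asks for a re-send
```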
Why should this work? Suppose Sender sends S and Receiver receives R. We know
10^(b+1) * M = q * p + r for some integer q (because r is the remainder on dividing
10^(b+1) * M by p). Thus, S = 10^(b+1) * M - r = q * p, a multiple of p.
Hence, if there is no error, R = S is a multiple of p. Since r < p and p is less
than 10^(b+1), S and 10^(b+1) * M differ by less than 10^(b+1), so rounding up to
the next multiple of 10^(b+1) will give the correct value of 10^(b+1) * M.
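The rounding argument can be sanity-checked with the example's numbers:

```python
# S and 10^(b+1) * M differ by r, which is less than p < 10^(b+1),
# so rounding S up to the nearest multiple of 10^(b+1) recovers M exactly.
b, p, M = 3, 1009, 348722
base = 10 ** (b + 1)
S = M * base - (M * base % p)
assert 0 <= M * base - S < base   # the gap is r, strictly less than 10^(b+1)
assert -(-S // base) == M         # ceiling division by 10^(b+1) gives back M
```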
If there is a burst error of at most b digits then R - S = E * 10^k, where E is an
integer of absolute value less than 10^b and k is a nonnegative integer. Suppose
R is divisible by p. Because S is divisible by p, this means R - S is divisible
by p, that is, E * 10^k is divisible by p. But p is not divisible by 2 or 5, so
E must be divisible by p. However, the absolute value of E is less than 10^b,
and p is greater than 10^b. Thus, we must have E = 0, that is, no error, and R = S.
So if the burst error makes R different from S, then R will not be divisible by p,
and the error will be detected.
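The argument can also be confirmed by brute force for the example encoding: every
corruption of the form E * 10^k with 0 < |E| < 10^b moves S off a multiple of p.

```python
# Brute-force check: no burst of at most b digits goes undetected.
b, p = 3, 1009
S = 3487219026                    # the multiple of p from the example
for k in range(8):                # burst starting at any digit position
    for E in range(-(10 ** b) + 1, 10 ** b):
        if E == 0:
            continue              # E = 0 means no error at all
        R = S + E * 10 ** k
        assert R % p != 0         # p never divides the corrupted value
```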
Why isn't this scheme used in practice? Integer division is an expensive operation. Actual
CRC implementations depend on division of polynomials with coefficients in GF(2),
where the division can be implemented using shift and XOR.
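For contrast, here is a minimal sketch of such a polynomial CRC over GF(2), where
"division" reduces to shifts and XORs. The width and generator (CRC-8, generator
polynomial x^8 + x^2 + x + 1, encoded as 0x07) are chosen for illustration; real
protocols fix these by specification.

```python
def crc8(data: bytes, poly: int = 0x07) -> int:
    """Bitwise CRC-8 (generator x^8 + x^2 + x + 1), MSB first, initial value 0."""
    crc = 0
    for byte in data:
        crc ^= byte                              # bring in the next 8 message bits
        for _ in range(8):
            if crc & 0x80:                       # top bit set: "subtract" (XOR)
                crc = ((crc << 1) ^ poly) & 0xFF # the generator polynomial
            else:
                crc = (crc << 1) & 0xFF
    return crc
```

As in the decimal scheme, appending the check value makes the whole message divide
evenly by the generator: crc8(data + bytes([crc8(data)])) == 0.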