CRC (cyclic redundancy check) is an error-detecting code based on binary polynomial “division”: a sequence of redundant bits is appended to the end of a data unit so that the resulting data unit becomes exactly divisible (remainder = 0) by a second, predetermined binary number.
The CRC code requires the definition of a so-called “generator polynomial” as the “divisor”.
The “dividend” is the message shifted left by the divisor’s bit length minus one (i.e., the message is right-padded with as many zero bits as the degree of the generator polynomial), and the “remainder” of the division becomes the CRC bits.
The polynomial coefficients are calculated with finite-field arithmetic over GF(2), so the procedure looks just like binary long division. However, where binary long division uses subtraction with borrow, CRC long division uses the XOR operation: in GF(2), addition and subtraction are both XOR, so they can always be performed bitwise in parallel.
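The long-division procedure described above can be sketched in a few lines of Python. This is a minimal illustration, not the calculator’s own code; it assumes the message and generator are given as bit strings:

```python
def crc_remainder(message: str, generator: str) -> str:
    """CRC long division over GF(2): subtraction is replaced by XOR."""
    r = len(generator) - 1              # degree of the generator polynomial
    gen = int(generator, 2)
    rem = int(message, 2) << r          # dividend: message right-padded with r zeros
    # Whenever the leading bit of the working remainder is 1,
    # XOR the generator in at that position (one step of the long division).
    for shift in range(len(message) - 1, -1, -1):
        if rem & (1 << (shift + r)):
            rem ^= gen << shift
    return format(rem, f"0{r}b")        # the r remainder bits are the CRC
```

Each iteration of the loop corresponds to one row of the step-by-step table the calculator prints.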
Back in the day, questions like these were pretty time-consuming to work by hand, so here is a calculator that generates the step-by-step solution for all of these problems.
This tool displays the answer both as a <table>, which is Excel-friendly, and as <pre> plain text for better compatibility in text-only environments.
In a CRC error-detecting scheme, choose the generator polynomial as x⁴ + x + 1 (10011). Encode the 11-bit sequence 10010011011.
Given a CRC generator x⁴ + x + 1 (10011), calculate the CRC code for the message 10010011011. Show the steps clearly and derive the solution.
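The answer to this question can be checked with a short Python sketch of the same division (the helper name `crc_remainder` is ours, not part of the tool):

```python
def crc_remainder(message: str, generator: str) -> str:
    """Remainder of (message * x^r) / generator over GF(2), r = deg(generator)."""
    r = len(generator) - 1
    gen = int(generator, 2)
    rem = int(message, 2) << r          # right-pad the message with r zeros
    for shift in range(len(message) - 1, -1, -1):
        if rem & (1 << (shift + r)):
            rem ^= gen << shift
    return format(rem, f"0{r}b")

crc = crc_remainder("10010011011", "10011")
codeword = "10010011011" + crc          # transmitted frame: message + CRC bits
# crc == "1100", so the transmitted codeword is 100100110111100
```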
Suppose the channel introduces the error pattern 100010000000000 (i.e., the bits in positions 1 and 5, counting from the left, are flipped from 1 to 0 or from 0 to 1) into the codeword from Question 1. What is received? Can the error be detected?
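The detection check can be sketched as follows: the received word is the codeword XORed with the error pattern, and the receiver detects an error whenever dividing the received word by the generator leaves a nonzero remainder. A self-contained Python sketch (function and variable names are ours):

```python
def gf2_mod(word: str, generator: str) -> str:
    """Remainder of word / generator over GF(2); each division step is an XOR."""
    r = len(generator) - 1
    gen = int(generator, 2)
    rem = int(word, 2)
    for shift in range(len(word) - 1 - r, -1, -1):
        if rem & (1 << (shift + r)):
            rem ^= gen << shift
    return format(rem, f"0{r}b")

gen = "10011"
msg = "10010011011"
crc = gf2_mod(msg + "0" * 4, gen)       # encode: pad with deg(gen) = 4 zeros
codeword = msg + crc
error = "100010000000000"               # bit flips in positions 1 and 5
received = format(int(codeword, 2) ^ int(error, 2), "015b")
# The error is detected iff the received word leaves a nonzero remainder.
detected = gf2_mod(received, gen) != "0" * 4
```

Here the remainder comes out nonzero, so this particular error pattern is detected.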