Concatenated Codes
1, Introduction
Concatenated codes form a class of error-correcting codes that are derived by combining an inner code and an outer code. They were conceived in 1966 by Dave Forney as a solution to the problem of finding a code that has both exponentially decreasing error probability with increasing block length and polynomial-time decoding complexity. Concatenated codes became widely used in space communications in the 1970s. (From Wikipedia, the free encyclopedia)
2, Purpose
To begin with, suppose we use a length-n block of the source message for each transmission; the process is shown in the following figure:
When we look into a coding scheme, we should pay attention to the following aspects:
- Computational Complexity
- Whether the rate converges to the channel capacity
- The error probability
Unfortunately, the length-n block encoding scheme is not so great, since:
- Its decoding complexity grows exponentially with the block length n
- Its rate approaches the channel capacity only when the block length n is large enough
Concatenated codes make it possible to overcome the disadvantages of the length-n block encoding scheme by introducing two coding stages.
3, First Stage
The first stage of the concatenated code is to encode sub-sequences of the source's message into chunks of length log(n) each, as shown in the following figure:
The computational complexity of the first-stage coding is much lower than that of the original length-n scheme: decoding a single chunk of length log(n) takes about 2^(log n) = n operations, and there are n/log(n) chunks, so the total work is polynomial in n. However, what about the error probability of the first-stage coding?
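As a concrete sketch of the chunking step, the snippet below splits an n-bit block into chunks of length log2(n). The helper name split_into_chunks is ours, introduced only for illustration:

```python
import math

def split_into_chunks(bits, n):
    """Split an n-bit source block into chunks of length log2(n) each.
    (Illustrative helper; the chunk length log(n) follows the text.)"""
    k = max(1, int(math.log2(n)))           # chunk length log2(n)
    return [bits[i:i + k] for i in range(0, len(bits), k)]

n = 16
bits = "0110100111010010"                   # a length-16 source block
chunks = split_into_chunks(bits, n)
print(len(chunks), chunks)                  # 4 chunks, each of length log2(16) = 4
```

Each chunk is then encoded independently, which is why the per-chunk decoding cost stays polynomial in n.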
4, Error Probability
We already know that when we assign the length-n block codewords by random choice, the error probability is roughly 2^(-n·E(R)), where E(R) > 0 is the error exponent at rate R. With the first-stage coding, the error probability for a single chunk of length log(n) is therefore roughly 2^(-E(R)·log(n)) = n^(-E(R)). This decays only polynomially in n, so in practical systems it can still be large, say on the order of 0.001.
Denote the error probability of a single chunk by p. Then the error probability of the original length-n block is 1 - (1 - p)^(n/log(n)) ≈ (n/log(n))·p, since there are n/log(n) chunks and we count the block as in error if any chunk is decoded incorrectly.
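The growth of the block error probability can be checked numerically. A small sketch, where the values n = 1024 and p = 0.001 are illustrative assumptions, not from the text:

```python
import math

n = 1024
k = int(math.log2(n))        # chunk length log2(n) = 10
N = n // k                   # number of chunks, n / log(n)
p = 0.001                    # per-chunk error probability (assumed)
exact = 1 - (1 - p) ** N     # block fails if any chunk fails
union = N * p                # union-bound approximation
print(N, round(exact, 4), round(union, 4))
```

Even a per-chunk error probability of 0.001 turns into a block error probability near 0.1 here, which matches the text's complaint that the first stage alone is not good enough.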
We can see from the above argument that the error probability with the first-stage coding alone is very large. Could we improve this situation?
What if we transmit several copies of each chunk, i.e., a repetition code, in order to decrease the error probability of each chunk? This method does decrease the error probability of a single chunk, but the decrease is not that large; in addition, it decreases the transmission rate by the repetition factor. Therefore, this method is not good enough.
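To see why repetition helps only modestly while the rate drops by the repetition factor, consider this sketch. It assumes a simplified model in which a whole chunk is received incorrectly with probability p = 0.1 and the decoder takes a majority vote over r copies (the function name and values are ours):

```python
from math import comb

def majority_error(p, r):
    """Error probability of r-fold repetition with majority-vote decoding,
    when each copy of the chunk is wrong independently with probability p."""
    return sum(comb(r, i) * p**i * (1 - p)**(r - i)
               for i in range(r // 2 + 1, r + 1))

p = 0.1
for r in (1, 3, 5):
    print(r, round(majority_error(p, r), 5), f"rate scaled by 1/{r}")
```

The error probability falls from 0.1 to about 0.028 at r = 3, a useful but not dramatic gain, while the rate is cut to a third, which is the trade-off the text objects to.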
5, The Second Stage
In the lecture notes on source coding, we saw that for n independent Bernoulli(p) trials the expected number of heads is np, and the probability that the number of heads lies far outside np (say, beyond a constant factor) is less than 2^(-δn) for some δ > 0, a Chernoff-type concentration bound.
In our case, there are n/log(n) chunks and each chunk has error probability p, so the expected number of chunks decoded correctly is (n/log(n))·(1 - p) and the expected number of chunks decoded incorrectly is (n/log(n))·p. Let us denote n/log(n) by N.
Then Pr(number of chunks decoded incorrectly > 2Np) ≤ 2^(-δN) for some δ > 0, by the concentration bound above, since N → ∞ as n → ∞. Let E denote the event that no more than 2Np chunks are decoded incorrectly; then Pr(E) ≥ 1 - 2^(-δN), which tends to 1.
Therefore, we can apply a Reed-Solomon code to the chunks produced by the first stage, which is the second stage of the concatenated code: as long as the outer Reed-Solomon code can correct up to 2Np chunk errors, the whole block is decoded correctly whenever the event E occurs. The encoding process is shown in the following figure:
The following figure shows the decode process:
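The outer-code parameters can be chosen by simple arithmetic. A sketch, assuming an (N, K) Reed-Solomon code that corrects up to (N - K)/2 symbol errors, with illustrative values N = 255 and p = 0.02:

```python
import math

N, p = 255, 0.02                  # outer length N, per-chunk error p (assumed)
t_needed = math.ceil(2 * N * p)   # tolerate up to 2Np chunk errors
K = N - 2 * t_needed              # RS dimension so that (N - K) // 2 == t_needed
print(K, (N - K) // 2, round(K / N, 3))  # message symbols, correctable errors, outer rate
```

Here the outer code sacrifices less than 10% of the rate yet corrects every pattern of up to 2Np chunk errors, so the overall error probability is driven by the exponentially small probability that E fails.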