A nice application of sampling theory and the concept of sparsity is error correction codes for real and complex numbers [105]. In the next section, we shall see that similar methods can be employed for decoding block and convolutional codes.
Figure 12 shows convolutional encoders of rate 1/2 with finite constraint length [105] and infinite precision per symbol. Figure 12a is a systematic convolutional encoder; if the FIR filter acts as an ideal interpolating filter, it resembles the oversampled signal discussed in Section 3. Figure 12b is a non-systematic encoder used in the simulations discussed subsequently. In the case of additive impulsive noise, errors can be detected from the side information that the original oversampled signal has frequency gaps: the spectral content appearing in those gaps plays the role of a syndrome. In the following subsections, various decoding algorithms and simulation results are given for both block and convolutional codes. Some of these algorithms are also applicable to other problems, such as spectral and channel estimation.
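The gap-as-syndrome idea can be illustrated with a minimal sketch (not the paper's decoding algorithm): a length-n signal whose DFT occupies only the first k bins stands in for the oversampled signal, the remaining n − k bins are the known frequency gap, and an additive impulse lights up that gap because an impulse spreads its energy over all frequencies. All names and sizes below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 32, 8  # signal length and occupied low-frequency band (assumed values)

# Build a bandlimited ("oversampled") signal: random spectrum in the first
# k bins, zeros in the remaining n - k bins (the frequency gap).
spectrum = np.zeros(n, dtype=complex)
spectrum[:k] = rng.standard_normal(k) + 1j * rng.standard_normal(k)
x = np.fft.ifft(spectrum)

# Syndrome = DFT values inside the gap; numerically zero for a clean signal.
clean_syndrome = np.fft.fft(x)[k:]
assert np.max(np.abs(clean_syndrome)) < 1e-9

# An additive impulsive error at position p contributes amplitude * exp(...)
# to every DFT bin, so the syndrome becomes nonzero and the error is detected.
p, amplitude = 13, 5.0
y = x.copy()
y[p] += amplitude
syndrome = np.fft.fft(y)[k:]
print(np.max(np.abs(syndrome)) > 1.0)  # True: impulsive error detected
```

Locating and correcting the error from the syndrome values is what the decoding algorithms in the following subsections address; this sketch only shows why detection is possible.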
Due to the presence of additive noise, it is reasonable to look for a polynomial of degree less than m while also taking the complexity order into account. In MDL, the treatment of complexity order is borrowed from information theory: given a specific statistical distribution, we can find an optimum source coding scheme (e.g., Huffman coding) that attains the lowest average code length for the symbols. Furthermore, if p(s) is the distribution of the source s and q(s) is another distribution, we have [138] that the average code length under the mismatched code is never smaller than under the optimal one: −Σ_s p(s) log q(s) ≥ −Σ_s p(s) log p(s), with equality if and only if q = p.
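The inequality above can be checked numerically: coding a source p with a code designed for a mismatched distribution q costs at least as many bits per symbol, on average, as coding it with its own optimal code. The two distributions below are illustrative assumptions.

```python
import numpy as np

p = np.array([0.5, 0.25, 0.25])  # true source distribution (assumed)
q = np.array([0.4, 0.4, 0.2])    # mismatched coding distribution (assumed)

# H(p): average code length of the optimal code for p, in bits/symbol.
entropy = -np.sum(p * np.log2(p))
# Cross-entropy: average length when symbols from p use a code built for q.
cross_entropy = -np.sum(p * np.log2(q))

print(entropy, cross_entropy)
assert cross_entropy >= entropy  # gap equals the Kullback-Leibler divergence
```

The excess cost, cross-entropy minus entropy, is exactly the Kullback-Leibler divergence D(p‖q), which is why MDL can trade model fit against description length in a principled way.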