In this article, a hybrid decoding algorithm for Reed–Muller codes is presented. Unlike the conventional algorithm, the presented algorithm ends the recursive decomposition as soon as RM(1, m) and RM(m−1, m) appear. A simplified maximum-likelihood algorithm based on the fast Hadamard transform is also exploited, and the systematic single parity-check code is decoded through its special structure. As a result, the presented hybrid decoding algorithm significantly reduces the number of floating-point multiplications compared with the conventional algorithms. In addition, the new algorithm has better error performance than the conventional ones.
Error-correcting codes (ECC) introduce controlled redundancy to correct errors that may occur in the channel during signal transmission. The error-correction capability is achieved by mapping words from the k-dimensional information space to the n-dimensional code space, where k < n in a general block coding scheme. This implies that a channel coder takes k-bit binary blocks as input and maps them to corresponding n-bit binary blocks as output, which are then transmitted over the channel. In general, all 2^k possible information words are assumed to be equiprobable, and each of those words is uniquely mapped to one of the 2^n available vectors in the code space. Thus, different codewords are separated from each other by some minimum distance.
A binary code C is linear if the sum of any two codewords in C is also a legal codeword in C, that is, C is closed under modulo-2 addition in GF(2)^n. Thus, a linear code C is a linear subspace of GF(2)^n. Let G be a matrix whose rows form a basis of C. This matrix is called the generator matrix of the code C. If G has k rows and n columns, then it defines a linear code of dimension k and length n. The encoding of an information word x into a codeword c is performed by c = xG, where all arithmetic is carried out modulo 2.
Thus, the matrix G can be considered as a compact representation of a linear code.
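As a minimal sketch of this matrix view, the following Python snippet encodes an information word as c = xG over GF(2). The (7, 4) Hamming generator matrix used here is only a hypothetical example for illustration, not a matrix from this article.

```python
def encode(x, G):
    """Encode information word x (length k) as c = x*G (mod 2)."""
    n = len(G[0])
    return [sum(x[i] * G[i][j] for i in range(len(G))) % 2 for j in range(n)]

# Generator matrix of the (7, 4) Hamming code, chosen only for illustration.
G = [
    [1, 0, 0, 0, 1, 1, 0],
    [0, 1, 0, 0, 1, 0, 1],
    [0, 0, 1, 0, 0, 1, 1],
    [0, 0, 0, 1, 1, 1, 1],
]

print(encode([1, 0, 1, 1], G))  # -> [1, 0, 1, 1, 0, 1, 0]
```

Because the first k columns of this particular G form an identity matrix, the information word reappears verbatim in the first k codeword positions (a systematic encoding).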
Binary Reed–Muller (RM) codes RM(r, m), 0 ≤ r ≤ m, are linear block codes of length n, dimension k, and minimum Hamming distance d. The code can be defined as the set of all mappings f(x1, …, xm), where f represents an m-variate Boolean polynomial of degree less than or equal to r.1,2 Decoding of the RM codes is commonly carried out by a recursive algorithm called the Plotkin algorithm.3–5 In this algorithm, the recursion ends when either a repetition code RM(0, m′) or a full-space code RM(m′, m′) appears. That is, the algorithm decomposes a received word down to the final stage.
In this article, a hybrid decoding algorithm for RM codes is presented and analyzed. According to the stage of the decomposition process, the proposed hybrid algorithm exploits the Green machine and Wagner decoding, as well as the traditional Plotkin recursive algorithm. Unlike the conventional algorithms, the recursive decomposition in the presented algorithm ends when RM(1, m′) and RM(m′−1, m′) appear. That is, the presented scheme ends the recursion one step earlier than the conventional ones, so that the computational complexity of decoding RM codes can be reduced. In addition, a simplified maximum-likelihood (ML) decision based on the fast Hadamard transform (FHT) is also exploited, and the single parity-check code is decoded in its systematic form. Hence, the presented hybrid decoding algorithm can reduce the computational complexity significantly. Computer simulation also shows that the new decoding algorithm has improved error performance compared with the conventional ones.
The remainder of this article is organized as follows. In section “RM codes,” the RM codes are briefly introduced, and the conventional recursive decoding algorithm for the RM codes is presented. In section “A hybrid recursive decoding for RM codes,” the hybrid recursive decoding algorithm is presented in detail. In section “Performance analysis,” a numerical analysis of the decoding complexity of the proposed algorithm is provided, and the error performance of the proposed algorithm is compared with that of the conventional ones. Finally, some concluding remarks are drawn in section “Conclusion.”
RM codes
Binary RM codes RM(r, m), 0 ≤ r ≤ m, are a kind of linear block code of length n, dimension k, and minimum distance d such that4,5
n = 2^m, k = Σ_{i=0}^{r} C(m, i), d = 2^{m−r}, where C(m, i) denotes the binomial coefficient. Let x = (x1, …, xm), xi ∈ GF(2), and let a Boolean polynomial f(x) of degree at most r map vectors x ∈ GF(2)^m into GF(2). Then, the code can be defined as the set of all such mappings f. It implies that each m-variate Boolean polynomial of degree less than or equal to r defines a codeword as the vector of its values on GF(2)^m.6 As an example, the first-order code RM(1, 2) of length 4 has eight codewords defined by polynomials of the form a0 + a1x1 + a2x2, where ai ∈ GF(2). The truth tables on GF(2)^2 of the following monomials
1, x1, and x2 form the basis for the code. It means that any codeword of RM(1, 2) can be represented as their linear combination with binary coefficients. This, in turn, implies that the monomials of degree up to r, that is, the products x_{i1}x_{i2}⋯x_{is} with 1 ≤ i1 < ⋯ < is ≤ m and s ≤ r, also form a basis of the code RM(r, m).
Since x_i^2 = x_i when x_i ∈ GF(2), the code contains only m-variate Boolean multilinear polynomials. Such a polynomial can be expressed as a sum of the monomials in equation (2) with binary coefficients, and there exist 2^k such sums. Conversely, any binary vector of length 2^m can be regarded as the truth table of an m-variate Boolean function, and there exist 2^{2^m} different binary strings of length 2^m. All the sums of monomials in equation (2) are distinct, as the rows of the truth tables are linearly independent vectors in GF(2)^{2^m}.
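The monomial basis described above can be generated programmatically. The sketch below builds the truth tables of all monomials of degree at most r and checks that all binary-coefficient sums are distinct; `rm_generator` and `span` are hypothetical helper names used only for this illustration.

```python
from itertools import combinations, product

def rm_generator(r, m):
    """Rows = truth tables on GF(2)^m of all monomials of degree <= r."""
    points = list(product([0, 1], repeat=m))
    rows = []
    for deg in range(r + 1):
        for vs in combinations(range(m), deg):
            # the empty product (deg = 0) is the all-ones monomial "1"
            rows.append([int(all(p[i] for i in vs)) for p in points])
    return rows

def span(G):
    """All binary-coefficient sums of the rows of G."""
    n = len(G[0])
    words = set()
    for coeffs in product([0, 1], repeat=len(G)):
        words.add(tuple(sum(c * row[j] for c, row in zip(coeffs, G)) % 2
                        for j in range(n)))
    return words

C = span(rm_generator(1, 2))   # RM(1, 2): monomials 1, x1, x2
print(len(C))                  # -> 8 distinct codewords of length 4
```

The distinctness of all sums confirms that the monomial truth tables are linearly independent, matching the dimension count k = Σ C(m, i).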
To explain the recursive decoding of the Plotkin algorithm,3 let u and v be codewords of (n, k_u) and (n, k_v) linear block codes C_u and C_v, respectively. Define a new, longer code C as6,7 C = {(u, u ⊕ v) : u ∈ C_u, v ∈ C_v}.
Then, the code C is also a linear code of length 2n and dimension k_u + k_v.8 Indeed, the length and dimension follow from the generator matrix

G = | G_u  G_u |
    |  0   G_v |
where G_u and G_v are the generator matrices of C_u and C_v, respectively. An example of the conventional generator matrix of an RM code, which has the repetitive structure given in equation (6), is illustrated in Figure 1, where ∘ represents a binary 0 and • represents a binary 1.
Figure 1. The conventional generator matrix.
For the codes u and v, since any nonzero codeword satisfies wt(u, u ⊕ v) ≥ min(2 wt(u), wt(v)), the minimum distance of C equals d = min(2d_u, d_v), where d_u and d_v are the minimum distances of C_u and C_v. Here, wt(x) denotes the Hamming weight of a codeword x.
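A small sketch of the (u, u ⊕ v) construction: stacking [G_u G_u] over [0 G_v] and checking length, dimension, and the minimum distance min(2d_u, d_v) by brute force. Building RM(1, 2) from the full code RM(1, 1) and the repetition code RM(0, 1) is an illustrative choice, and the helper names are hypothetical.

```python
from itertools import product

def plotkin(Gu, Gv):
    """Generator of C = {(u, u+v)}: [Gu | Gu] stacked over [0 | Gv]."""
    n = len(Gu[0])
    return [row + row for row in Gu] + [[0] * n + row for row in Gv]

def min_distance(G):
    """Minimum weight over all nonzero codewords spanned by G (brute force)."""
    n, k = len(G[0]), len(G)
    best = n
    for coeffs in product([0, 1], repeat=k):
        if any(coeffs):
            w = sum(sum(c * row[j] for c, row in zip(coeffs, G)) % 2
                    for j in range(n))
            best = min(best, w)
    return best

Gu = [[1, 0], [0, 1]]   # RM(1, 1): the full space of length 2, d_u = 1
Gv = [[1, 1]]           # RM(0, 1): the repetition code, d_v = 2
G = plotkin(Gu, Gv)     # generator of RM(1, 2): length 4, dimension 3
print(len(G), len(G[0]), min_distance(G))  # -> 3 4 2  (d = min(2*1, 2))
```

The brute-force distance check is exponential in k and is meant only to verify the toy example, not as a practical tool.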
RM codes can be recursively defined using the Plotkin structure. Let f be an m-variate polynomial of degree at most r. As f belongs to RM(r, m), it can be decomposed as

f(x1, …, xm) = g(x1, …, x_{m−1}) ⊕ x_m h(x1, …, x_{m−1})

where the degree of g is at most r and that of h is at most r − 1, and g and h do not depend on the coordinate x_m any more. This implies that g is a codeword of RM(r, m − 1) and h belongs to RM(r − 1, m − 1). Note that f = g on the subcube {x_m = 0} and f = g ⊕ h on the second subcube, that is, when x_m = 1.5 Thus, we can partition the initial hypercube GF(2)^m into two subcubes that differ only in the value of the coordinate x_m. Since the function g does not depend on x_m, it generates identical vectors on both of these subcubes, which correspond to the u components in the Plotkin construction.
Codewords u and v can be decomposed further to obtain successively all codes RM(r′, m′) for r′ ≤ r and m′ < m. For example, after r steps of such decomposition along the v path, the repetition code RM(0, m − r) is obtained, and the full code RM(r, r) can be found after m − r steps along the u path. By continuing this process on codes RM(r, m − 1) and RM(r − 1, m − 1), we obtain RM codes of length 2^{m−2}, and so on. Finally, we arrive at the end nodes, which consist of repetition codes RM(0, m′) for any m′ and full spaces RM(m′, m′). Such a decomposition process for RM codes of length 8 is schematically shown in Figure 2.5
Let a be a block of k information bits that encodes a vector c ∈ RM(r, m). By decomposing this vector into u and v, a can be split into two information subblocks a_u and a_v that encode vectors u and v, respectively. The information subblocks are split further until the end nodes RM(0, m′) or RM(m′, m′) are reached. The information decomposition processes are shown in Figure 3 (the left branch of the decomposition path is referred to as the v path and the right branch is called the u path in the following). It can be observed that only one information bit is assigned to a left-end (repetition) code RM(0, m′), while 2^{m′} bits are assigned to a right-end code RM(m′, m′). The latter bits are encoded by the identity generator matrix. Any codeword can, therefore, be encoded from the information strings assigned to the end nodes RM(0, m′) and RM(m′, m′) by repetitively combining codewords u and v in the construction.
Figure 3. A general decomposition of the information path.
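The information-bit allocation along this decomposition can be checked with the recursion k(r, m) = k(r, m−1) + k(r−1, m−1), with one bit at each repetition end node and 2^{m′} bits at each full-space end node. This is a small sanity sketch; `num_info` is a hypothetical helper name.

```python
from math import comb

def num_info(r, m):
    """Dimension of RM(r, m) via the Plotkin split of the information block."""
    if r == 0:
        return 1            # repetition end node: one information bit
    if r == m:
        return 1 << m       # full-space end node: 2^m information bits
    return num_info(r, m - 1) + num_info(r - 1, m - 1)

# The recursion agrees with the closed form k = sum_{i<=r} C(m, i).
for m in range(1, 8):
    for r in range(m + 1):
        assert num_info(r, m) == sum(comb(m, i) for i in range(r + 1))

print(num_info(2, 5))   # -> 16
```

The agreement follows from Pascal's rule C(m, i) = C(m−1, i) + C(m−1, i−1) applied term by term to the dimension formula.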
A hybrid recursive decoding for RM codes
Since the Plotkin algorithm was proposed,3 much research has been carried out to develop or improve the recursive decoding algorithm.4–7,9 We propose a new recursive decoding algorithm for the RM code in which the recursive decomposition ends one step earlier than in the conventional one. Moreover, for a single parity-check code, we use its systematic form introduced in Fossorier et al.9 According to the stage of the decomposition process, the proposed hybrid algorithm exploits different decoding schemes, including the Green machine, Wagner decoding, and the traditional Plotkin recursive algorithm. An FHT is exploited to simplify the ML decoding algorithm of a dual-orthogonal code. We also present a simplified ML decoder for the systematic single parity-check code based on its special structure.
In the conventional algorithm, the recursion ends when the decomposition processes reach either RM(0, m′) or RM(m′, m′). However, the new algorithm ends its recursion at the nodes RM(1, m′) and RM(m′−1, m′), that is, the decomposition processes end one step earlier than in the conventional one. For example, the recursion ends as soon as such a node first appears, as shown in Figure 4. Thus, the new generator matrix has the structure shown in Figure 5. As can be observed, the new decomposition has fewer steps and a far sparser generator matrix than the conventional one shown in Figure 1. The new method can provide a good compromise between decoding complexity and performance.
Figure 4. An example of the new recursive decomposition path.
Figure 5. The improved generator matrix.
Let c = Enc(a) be the encoding algorithm of RM(r, m), where a and c are the input and the output of the encoder, respectively. The algorithm is executed as
Encoder:
In these encoding processes, ⊕ represents modulo-2 addition.
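A compact sketch of the recursive (u, u ⊕ v) encoder implied by the construction above. Assigning the leading information bits to the u path is an assumed convention for illustration, and `rm_encode`/`dim` are hypothetical names, not identifiers from the article.

```python
from itertools import product
from math import comb

def dim(r, m):
    """Dimension of RM(r, m)."""
    return sum(comb(m, i) for i in range(r + 1))

def rm_encode(r, m, info):
    """Recursive Plotkin encoder: c = (u, u XOR v)."""
    if r == 0:                      # repetition code: one information bit
        return [info[0]] * (1 << m)
    if r == m:                      # full space: identity generator matrix
        return list(info)
    ku = dim(r, m - 1)              # u-path bits first (illustrative convention)
    u = rm_encode(r, m - 1, info[:ku])
    v = rm_encode(r - 1, m - 1, info[ku:])
    return u + [a ^ b for a, b in zip(u, v)]

# RM(1, 3): 16 codewords of length 8, minimum nonzero weight 4
words = {tuple(rm_encode(1, 3, list(a)))
         for a in product([0, 1], repeat=dim(1, 3))}
print(len(words), min(sum(w) for w in words if any(w)))  # -> 16 4
```

The check on RM(1, 3) confirms the expected parameters n = 2^3 = 8, k = 4, and d = 2^{3−1} = 4.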
Assume that the coded bit sequence is modulated by binary phase-shift keying (BPSK) in the transmitter and is transmitted in an additive white Gaussian noise (AWGN) environment. We also assume that the transmitted signals are s_i = (−1)^{c_i} ∈ {+1, −1}. After demodulation, the decoder calculates the a posteriori probabilities of the received signals y_i.
Those a posteriori probabilities compose a vector y that serves as the input of the decoding algorithm. We use (â, ĉ) = Dec(y) to denote the decoding algorithm, where â is the decoded information block and ĉ is the decoded codeword.
For RM(r, m), the decoder calls the decoder of RM(r − 1, m − 1) for v path decoding and the decoder of RM(r, m − 1) for u path decoding, respectively, and then merges the results of the two decoding paths. At the end of the v path, for RM(1, m′), the Green machine is employed for decoding.2,10 The Green machine produces the ML estimate. We need to transform y to get a log-likelihood ratio (LLR) vector l as the input of the Green machine.
Step 1: Apply the FHT to get the ML estimate instead of the LLR to reduce the computational complexity.4
Step 2: At the end of the u path, for RM(m′−1, m′), apply the Wagner decoding algorithm10 to y.
Step 3: Compute the decoded output components ĉ = (û, û ⊕ v̂) and â = (â_u, â_v).
The algorithm can be implemented as
Decoder:
Green machine for step 1.
Wagner decoder for step 2.
Decoding of the u and v paths for step 3.
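The two end-node decoders can be sketched as follows: an FHT-based ML decoder for the first-order code (the role of the Green machine) and a Wagner decoder for the single parity-check code. Inputs are assumed to be soft values y_i ≈ (−1)^{c_i} (BPSK over AWGN); the function names are hypothetical.

```python
def fht(y):
    """Fast Hadamard (Walsh) transform using m*2^m additions."""
    t = list(y)
    h = 1
    while h < len(t):
        for i in range(0, len(t), 2 * h):
            for j in range(i, i + h):
                a, b = t[j], t[j + h]
                t[j], t[j + h] = a + b, a - b
        h *= 2
    return t

def green_decode(y):
    """ML decoding of RM(1, m): pick the Walsh coefficient of largest magnitude."""
    t = fht(y)
    k = max(range(len(t)), key=lambda i: abs(t[i]))
    a0 = 1 if t[k] < 0 else 0      # sign of the peak gives the constant term
    # codeword bits: c_i = <i, k> XOR a0, with <.,.> the GF(2) inner product
    return [bin(i & k).count("1") % 2 ^ a0 for i in range(len(y))]

def wagner_decode(y):
    """Wagner decoding of the even-parity (single parity-check) code."""
    c = [0 if v >= 0 else 1 for v in y]                  # hard decisions
    if sum(c) % 2 == 1:                                  # parity fails: flip
        i = min(range(len(y)), key=lambda j: abs(y[j]))  # the least reliable bit
        c[i] ^= 1
    return c

print(wagner_decode([0.9, -1.1, 0.2, 0.8]))  # -> [0, 1, 1, 0]
```

In the hybrid scheme described above, a decoder of this shape would terminate the v path at RM(1, m′) with `green_decode` and the u path at RM(m′−1, m′) with `wagner_decode`.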
Performance analysis
Computational complexity
In general, if the recursive algorithm stops the decomposition at the end nodes RM(0, m′) and RM(m′, m′), the complexity tends to be minimal. Unlike the conventional algorithm, the proposed algorithm exploits a simplified ML decoder that decodes the systematic single parity-check code through its special structure. For the analysis of the computational complexity, the number of floating-point operations needed in each algorithm is used as the main measure.
For RM(r, m), the number of floating-point operations required for step 3 of the hybrid decoding method is computed as
where the two quantities represent the number of floating-point multiplications and that of additions, respectively.
Since the FHT is exploited, the decoder does not execute multiplications at the end of the v path. In the FHT, m′2^{m′} floating-point additions are needed, and further additions are required to find the maximum absolute value of the transform output. Thus, the number of floating-point operations needed in this process can be expressed as
At the end of the u path, for RM(m′−1, m′), the decoder makes hard decisions and a modulo-2 addition. If the result of the modulo-2 addition is 0, no floating-point operations are carried out. Otherwise, floating-point additions must be executed to find the minimum absolute value of y. Thus, the number of floating-point operations required is
From equations (11)–(13), the number of floating-point operations for decoding is given by
Equation (14) is satisfied with equality when all checksums in equation (13) equal 1. The numbers of floating-point multiplications and additions with respect to the values of r and m are presented in Tables 1 and 2, respectively.
Table 1. The number of floating-point multiplications of RM(r, m).

r\m     2     3     4     5     6     7     8     9    10
1       0     0     0     0     0     0     0     0     0
2       –     0    24    72   168   360   744  1512  3048
3       –     –     0    72   240   600  1344  2856  5904
4       –     –     –     0   168   600  1584  3696  8088
5       –     –     –     –     0   360  1344  3606  8928
6       –     –     –     –     –     0   744  2856  8088
7       –     –     –     –     –     –     0  1512  5904
8       –     –     –     –     –     –     –     0  3048
9       –     –     –     –     –     –     –     –     0
Table 2. The number of floating-point additions of RM(r, m).

r\m      2      3      4      5      6      7      8      9      10
1        3     31     79    191    447   1023   2303   5119  11,263
2        –      7     54    165    420    995   2274   5089  11,232
3        –      –     15    101    330    878   2129   4915  11,028
4        –      –      –     31    196    654   1788   4429  10,368
5        –      –      –      –     63    387   1297   3597   9050
6        –      –      –      –      –    127    770   2579   7200
7        –      –      –      –      –      –    255   1537   5140
8        –      –      –      –      –      –      –    511   3072
9        –      –      –      –      –      –      –      –   1023
It is difficult to derive a simple closed-form relationship between the complexity and the codeword length. Hence, the maximum numbers of multiplications and additions have been roughly bounded as
For the v path of RM(r, m), the recursion ends at RM(1, m − r + 1). The computational complexity can be expressed as
Since the complexity to arrive at the end of the u path is unchanged, the number of floating-point operations for decoding along this path is given as
Hence, the number of operations needed is bounded by
For the u path of RM(r, m), the recursion ends when RM(m′−1, m′) appears. If the decoder only makes hard decisions, neither floating-point multiplications nor floating-point additions are executed. Thus, the complexity is
For the opposite case, the number of floating-point operations is
The maximum numbers of floating-point multiplications and additions for decoding are shown in Figures 6 and 7, respectively, where NAV and NMV represent the number of additions and that of multiplications for the v path decoding process, respectively. NAU and NMU are the number of additions and that of multiplications for the u path decoding process, respectively. NA and NM are the number of additions and that of multiplications executed in the new decoding process, respectively.
Figure 6. The maximum number of floating-point multiplications with respect to the values of r.
Figure 7. The maximum number of floating-point additions with respect to the values of r.
It is shown that NAV requires the fewest additions. In the case of multiplication operations, the proposed method (NM) has the least complexity. As multiplication generally affects the computational complexity much more than addition, the proposed algorithm reduces the computational burden significantly. The maximum number of operations for the decoding processes of the u path is also bounded by
Error performance
When the ML decoding algorithm is exploited, it is known that step-by-step decomposition down to the trivial end nodes does not give good performance. Unlike the conventional algorithm, the proposed algorithm stops the recursive decomposition one step earlier. In addition, a systematic form is exploited; systematic codes are known to have better error performance than nonsystematic codes.9 For RM(m′−1, m′), if the decomposition were continued, the systematic form would be lost.
The u path decoding depends on the results of the v path decoding. If there are errors in the v path decoding, the u path decoding is not likely to obtain correct results, so v path decoding is generally more important than u path decoding. Ending the recursive decomposition one step earlier in the v path thus tends to improve performance more than doing so in the u path. To evaluate the error performance of the proposed decoding algorithm, computer simulation has been carried out. The coded bits are modulated by BPSK, and the transmission environment is an AWGN channel with zero mean and power spectral density N0/2. The results of the computer simulation for the three decoding algorithms, applied to decode a low-rate and a high-rate RM code, are compared in Figure 8, where the curves correspond to the proposed decoding algorithm and the v path and u path decoding algorithms, respectively.
Figure 8. Bit-error rates of the three algorithms.
It can be observed that the proposed decoding algorithm has the best error performance. As expected, the v path decoding method has a lower bit-error rate (BER) than the u path decoding method. For the low-rate code, to achieve the target BER, the proposed algorithm requires about 0.2 and 0.8 dB less signal-to-noise ratio (SNR) than the v path and u path algorithms, respectively. In the case of the high-rate RM code, the performance differences among the three algorithms are enlarged compared with those for the low-rate code.
Conclusion
In this article, an improved recursive decoding algorithm for RM codes is proposed and analyzed. Unlike the conventional algorithms, the recursion processes end when RM(1, m′) and RM(m′−1, m′) appear, and systematic decoding is exploited for the processing of RM(m′−1, m′). Due to the one-step-earlier end of the decomposition process, the proposed algorithm can reduce the number of floating-point multiplication operations significantly. The numerical analysis verifies that the proposed algorithm requires far fewer floating-point multiplications than the two existing algorithms. In addition, the proposed algorithm provides an improved BER compared with the conventional v path and u path decoding algorithms.
Footnotes
Academic Editor: Soo Park
Declaration of conflicting interests
The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Funding
The author(s) disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: This work was supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education (no. 2015R1D1A1 A01060057). Zhenxing Chen was supported by the National Natural Science Foundation of China (NSFC) under grant no. 61401409.
References
1.
Gallager RG. Information theory and reliable communication. Hoboken, NJ: John Wiley & Sons, 1968.
2.
Wicker SB. Error control systems for digital communication and storage. Upper Saddle River, NJ: Prentice Hall, 1995.
3.
Plotkin M. Binary codes with specified minimum distance. IRE T Inform Theory 1960; 6(4): 445–450.
4.
Dumer I. Soft-decision decoding of Reed-Muller codes: a simplified algorithm. IEEE T Inform Theory 2006; 52(3): 954–963.
5.
Dumer I. Recursive decoding and its performance for low-rate Reed-Muller codes. IEEE T Inform Theory 2004; 50(5): 811–823.
6.
Hemmati F. Closest coset decoding of |u|u+v| codes. IEEE J Sel Area Comm 1989; 7(6): 982–988.
7.
Kabatyanski GA. On decoding of Reed-Muller codes in semicontinuous channels. In: Proceedings of the 2nd international workshop on algebraic and combinatorial coding theory, Leningrad, September 1990, pp. 87–91. IMI-BAS & IITP-RAS.
8.
MacWilliams FJ, Sloane NJA. The theory of error-correcting codes. Amsterdam: North-Holland Publishing Company, 1981.
9.
Fossorier MPC, Lin S, Rhee D. Bit-error probability for maximum-likelihood decoding of linear block codes and related soft-decision decoding methods. IEEE T Inform Theory 1998; 44(7): 3083–3090.
10.
Silverman RA, Balser M. Coding for constant-data-rate systems–Part I: a new error-correcting code. Proc IRE 1954; 42(9): 1428–1435.