Summary
In recent years, neural network decoders for polar codes, such as neural belief propagation (BP), have been introduced. These methods use deep learning to transform the factor graph into a neural network by unfolding the decoding iterations, thereby improving the accuracy of conventional decoding. However, prevailing approaches compute the loss function using only the output of the final layer. Our analysis shows that computing the loss from the last layer alone significantly slows the decoder's convergence, especially when the number of unfolded iterations is large. In this paper, we incorporate the outputs of all iterations into the loss function while retaining the original neural BP structure. Additionally, we optimize the loss function by assigning different weights to the losses at different iterations. As a result, the weighted loss function not only achieves a lower bit error rate (BER) than the original neural BP decoder at low SNR, but also accelerates convergence.
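To illustrate the idea of a per-iteration weighted loss, the sketch below combines the soft bit estimates produced after each unfolded BP iteration into a single training objective. This is only a minimal, hypothetical example, not the paper's exact formulation: the function name, the per-iteration weighting scheme, and the use of a binary cross-entropy criterion are assumptions for illustration.

```python
import torch
import torch.nn as nn

def weighted_multi_iteration_loss(iter_outputs, targets, weights=None):
    """Hypothetical sketch of a weighted multi-iteration loss.

    iter_outputs: list of T tensors, each of shape [batch, N], holding the
                  soft bit estimates (logits) after each unfolded BP iteration.
    targets:      tensor of shape [batch, N] with the transmitted bits (0/1).
    weights:      optional list of T scalars; uniform weighting if omitted.
                  The actual weighting used in the paper may differ.
    """
    T = len(iter_outputs)
    if weights is None:
        weights = [1.0 / T] * T          # assumed default: equal weights
    bce = nn.BCEWithLogitsLoss()
    loss = torch.zeros(())
    for w, out in zip(weights, iter_outputs):
        # Accumulate the (weighted) loss contributed by each iteration's output,
        # instead of penalizing only the final layer.
        loss = loss + w * bce(out, targets)
    return loss
```

Under this kind of objective, earlier iterations receive a direct training signal as well, which is one plausible way to realize the faster convergence described above.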