Jayant Sharma received his MS in Electronics and Communication Engineering (ECE). His research was supervised by Dr. Lalitha Vadlamani. Here is a summary of his research on designing neural network decoders and autoencoders for channel coding:
In communication systems, the reliable transmission of message bits is central to the overall efficiency of the system. Errors introduced during transmission are corrected using error-correcting codes, which have traditionally been designed with tools from abstract algebra. Within coding theory, the search for good decoders and channel code designs has long been an active area of research.
Recently, with the availability of powerful GPUs, neural networks have become a tool of choice for tasks such as function approximation and algorithm learning, and there has been a surge of research on developing end-to-end codes and decoders for channel coding applications. The earliest applications of neural networks to channel coding explored modulation and demodulation designs for various channel settings. These were followed by neural decoders for various existing families of codes, and then by designs of end-to-end communication systems and autoencoders.
We have designed a decoder for a coded modulation technique called trellis-coded modulation (TCM). The design is based on RNN and CNN variants of neural networks and shows that neural networks can decode coded-modulation messages as effectively as the existing codeword-decoding algorithms. The trained model is tested for robustness to channel models not seen during training, where the proposed decoder outperforms the existing decoder. The thesis also presents an interpretation of how the proposed decoder works, showing how a CNN with a softmax activation function classifies symbols into decision regions.
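The idea of classifying received symbols into decision regions with a softmax output can be illustrated with a minimal NumPy sketch. This is not the thesis model: instead of a trained CNN, it uses the AWGN likelihoods directly, since a softmax over scaled negative squared distances to the constellation points is exactly the kind of soft decision-region map such a network learns. The 8-PSK constellation, noise level, and function names are illustrative assumptions.

```python
import numpy as np

def psk_constellation(m=8):
    """Return the m complex points of an m-PSK constellation (assumed example)."""
    angles = 2 * np.pi * np.arange(m) / m
    return np.exp(1j * angles)

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)  # subtract max for numerical stability
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def soft_decisions(received, constellation, noise_var):
    """Per-symbol class probabilities for each received sample.

    Under AWGN, P(symbol k | y) is proportional to exp(-|y - s_k|^2 / noise_var),
    i.e. a softmax over scaled negative squared distances -- the soft decision
    regions that a CNN + softmax output layer approximates after training.
    """
    d2 = np.abs(received[:, None] - constellation[None, :]) ** 2
    return softmax(-d2 / noise_var, axis=1)

# Transmit random 8-PSK symbols over an AWGN channel and classify them.
rng = np.random.default_rng(0)
const = psk_constellation(8)
tx = rng.integers(0, 8, size=1000)
noise = rng.normal(0, 0.1, 1000) + 1j * rng.normal(0, 0.1, 1000)
rx = const[tx] + noise

probs = soft_decisions(rx, const, noise_var=0.02)
decisions = probs.argmax(axis=1)  # hard decision = most probable region
print("symbol error rate:", np.mean(decisions != tx))
```

The `argmax` over the softmax output recovers the hard decision regions, while the probabilities themselves serve as soft information for a downstream trellis decoder.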
In another work, we have implemented a neural-network autoencoder for the product code topology, drawing on ideas from turbo product codes. The code is split into its component row and column codes: encoding applies the row and column codes one after another to generate the codeword, and the decoder uses extrinsic-information extraction equations to decode the received word. The received codeword is decoded row-wise and column-wise to obtain confidence values for every bit of the transmitted codeword, and these values then feed the information extraction equations. The autoencoder is trained for the AWGN channel and shows a performance improvement over uncoded BPSK-modulated messages. Encoder performance can be improved by training with longer codewords, and the use of individually trained row and column component codes may improve the proposed autoencoder further.
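The row-then-column encoding structure described above can be sketched in a few lines of NumPy. As a stand-in assumption, the component code here is a simple single parity-check (SPC) code; in the thesis the component encoders are learned networks, but the product topology, rows encoded first and then columns, is the same.

```python
import numpy as np

def spc_encode(bits):
    """Append an even-parity bit to each row (a single parity-check code,
    used here only as an illustrative component code)."""
    parity = bits.sum(axis=1, keepdims=True) % 2
    return np.concatenate([bits, parity], axis=1)

def product_encode(message_block):
    """Encode a k x k message block: component code on rows, then on columns."""
    rows_coded = spc_encode(message_block)   # k x (k+1): row codes applied
    full = spc_encode(rows_coded.T).T        # (k+1) x (k+1): column codes applied
    return full

msg = np.array([[1, 0, 1],
                [0, 1, 1],
                [1, 1, 0]])
cw = product_encode(msg)
print(cw)
```

Every row and every column of the resulting codeword satisfies its component-code constraint, which is what lets the decoder alternate row-wise and column-wise passes, exchanging per-bit confidence values between the two directions as extrinsic information.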