| dc.description.abstract | Wireless sensor networks are now widely used in a variety of security, surveillance, industrial, and civilian applications. In such networks, the individual nodes are spatially separated and sense correlated data. Since the sensor nodes cannot communicate with each other, the correlation among the spatially distributed nodes cannot be exploited during encoding with traditional source-coding techniques. At the decoder, however, the correlation can be exploited by joint decoding. The theory of separate encoding with joint decoding was established in the 1970s by Slepian and Wolf [34] for lossless coding and by Wyner and Ziv [35] for lossy coding. Slepian–Wolf coding and Wyner–Ziv coding are collectively known as distributed source coding (DSC). Until 2003, there were no constructive schemes to realize these coding methods; in 2003, Pradhan and Ramchandran [37] showed that distributed source coding can be realized using channel codes.
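For two correlated sources X and Y that are encoded separately and decoded jointly, the Slepian–Wolf result states that lossless recovery is possible whenever the individual and sum rates satisfy the following bounds (a standard statement of the theorem, included here only for reference):

```latex
R_X \ge H(X \mid Y), \qquad
R_Y \ge H(Y \mid X), \qquad
R_X + R_Y \ge H(X, Y)
```

In particular, at the corner point where Y is available losslessly at the decoder, X can be compressed down to its conditional entropy H(X|Y); this is the operating point considered later in the thesis.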
For channel codes, encoding is simple while decoding is computationally demanding. The paradigm of source coding with channel codes therefore yields a low-complexity source encoder and a computationally intensive decoder. This exchange of computational complexity finds applications in low-complexity transmission of multimedia data (such as video and images) in the wireless domain. Most research in DSC concentrates on exploiting the correlation between two sources in sensor networks, or the temporal redundancy in the case of distributed video coding (DVC). Source redundancy is hardly considered for DSC. However, multimedia data such as video frames and images contain considerable spatial redundancy (or source redundancy) and may be modelled as nonuniform sources [94].
As already mentioned, DSC techniques use channel codes for compression, and it is known that a good channel code is also a good compression code for DSC. LDPC codes [1] are presently regarded as Shannon-limit-approaching error-correcting codes. Hence, this thesis investigates LDPC codes for distributed source coding of nonuniform sources.
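In syndrome-based Slepian–Wolf coding, on which this line of work builds, the encoder compresses a length-n source block x to its (n − k)-bit syndrome s = Hx (mod 2), and the decoder recovers x from s and the side information y. The sketch below illustrates only the encoder side with a toy parity-check matrix; the matrix H_toy and the helper name are illustrative and not taken from the thesis.

```python
import numpy as np

# Toy parity-check matrix of a (7, 4) code; any LDPC parity-check
# matrix H of size (n - k) x n could be used in its place.
H_toy = np.array([
    [1, 0, 1, 0, 1, 0, 1],
    [0, 1, 1, 0, 0, 1, 1],
    [0, 0, 0, 1, 1, 1, 1],
], dtype=np.uint8)

def sw_encode(x, H):
    """Slepian-Wolf encoder: map a length-n source block x to its
    (n - k)-bit syndrome s = H x (mod 2). Compression ratio n : (n - k)."""
    return (H @ x) % 2

x = np.array([1, 0, 1, 1, 0, 0, 1], dtype=np.uint8)  # source block
s = sw_encode(x, H_toy)   # 3 bits transmitted instead of 7
print(s)
```

The decoder then searches, within the coset indexed by s, for the sequence closest to the side information y under the assumed correlation model; with LDPC codes this search is carried out by belief-propagation decoding.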
The thesis first proposes a low-complexity post-processing algorithm to improve the bit-error-rate (BER) performance of LDPC codes. The proposed algorithm gives an appreciable coding gain over belief-propagation decoding for LDPC codes of rate 1/2 or less. The gains are modest (0.2 dB to 0.25 dB at a BER of 10??) for regular LDPC codes optimized for bipartite-graph conditioning, and large (0.87 dB to 2.5 dB at a BER of 10??) for unoptimized regular LDPC codes. The algorithm is therefore useful for relaxing some of the stringent constraints on the graphical structure in LDPC code design and for developing hardware-friendly designs.
Besides performance improvement, the performance evaluation of channel codes plays an important role in DSC research. Since the minimum distance of a linear block code is one of the key parameters indicating its error performance, the thesis proposes a novel algorithm to find the minimum distance of linear block codes with rate greater than or equal to 1/2. In the proposed algorithm, the roles of the information set and the parity set are reversed to obtain, in effect, a second information set. For an (80, 45) linear block code, this method is 67.7 times faster than the minimum-distance algorithm implemented in the MAGMA Computational Algebra System. Even though the algorithm may not find the minimum distance of practical (long) LDPC codes in reasonable time, it offers a significant speed-up for short-length linear block codes.
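For context, the minimum distance of a linear code equals the minimum Hamming weight over all nonzero codewords. The sketch below is only the naive baseline that any improved algorithm competes against: it enumerates all 2^k − 1 nonzero messages of a small code given its generator matrix (here an illustrative (7, 4) Hamming generator), which is feasible only for short codes and is not the algorithm proposed in the thesis.

```python
import itertools
import numpy as np

def min_distance_bruteforce(G):
    """Minimum distance of a binary linear code = minimum Hamming weight
    over all nonzero codewords c = m G (mod 2). Exponential in k, so this
    is practical only for short codes."""
    k, n = G.shape
    best = n
    for m in itertools.product([0, 1], repeat=k):
        if not any(m):
            continue                      # skip the all-zero message
        c = (np.array(m, dtype=np.uint8) @ G) % 2
        best = min(best, int(c.sum()))    # Hamming weight of the codeword
    return best

# Illustrative generator matrix of the (7, 4) Hamming code (d_min = 3).
G = np.array([
    [1, 0, 0, 0, 1, 1, 0],
    [0, 1, 0, 0, 1, 0, 1],
    [0, 0, 1, 0, 0, 1, 1],
    [0, 0, 0, 1, 1, 1, 1],
], dtype=np.uint8)
print(min_distance_bruteforce(G))  # -> 3
```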
The minimum-distance computation requires computing generalized inverses over finite fields. However, the literature on generalized inverses over finite fields is limited compared with that for the real field. Hence, Appendix A discusses the generalized inverse over finite fields vis-à-vis the real field, highlighting their differences and similarities, and presents an algorithm for computing the generalized inverse over finite fields using Gauss–Jordan elimination. Further, a memory-efficient architecture for computing and storing the generalized inverse is proposed.
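To illustrate the idea (not the appendix's specific algorithm or architecture), the following sketch computes a generalized inverse over GF(2): row and column operations reduce A to P A Q = diag(I_r, 0), after which G = Q diag(I_r, 0) P satisfies A G A = A.

```python
import numpy as np

def generalized_inverse_gf2(A):
    """Return G with A @ G @ A == A (mod 2), via Gauss-Jordan elimination.

    Row/column operations reduce A to P A Q = [[I_r, 0], [0, 0]];
    then G = Q [[I_r, 0], [0, 0]] P is a generalized inverse of A.
    """
    A = np.array(A, dtype=np.uint8) % 2
    m, n = A.shape
    P = np.eye(m, dtype=np.uint8)     # accumulated row operations
    Q = np.eye(n, dtype=np.uint8)     # accumulated column operations
    R = A.copy()
    r = 0                             # rank found so far
    while r < min(m, n):
        rows, cols = np.nonzero(R[r:, r:])
        if rows.size == 0:            # remaining submatrix is all zero
            break
        i, j = rows[0] + r, cols[0] + r
        R[[r, i], :] = R[[i, r], :]; P[[r, i], :] = P[[i, r], :]   # row swap
        R[:, [r, j]] = R[:, [j, r]]; Q[:, [r, j]] = Q[:, [j, r]]   # column swap
        for k in range(m):            # clear the pivot column
            if k != r and R[k, r]:
                R[k, :] ^= R[r, :]; P[k, :] ^= P[r, :]
        for k in range(n):            # clear the pivot row
            if k != r and R[r, k]:
                R[:, k] ^= R[:, r]; Q[:, k] ^= Q[:, r]
        r += 1
    D = np.zeros((n, m), dtype=int)   # diag(I_r, 0) of size n x m
    D[:r, :r] = np.eye(r, dtype=int)
    return (Q.astype(int) @ D @ P.astype(int)) % 2

# Quick check on a rank-deficient binary matrix.
A = np.array([[1, 1, 0, 1],
              [0, 1, 1, 0],
              [1, 0, 1, 1]], dtype=np.uint8)
G = generalized_inverse_gf2(A)
assert np.array_equal((A.astype(int) @ G @ A.astype(int)) % 2, A)
```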
Multimedia data is highly redundant and may be modelled as a nonuniform source. With this in view, the thesis investigates the compression of nonuniform sources under the DSC paradigm. In the literature, turbo codes have been considered for the compression of nonuniform sources under DSC, but without much success. Arithmetic codes have also been considered for nonuniform sources under DSC, but the use of LDPC codes has not yet been explored. This thesis therefore examines the possibility of employing LDPC codes for the compression of nonuniform sources.
Individual LDPC codes are analyzed with Monte Carlo simulations under the Slepian–Wolf framework, assuming that the side information available at the decoder is lossless (i.e., at the corner points of the Slepian–Wolf rate region). Simulation thresholds are determined for different source distributions; these thresholds indicate the maximum crossover probability (between the source and the side information) up to which lossless decoding succeeds. The analysis reveals several interesting observations. LDPC codes are found to be well suited for the compression of nonuniform sources and closely approach the Slepian–Wolf bound. Further, the conditional entropy H(X|Y) of the source X given the side information Y, evaluated at the simulation threshold, turns out to be almost constant, i.e., effectively independent of the source distribution. This observation can be used to compute simulation thresholds for different source distributions without running Monte Carlo simulations, and it is later exploited in a proposed facsimile image compression method based on Slepian–Wolf coding. The Monte Carlo results show that highly biased sources can be compressed to within 0.049 bits/sample of the Slepian–Wolf bound for moderate block lengths (10,000 bits).
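As an illustration of how such thresholds relate to an analytical quantity, the snippet below evaluates H(X|Y) for a Bernoulli(p) source whose side information is generated through a binary symmetric channel with crossover probability q, the usual correlation model in these simulations; the specific values are illustrative, not the thesis's.

```python
import numpy as np

def h2(p):
    """Binary entropy function in bits."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * np.log2(p) - (1 - p) * np.log2(1 - p)

def cond_entropy(p, q):
    """H(X|Y) for X ~ Bernoulli(p) and Y = X xor N, N ~ Bernoulli(q):
    H(X|Y) = H(X) + H(Y|X) - H(Y) = h(p) + h(q) - h(p(1-q) + (1-p)q)."""
    py1 = p * (1 - q) + (1 - p) * q   # P(Y = 1)
    return h2(p) + h2(q) - h2(py1)

# Example: a highly biased source with mildly noisy side information.
print(cond_entropy(p=0.1, q=0.05))
```

If H(X|Y) at the decoding threshold is roughly constant across source distributions, as observed in the thesis, inverting this expression for q at a new p gives the corresponding threshold without further simulation.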
In LDPC coding theory, individual codes are seldom analyzed; rather, ensembles of LDPC codes are analyzed. Density evolution is one of the tools for analyzing the ensemble-average performance of LDPC codes. Hence, several capacity-approaching regular and irregular LDPC code ensembles are analyzed for the Slepian–Wolf coding problem using discretized density evolution. This analysis takes advantage of the vast body of high-quality LDPC designs available for error-correction applications. The computed thresholds show that nonuniform sources can be compressed to within about 0.01 bits/sample of the Slepian–Wolf bound even for highly decorrelated side information, indicating that existing LDPC codes are well suited, and nearly optimal, for the Slepian–Wolf coding problem. A critical comparative analysis of LDPC codes and turbo codes for compression of nonuniform sources under Slepian–Wolf coding is also carried out, highlighting several interesting and surprising facts about turbo codes in the syndrome-former / inverse-syndrome-former (SF–ISF) setup.
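Density evolution tracks the distribution of the messages exchanged during belief-propagation decoding across iterations and declares as the threshold the worst channel parameter for which the error probability still converges to zero. The discretized version used in the thesis operates on quantized LLR densities; the sketch below shows only the much simpler erasure-channel recursion x_{l+1} = ε·λ(1 − ρ(1 − x_l)) for a degree-distribution pair (λ, ρ), as a self-contained illustration of the threshold-search idea (the regular (3, 6) ensemble is used purely as an example).

```python
def density_evolution_bec(eps, lam, rho, iters=2000, tol=1e-12):
    """BEC density evolution: return True if the erasure probability of
    variable-to-check messages converges to (almost) zero at erasure rate eps.

    lam(x), rho(x) are the edge-perspective degree-distribution polynomials."""
    x = eps
    for _ in range(iters):
        x = eps * lam(1.0 - rho(1.0 - x))
        if x < tol:
            return True
    return False

def threshold_bec(lam, rho, lo=0.0, hi=1.0, steps=40):
    """Bisection search for the largest erasure rate that still decodes."""
    for _ in range(steps):
        mid = 0.5 * (lo + hi)
        if density_evolution_bec(mid, lam, rho):
            lo = mid
        else:
            hi = mid
    return lo

# Regular (3, 6) ensemble: lambda(x) = x^2, rho(x) = x^5.
print(threshold_bec(lambda x: x**2, lambda x: x**5))  # approx. 0.4294
```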
The principles of distributed source coding have previously been applied successfully to the compression of a single source (e.g., video) by swapping the computational complexities of the encoder and decoder [40]. The success of LDPC codes for the compression of nonuniform sources naturally motivated the application of Slepian–Wolf coding to a single facsimile image. Under the DSC paradigm, the correlation between image scan lines is exploited at the decoder, where DSC is combined with context modelling: side information is generated from previously decoded image lines using context modelling. A novel weighted context-based iterative decoding of LDPC codes is proposed, in addition to context-based initialization of the log-likelihood ratio values in LDPC decoding. On average, context modelling yields an increase of 0.60 in compression ratio. This contribution is a significant first step toward applying Slepian–Wolf coding to facsimile image compression and toward making it error-resilient. | |
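To make the decoder-side idea concrete, the following sketch shows one plausible way of turning context-model probabilities into prior LLRs for LDPC decoding of a scan line; the context definition (the pixel above and the previously decoded left pixel) and the probability table are illustrative assumptions, not the weighted scheme proposed in the thesis.

```python
import numpy as np

def context_llrs(prev_line, decoded_prefix, p_table, eps=1e-3):
    """Prior LLRs for the pixels of the current scan line.

    For pixel i the context is (pixel above, previously decoded left pixel);
    p_table[context] is an assumed estimate of P(pixel = 0 | context), and the
    prior LLR is log(P(0|context) / P(1|context)).  Convention: positive LLR
    favours bit 0."""
    llrs = []
    left = 0                                   # assumed boundary pixel
    for i, above in enumerate(prev_line):
        p0 = p_table.get((int(above), int(left)), 0.5)
        p0 = min(max(p0, eps), 1.0 - eps)      # keep the LLR finite
        llrs.append(np.log(p0 / (1.0 - p0)))
        # during real decoding 'left' would be the freshly decoded pixel;
        # here the already-decoded prefix is reused if available
        left = decoded_prefix[i] if i < len(decoded_prefix) else above
    return np.array(llrs)

# Illustrative context table: white (0) neighbours strongly predict white, etc.
p_table = {(0, 0): 0.98, (0, 1): 0.6, (1, 0): 0.6, (1, 1): 0.05}
prev_line = np.array([0, 0, 1, 1, 0])
print(context_llrs(prev_line, decoded_prefix=[], p_table=p_table))
```

Such priors would then seed belief-propagation decoding of the syndrome-coded line, with the decoded pixels feeding the context model for the next line.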