This thesis presents results on error-correcting codes for settings such as (a) distributed storage, (b) private information retrieval, and (c) low-latency streaming. It also presents two new decoding algorithms for Polar codes in a low-memory setting.
Codes for Distributed Storage: Current distributed storage systems (DSS) store huge volumes of data, running into several exabytes (an exabyte is a billion GB), where recovery of failed or unavailable units such as disks, nodes, or racks is a common operation. To ensure reliable and efficient storage, erasure codes are increasingly preferred over replication, owing to the smaller storage overheads possible with erasure codes for the same reliability guarantees. When an [n,k] erasure code is used to store a file, the file is partitioned into k data chunks, (n-k) parity chunks are computed from the k data chunks, and the n chunks are then stored across n nodes. It has been observed in DSSs that the most common failure is that of a single node among the n nodes. However, classical erasure codes are not very efficient in performing repair of a single failed node. This led to a new class of erasure codes, called regenerating codes (RGCs), that aim to minimize the repair bandwidth, i.e., the amount of data communicated during repair of a failed node. In an RGC, a data file composed of B symbols from a finite field is encoded into n*alpha symbols and stored across n nodes, with each node storing alpha symbols. An RGC needs to satisfy two properties: (1) data collection and (2) node repair. The data collection property requires that by contacting any k out of the n nodes and downloading alpha symbols from each, one should be able to recover the B symbols of the file. From this it is clear that B <= k*alpha. The node repair property requires that, given any node failure, by downloading beta <= alpha symbols from any d of the remaining n-1 nodes, one should be able to reconstruct alpha symbols such that the data collection and node repair properties continue to hold.
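As a back-of-the-envelope illustration of the repair-bandwidth savings, the sketch below (with hypothetical parameters n=14, k=10, d=13, alpha=4, beta=1; not a construction from the thesis) compares naive repair via data collection, which reads alpha symbols from each of k nodes, against RGC repair, which downloads beta symbols from each of d helpers:

```python
def naive_repair_download(k, alpha):
    """Repair via data collection: read alpha symbols from each of k nodes."""
    return k * alpha

def rgc_repair_download(d, beta):
    """RGC repair: download beta symbols from each of d helper nodes."""
    return d * beta

# Hypothetical RGC parameters with beta < alpha.
n, k, d = 14, 10, 13
alpha, beta = 4, 1

print(naive_repair_download(k, alpha))  # 40 symbols communicated
print(rgc_repair_download(d, beta))     # 13 symbols communicated
```

Even though more nodes are contacted (d = 13 instead of k = 10), the total download drops because each helper contributes only beta symbols.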
The subclass of RGCs in which the node repair property requires the recovered node to be exactly the same as the failed node is called exact-repair regenerating codes (ER-RGCs), while the general case is referred to as functional-repair regenerating codes (FR-RGCs). Given the parameter set (n, k, d, B), there exists a tradeoff between alpha, the storage per node, and d*beta, the repair bandwidth. This tradeoff will be referred to as the storage-repair-bandwidth (SRB) tradeoff. While the SRB tradeoff is completely characterized for the FR case, it remains an open problem for the ER case. In this thesis we restrict our attention to ER-RGCs. A subclass of RGCs called minimum storage regenerating (MSR) codes is of particular interest: MSR codes minimize the storage overhead, i.e., B = k*alpha, and are therefore maximum distance separable (MDS) codes by the data collection property. Among MDS codes, MSR codes minimize the repair bandwidth, which is given by d*beta = d*alpha/(d-k+1). Some additional attributes desirable in an MSR code are (1) the optimal access property, whereby the beta symbols communicated during repair of a failed node are the only symbols accessed directly from that node, (2) a small field size q, and (3) a small sub-packetization level alpha.
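The MSR operating point can be checked numerically. The sketch below (hypothetical parameters, with alpha chosen divisible by d-k+1 so that beta = alpha/(d-k+1) is an integer) evaluates the file size B = k*alpha and the optimal repair bandwidth d*alpha/(d-k+1):

```python
from fractions import Fraction

def msr_repair_bandwidth(d, k, alpha):
    """Optimal MSR repair bandwidth d*beta, where beta = alpha/(d-k+1)."""
    return Fraction(d * alpha, d - k + 1)

# Hypothetical MSR parameters; alpha is divisible by d-k+1 = 4.
n, k, d = 14, 10, 13
alpha = 4
B = k * alpha  # minimum-storage point: file size B = k*alpha

print(B, msr_repair_bandwidth(d, k, alpha))  # 40 13
```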
Contribution (a): We present the Coupled Layer (Clay) code construction for any (n, k, d) parameters, with sub-packetization alpha = s^{ceil(n/s)} where s = d-k+1, and show that it is an optimal access MSR code when d = n-1. We also show that for the case d < n-1, Clay codes satisfy the MDS property and can achieve the optimal repair bandwidth of d*alpha/(d-k+1) under a mild restriction on the set of d helper nodes. These codes can be constructed over any finite field of size q > n-1 and have the optimal sub-packetization level.
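For concreteness, the sub-packetization level can be evaluated directly (reading s^{n/s} as s^{ceil(n/s)} when s does not divide n). The snippet below, with a helper function of our own naming, uses the (n=20, k=16, d=19) parameter set that appears in the Ceph evaluation:

```python
from math import ceil

def clay_subpacketization(n, k, d):
    """Clay-code sub-packetization: alpha = s^ceil(n/s), with s = d-k+1."""
    s = d - k + 1
    return s ** ceil(n / s)

# Parameter set used in the thesis's Ceph evaluation.
print(clay_subpacketization(20, 16, 19))  # s = 4, alpha = 4^5 = 1024
```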
Contribution (b): We implemented the Clay code construction in Ceph, a popular open-source distributed storage system. Our contributions to Ceph involved enabling vector-code support and providing the Clay code as an erasure coding option. These changes are available in Ceph's Nautilus release, making Clay codes the first instance of regenerating codes being made available through an open-source framework. We evaluated the performance of the Clay code in comparison with Reed-Solomon (RS) codes on an Amazon EC2 cluster for both small-size and large-size workloads. The evaluations show that the theoretical guarantees of Clay codes are realized in practice. We observed that the repair time of a single node when an (n=20, k=16, d=19) Clay code is used is 3x smaller than with an RS code of the same parameters.
Contribution (c): The Clay code construction is an MSR code only for d = n-1 and carries a mild helper-node restriction when d < n-1. We therefore present an MSR construction for the small-d regime, d = k+1, k+2, k+3. This construction satisfies the optimal access property and has the lowest possible sub-packetization level for an optimal access MSR code. Furthermore, a field size of O(n) suffices to construct these codes.
Contribution (d): We present an improved upper bound on the file size of an ER-RGC. This leads to an improved outer bound on the ER SRB tradeoff when d > k, in certain regimes of (alpha, beta).
Codes for Private Information Retrieval: Private Information Retrieval (PIR) corresponds to retrieving a symbol from a database comprising B symbols without revealing any information about the index of the symbol being retrieved. The communication complexity of a PIR protocol is the number of bits sent through queries and answers in order to retrieve a symbol privately from a database. It was shown by Chor et al. that to achieve PIR with a single-server database, the communication complexity must be at least B. It was also shown that by using two non-communicating replicated servers it is possible to lower the communication complexity to O(B^{1/3}). Several PIR protocols followed that further reduced the communication complexity for t >= 2 replicated servers. However, all these schemes require a storage overhead of at least 2.
In order to reduce the storage overhead, Fazeli, Vardy, and Yaakobi introduced the notion of a PIR code and showed that this class of codes permits PIR protocols developed for the replicated-server setting to be used in a coded setting, thereby achieving significant savings in the amount of storage needed.
In an (n, k) t-server PIR code, the requirement is that every message symbol be recoverable from each of t disjoint recovery sets.
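A minimal toy example of the recovery-set requirement (illustrative only, not a construction from the thesis): the (n=3, k=2) binary single-parity code below gives each message symbol t = 2 disjoint recovery sets.

```python
def encode(x1, x2):
    """Toy (n=3, k=2) binary code: two systematic symbols plus one parity."""
    return [x1, x2, x1 ^ x2]

def recover_x1(c):
    """Two disjoint recovery sets for x1: {c[0]} and {c[1], c[2]}."""
    return c[0], c[1] ^ c[2]

# Every message value is recoverable from both disjoint sets, so t = 2.
for x1 in (0, 1):
    for x2 in (0, 1):
        via_set1, via_set2 = recover_x1(encode(x1, x2))
        assert via_set1 == x1 and via_set2 == x1
```

By symmetry, x2 also has two disjoint recovery sets ({c[1]} and {c[0], c[2]}), so this is a 2-server PIR code.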
Contribution (e): We first show that the subcode of the Reed-Muller (RM) code generated by evaluations of monomials of degree r in m variables results in a PIR code with dimension k = {m choose r} and t = 2^{m-r}. This subcode is systematic and will be referred to as the Shortened Reed-Muller (SRM) code. We then provide a shortening algorithm for the SRM code that yields constructions of systematic PIR codes for any t = 2^l or 2^l - 1, where l is a positive integer, and any k. These codes have lower storage overhead than the known short-block-length codes in the literature. We present a lower bound on the block length of a systematic PIR code and show that for t = 3, 4, the codes constructed here are optimal with respect to this bound. The shortening algorithm also yields an upper bound on the Generalized Hamming Weight (GHW) hierarchy of the SRM code. The optimality of the shortening algorithm is established by proving a matching lower bound on the GHW hierarchy of the SRM code, which in turn characterizes the complete GHW hierarchy of the SRM code. The proofs of the lower bound on the GHW hierarchy adapt ideas from Wei's derivation of the GHWs of RM codes.
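The SRM parameters can be tabulated directly from the formulas above; the sketch below (for a hypothetical choice m = 5, r = 2) computes the dimension k = {m choose r} and the PIR parameter t = 2^{m-r}:

```python
from math import comb

def srm_pir_parameters(m, r):
    """PIR-code parameters of the SRM code: k = C(m, r), t = 2^(m-r)."""
    return comb(m, r), 2 ** (m - r)

# Hypothetical choice m = 5, r = 2.
print(srm_pir_parameters(5, 2))  # (10, 8)
```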
Codes for Low-Latency Communications: This part of the thesis addresses packet-level forward error correction (FEC) schemes aimed at recovering from packet drops under a strict decoding-delay constraint. Specifically, the channel model considered here is a delay-constrained sliding-window (DCSW) channel with parameters (a, b, t, w). In an (a, b, t, w)-DCSW channel, any window of w consecutive packets contains either at most a random erasures or a burst of consecutive erasures of length at most b. The packet with index i must be recoverable by accessing the unerased packets with indices up to i+t. It was shown by Badr et al. that the rate R achievable on an (a, b, t, w)-DCSW channel is upper bounded as:
R <= (t+1-a)/(t+1-a+b) = Ropt
Without loss of generality, we can set w = t+1. An (a, b, t) streaming code is a packet-level FEC scheme that can recover from all erasure patterns permitted by the (a, b, t, t+1)-DCSW channel. Streaming codes achieving Ropt were constructed by Krishnan et al. These constructions employ a diagonal-embedding framework and have field size quadratic in the delay parameter t. The same paper also contains four additional constructions with linear field size, but only for the restricted cases: (i) b = a + 1, (ii) (t + a + 1) >= 2b >= 4a, (iii) a | b | (t + 1 - a), and (iv) b = 2a - 1 and b | (t + 2 - a).
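As a quick numerical check of the rate bound, the snippet below evaluates Ropt = (t+1-a)/(t+1-a+b) for a hypothetical channel with a = 2, b = 4, and decoding delay t = 10:

```python
from fractions import Fraction

def r_opt(a, b, t):
    """Rate upper bound of Badr et al. for the (a, b, t, t+1)-DCSW channel."""
    return Fraction(t + 1 - a, t + 1 - a + b)

# Hypothetical channel: a = 2 random erasures or a burst of b = 4,
# decoding delay t = 10 (window w = t+1 = 11).
print(r_opt(2, 4, 10))  # 9/13
```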
Contribution (f): The contributions of the current thesis in this area are the following.
1. It is shown how replacing the earlier diagonal-embedding approach with staggered-diagonal embedding (SDE) reduces the burden of erasure recovery placed on the underlying block code C, leading to simpler streaming-code constructions with smaller block length and reduced field size. Under SDE, the n code symbols of C are dispersed across a span of N packets; we refer to N as the dispersion-span parameter.
2. The SDE approach yields (a, b, t) streaming codes for all parameters. These codes require a field size less than t+1 (i.e., linear in t) and have decoding complexity equivalent to that of decoding an MDS code of block length at most t+1. Furthermore, when either b | (t+1-a) or b = a, the resulting codes can be shown to be rate-optimal with respect to Ropt.
3. We explore the limits of the SDE approach when the dispersion-span parameter N is restricted to be no larger than t+1. Limiting N further reduces the requirements placed on the scalar block code C, thereby simplifying code design. We determine the maximum rate Rmax achievable under the restriction N <= t+1 and show that this rate is always achievable using an MDS code of redundancy a. Of significant practical interest is the fact that in some instances this maximum rate Rmax is also achievable by a binary streaming code.
Decoding Algorithms for Polar Codes: Polar codes were shown by Arikan to achieve the capacity of binary discrete memoryless channels (B-DMCs) under the successive cancellation decoding (SCD) algorithm. Though the SCD algorithm is computationally very efficient, with complexity O(N log N) and memory O(N) for block length N, its frame error rate can be improved further. Decoding algorithms such as Successive Cancellation List Decoding (SCLD) and Successive Cancellation Stack Decoding (SCS) were proposed in the literature to improve the frame error rate, at the cost of increased complexity and memory.
Contribution (g): We take a fresh look at the SCD algorithm and try to improve its performance while retaining the O(N) memory requirement. We do so by proposing two decoding algorithms: (1) Successive Cancellation with Back-Tracking (SC-BT) and (2) Successive Cancellation with Look-Ahead (SC-LA). The computational complexity of SC-BT varies with the signal-to-noise ratio (SNR), whereas SC-LA has an approximate complexity of O((2^D/D) N log N), where D is a parameter of the algorithm that is independent of N.
These algorithms compete with SCLD at smaller block lengths but require substantially more computation to compete with SCLD at larger block lengths. We extend the SC-LA algorithm to list decoding, where it uses O(LN) memory and has an approximate complexity of O((2^D/D) LN log N). The extended algorithm improves the frame error rate (FER) over SCLD with the same list size (memory).
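To make the complexity comparison concrete, the sketch below (with hypothetical N and D) evaluates the ratio of the approximate SC-LA operation count to that of plain SC, which by the formulas above is simply 2^D/D:

```python
from math import log2

def sc_ops(N):
    """Approximate successive-cancellation operation count: N log N."""
    return N * log2(N)

def sc_la_ops(N, D):
    """Approximate SC-LA operation count from the thesis: (2^D / D) N log N."""
    return (2 ** D / D) * N * log2(N)

# Hypothetical block length and look-ahead parameter.
N, D = 1024, 4
print(sc_la_ops(N, D) / sc_ops(N))  # 2^D / D = 4.0
```

Since D is independent of N, the overhead factor 2^D/D stays constant as the block length grows, consistent with the stated complexity.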