Design of Placement Delivery Arrays for Shared Cache and Multi-antenna Coded Caching Problems
Abstract
Caching is considered a promising technique for handling the exponential growth of wireless data traffic. Caches distributed across the network are used to shift some of the peak-hour traffic to off-peak hours, and the broadcast nature of the network is exploited to deliver the contents efficiently. An information-theoretic study of a cache-aided network revealed that coded delivery can provide further gains on top of the local caching benefit (“Fundamental limits of caching,” IEEE Transactions on Information Theory, 2014). That seminal work considered a dedicated cache network, in which a server with a library of files is connected to a set of users, each possessing an individual cache. This thesis explores the coded caching approach in different settings and addresses some of the challenges that must be overcome in practice. We first consider a shared cache network, in which a server serves a group of users with the assistance of a smaller set of helper nodes that act as caches. Each user is associated with exactly one helper cache, and each helper cache can serve an arbitrary number of users. The fundamental limits and an achievable scheme are known for the shared cache network (“Fundamental limits of coded caching with multiple antennas, shared caches and uncoded prefetching,” IEEE Transactions on Information Theory, 2020). However, the achievable scheme requires a subpacketization level (the number of subfiles into which each file is divided) that grows exponentially with the number of caches, making it infeasible in practice. We design coded caching schemes for shared cache systems with lower subpacketization requirements than the existing scheme by utilizing a combinatorial structure called a placement delivery array. We then extend these results to a setting where the server is equipped with multiple transmit antennas.
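To illustrate why subpacketization matters, consider the classical dedicated-cache scheme of Maddah-Ali and Niesen, in which each file is split into C(K, t) subfiles, where K is the number of users and t = KM/N is the cache fraction times K. The snippet below (an illustrative sketch, not a scheme from this thesis) shows how quickly this count grows:

```python
from math import comb

def mn_subpacketization(K: int, M: int, N: int) -> int:
    """Subpacketization of the Maddah-Ali--Niesen scheme:
    each file is split into C(K, t) subfiles with t = K*M/N.
    Illustrative only; assumes K*M/N is an integer."""
    t = K * M // N
    return comb(K, t)

# Cache fraction M/N = 1/2, so t = K/2:
for K in (10, 20, 40):
    print(K, mn_subpacketization(K, 1, 2))
# 10 -> 252, 20 -> 184756, 40 -> 137846528820
```

Even at forty users, a file must already be split into over 10^11 subfiles, which motivates the lower-subpacketization designs pursued in this thesis.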
Next, we consider a setting similar to the shared cache network in which each user additionally has a private cache. We present coded caching schemes for this network model under different constraints and characterize their performance. This network model emerged as a consequence of ensuring secrecy in the shared cache network: under the secrecy constraint, no user may gain any information, from the cache contents or the transmissions, about the contents of files it did not request. Therefore, to ensure secrecy in the shared cache setting, each user must have additional private storage. In the first work on secretive coded caching with shared caches, the private caches stored only the keys used to encode the file contents, and hence the private cache size is always taken to be one file size. We also study this network model under the secrecy constraint and derive schemes for it using placement delivery array structures. We then consider the multi-access coded caching network, where each user accesses more than one helper cache according to a specific, well-defined association between users and caches. We consider two types of multi-access coded caching networks based on the user-to-cache connectivity: one follows a combinatorial association, and the other a consecutive, cyclic wrap-around association. In the combinatorial multi-access setting, the number of caches is smaller than the number of users, whereas in the latter the numbers of caches and users are equal. We study both types of networks in the multiple transmit antenna setting and present schemes for them. Multiple transmit antennas are advantageous as they offer a multiplexing gain alongside the coded caching gain and, in some instances, help tackle the huge subpacketization requirement. Lastly, we shift our focus to MapReduce-based coded distributed computing schemes.
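The role of the key-filled private cache can be made concrete with a one-time-pad sketch: a file subpacket is XORed with a random key of the same length, so only a user holding the key can recover it. This is a minimal illustration of the general idea, with hypothetical names, not the encoding used by any particular scheme in the thesis:

```python
import os

def otp_encode(subfile: bytes, key: bytes) -> bytes:
    """One-time-pad encoding: XOR the subfile with an equal-length
    random key. Illustrative sketch of key-based secrecy only."""
    assert len(key) == len(subfile)
    return bytes(a ^ b for a, b in zip(subfile, key))

subfile = b"subpacket"
key = os.urandom(len(subfile))      # stored in the user's private cache
cipher = otp_encode(subfile, key)   # what the cache/transmission carries

# XOR is its own inverse: the intended user decodes with its key.
assert otp_encode(cipher, key) == subfile
```

Since every transmitted subpacket must be padded by such a key, the private storage needed is one file size, matching the unit private cache size assumed above.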
A MapReduce-based distributed computing system involves three phases: Map, Shuffle, and Reduce. The system consists of a set of distributed nodes assigned to compute arbitrary output functions that depend on a file library. The computation of each output function is decomposed into Map and Reduce functions, linked by the Shuffle phase, in which the nodes exchange data. We consider a wireless MapReduce distributed computing framework in which the distributed nodes exchange information over a full-duplex wireless interference network to complete their assigned tasks. Among one-shot linear schemes, the known optimal scheme for this setting requires the number of files to be very large. We propose a wireless MapReduce distributed computing scheme that requires a smaller number of files and achieves the same performance as the existing one. The scheme is derived using the tools we developed for the multi-antenna settings in cache-aided networks.
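To fix ideas, the three phases can be sketched on a single machine with a toy word count (the function names and the tiny file library are ours, chosen for illustration; in the distributed setting each node would map only its local files and the Shuffle phase would run over the wireless network):

```python
from collections import defaultdict

files = {"f1": "a b a", "f2": "b c"}

def map_phase(text):
    """Map: turn a local file into intermediate (key, value) pairs."""
    return [(word, 1) for word in text.split()]

def shuffle_phase(pairs):
    """Shuffle: exchange and group intermediate values by key."""
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    """Reduce: compute each output function from its grouped values."""
    return {key: sum(values) for key, values in groups.items()}

pairs = [p for text in files.values() for p in map_phase(text)]
counts = reduce_phase(shuffle_phase(pairs))
print(counts)  # {'a': 2, 'b': 2, 'c': 1}
```

The Shuffle phase dominates the communication cost, which is precisely what coded distributed computing schemes, including the one proposed here, aim to reduce.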