
dc.contributor.advisor: Seelamantula, Chandra Sekhar
dc.contributor.author: Jawali, Dhruv
dc.date.accessioned: 2023-02-07T07:08:35Z
dc.date.available: 2023-02-07T07:08:35Z
dc.date.submitted: 2022
dc.identifier.uri: https://etd.iisc.ac.in/handle/2005/5999
dc.description.abstract: The problem of filter design is ubiquitous. Frequency-selective filters are used in speech/audio processing, image analysis, and convolutional neural networks for tasks such as denoising, deblurring/deconvolution, enhancement, and compression. While traditional filter design methods use a structured optimization formulation, the advent of deep learning techniques and the associated tools and toolkits enables filters to be learnt through data-driven optimization. In this thesis, we consider the filter design problem in a learning setting, in both data-dependent and data-independent flavors. Data-dependent filters have properties governed by a downstream task, for instance, filters in a convolutional dictionary used for image denoising. In contrast, data-independent filters have constraints imposed on their frequency responses, such as being lowpass, having diamond-shaped support, satisfying the perfect reconstruction property, or being able to generate wavelet functions. The contributions of this thesis are four-fold: (i) the formulation of filter, filterbank, and wavelet design as regression problems, allowing them to be designed in a learning framework; (ii) the design of contourlet-based scattering networks for image classification; (iii) the design of a deep unfolded network using composite regularization techniques for solving inverse problems in image processing; and (iv) a multiscale dictionary learning algorithm that learns one or more multiscale generator kernels to parsimoniously explain certain neural recordings. We begin by developing learning approaches for designing filters with data-independent specifications, for instance, filters with a specified frequency response, including ideal filters. The problem of designing such filters is formulated as a regression problem, using a training set comprising cosine signals with frequencies sampled uniformly at random. The filters are optimized using the mean-squared error (MSE) loss, and generalization bounds are provided. We demonstrate the applicability of our approach for lowpass, bandpass, and highpass filters in 1-D, and diamond-, fan-, and checkerboard-support filters in 2-D. We then show how the methodology extends readily to the design of 1-D and 2-D cosine-modulated filterbanks. Second, we consider the problems of 1-D filterbank and wavelet design through learning. Wavelets have proven to be highly successful in several signal and image processing applications, and wavelet design has been an active field of research for over two decades, with the problem often being approached analytically. We draw a parallel between convolutional autoencoders and wavelet multiresolution approximation, and show how the learning perspective provides a coherent computational framework for solving the design problem. We design data-independent wavelets by interpreting the corresponding perfect reconstruction filterbanks as autoencoders (which we refer to as “filterbank autoencoders”), thereby precluding the need for customized datasets. In fact, we show that it is possible to design them efficiently using high-dimensional Gaussian vectors as training data. Generalization bounds show that a near-zero training loss implies that the learnt filters satisfy the perfect reconstruction property with very high probability.
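The following is a minimal sketch (in PyTorch) of the regression formulation described above: a learnable FIR filter is trained on randomly drawn cosines so that its output matches that of an ideal lowpass filter. The filter length, cutoff, optimizer, and iteration count are illustrative assumptions, not the settings used in the thesis.

```python
import math
import torch

torch.manual_seed(0)

N, L = 256, 31              # signal length and filter length (illustrative)
cutoff = 0.25 * math.pi     # desired lowpass cutoff, in radians (illustrative)

taps = torch.zeros(L, requires_grad=True)        # FIR coefficients to be learnt
opt = torch.optim.Adam([taps], lr=1e-2)
n = torch.arange(N, dtype=torch.float32)

for step in range(2000):
    # Training example: a cosine whose frequency is drawn uniformly in (0, pi)
    omega = torch.rand(1) * math.pi
    phi = torch.rand(1) * 2 * math.pi
    x = torch.cos(omega * n + phi)

    # Regression target: the response of the ideal lowpass filter to a pure cosine
    target = x if omega.item() < cutoff else torch.zeros_like(x)

    # Filter the input with the current taps (centered, "same"-length convolution)
    y = torch.nn.functional.conv1d(
        x.view(1, 1, -1), taps.view(1, 1, -1), padding=L // 2
    ).view(-1)

    loss = torch.mean((y - target) ** 2)         # mean-squared error loss
    opt.zero_grad()
    loss.backward()
    opt.step()

# After training, `taps` approximates a (zero-phase) lowpass filter with the
# requested cutoff; bandpass or highpass designs follow by changing `target`.
```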
We show that desirable properties of a wavelet, such as orthogonality, compact support, smoothness, symmetry, and vanishing moments, can all be incorporated into the proposed framework by means of architectural constraints or by introducing suitable regularization functionals into the MSE cost. Notably, our approach not only recovers the well-known Daubechies family of orthogonal wavelets and the Cohen-Daubechies-Feauveau (CDF) family of symmetric biorthogonal wavelets, which are used in JPEG-2000 compression, but also learns new wavelets outside these families. Third, we extend the ideas used for 1-D filterbank and wavelet learning to 2-D filterbank and wavelet design. A variety of efficient representations of natural images, such as wavelets and contourlets, can be formulated as corresponding filterbank design problems. The design constraints on the continuous-domain wavelets have corresponding filter-domain manifestations. While most learning problems require specialized datasets, we employ 2-D random Gaussian matrices as training data and optimize the filter coefficients with respect to the MSE loss. Design specifications such as orthogonality of the filterbank, the perfect reconstruction property, symmetry, and vanishing moments are enforced through an appropriate parameterization of the convolutional units. We demonstrate several examples of learning biorthogonal and orthogonal filterbanks and wavelets having a specified number of vanishing moments, both point and directional, as well as symmetry constraints. The data-independent filter design technique is then employed to learn a contourlet transform used within a hybrid scattering network. Hybrid scattering networks are convolutional neural networks (CNNs) in which the first few layers implement a fixed windowed scattering transform, while the rest of the network is learned. Scattering networks outperform state-of-the-art deep learning models on limited-data classification tasks, although the performance gains diminish for large datasets. The 2-D Morlet filterbank used in Mallat's scattering network is replaced by a contourlet filterbank, which provides sparser representations and better directional separation in the frequency domain. The contourlet transform comprises a multiresolution pyramidal filterbank cascaded with directional filters. We construct the directional filters using diamond-shaped quincunx filterbanks and consider two pyramidal filter variants: square-shaped filters, and filters with radially isotropic frequency-domain support.
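As a companion to the filterbank-autoencoder formulation of the preceding paragraphs, the sketch below trains a two-channel analysis/synthesis filterbank on Gaussian vectors with an MSE reconstruction loss; a near-zero loss corresponds to near perfect reconstruction. Circular convolution, the filter length, and the optimizer settings are simplifying assumptions for illustration, not the thesis's construction.

```python
import torch

torch.manual_seed(0)

N, K = 64, 8    # signal length and filter length (illustrative)

def circ_filter(x, h):
    """Circularly convolve a length-N signal x with filter h via the FFT."""
    return torch.fft.ifft(torch.fft.fft(x) * torch.fft.fft(h, n=x.numel())).real

def upsample2(y):
    """Zero-insertion upsampling by a factor of 2."""
    return torch.stack([y, torch.zeros_like(y)], dim=1).reshape(-1)

# Analysis filters (h0, h1) and synthesis filters (g0, g1) to be learnt.
h0, h1, g0, g1 = (torch.randn(K, requires_grad=True) for _ in range(4))
opt = torch.optim.Adam([h0, h1, g0, g1], lr=1e-2)

for step in range(5000):
    x = torch.randn(N)                       # Gaussian training vector

    # Analysis: filter and downsample by 2 in each channel
    y0 = circ_filter(x, h0)[::2]
    y1 = circ_filter(x, h1)[::2]

    # Synthesis: upsample, filter, and add the two channels
    x_hat = circ_filter(upsample2(y0), g0) + circ_filter(upsample2(y1), g1)

    loss = torch.mean((x_hat - x) ** 2)      # reconstruction (MSE) loss
    opt.zero_grad()
    loss.backward()
    opt.step()

# A near-zero training loss indicates that (h0, h1, g0, g1) form a near
# perfect reconstruction pair; wavelet-specific properties would be imposed by
# adding regularizers to `loss` or by reparameterizing the filters.
```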
The performance of all variants is evaluated on natural image classification tasks using the CIFAR-10 and ImageNet datasets. We show that the radial contourlet variant achieves performance competitive with the Morlet scattering transform on large-dataset classification tasks, while performing better in the limited-dataset scenario. We then turn to the problem of learning data-dependent filters for sparse recovery by employing a combination of sparsity-promoting regularizers. Sparse recovery via such composite regularization is an interesting framework proposed recently in the literature: one can design non-convex regularizers through a convex combination of sparsity-promoting penalties with known proximal operators. We develop a new algorithm, namely, the convolutional proximal-averaged thresholding algorithm (C-PATA), for composite-regularized convolutional sparse coding (CR-CSC), based on the recently proposed idea of proximal averaging. We then develop an autoencoder structure based on the deep unfolding of C-PATA iterations into neural network layers, which results in the composite-regularized neural network (CoRNet) architecture; the convolutional learned iterative soft-thresholding algorithm (LISTA) becomes a special case of CoRNet. We demonstrate the efficacy of CoRNet on image denoising and inpainting, and compare its performance with state-of-the-art techniques such as BM3D, convolutional LISTA, and fast and flexible convolutional sparse coding (FFCSC). Finally, we conclude by developing a data-dependent method to learn filters that generate a multiscale convolutional dictionary. The multiscale convolutional dictionary learning (MCDL) algorithm is proposed to extract a representative waveform shape from a given dataset. The proposed algorithm is based on the popular convolutional dictionary learning formulation with a crucial difference: we assume that the learned atoms are scaled versions of a single generator kernel. We evaluate kernel recovery on synthetic data under noiseless and noisy conditions. A smoothness regularizer on the learned atom is used to aid kernel recovery under noisy conditions. Kernel recovery is shown to be robust to the model choices of scales and the assumed support size of the kernel, without any restrictive assumptions. The proposed approach is applied to visualizing the typical patterns present within human electrocorticogram (ECoG) measurements; the validation is carried out using publicly available ECoG data recorded from a single Parkinson's disease patient. This thesis thus presents a cogent framework for learning filters, filterbanks, wavelets, and convolutional and multiscale dictionaries.
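The sketch below illustrates the proximal-averaging idea underlying C-PATA on a toy 1-D convolutional sparse coding problem: each iteration takes a gradient step on the data-fidelity term and then applies a convex combination of known proximal operators (soft and hard thresholding are used here purely as examples of sparsity-promoting penalties). The penalties, step sizes, and problem setup are illustrative assumptions rather than the thesis's exact formulation.

```python
import numpy as np

def prox_l1(v, t):
    """Soft thresholding: proximal operator of t * ||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def prox_l0(v, t):
    """Hard thresholding: proximal operator of t * ||.||_0."""
    return np.where(np.abs(v) > np.sqrt(2.0 * t), v, 0.0)

def proximal_averaged_step(z, grad, eta, t, alphas):
    """Gradient step followed by a convex combination of proximal operators."""
    v = z - eta * grad
    return alphas[0] * prox_l1(v, eta * t) + alphas[1] * prox_l0(v, eta * t)

# Toy convolutional sparse coding problem: recover a sparse code z from
# x = d * z (single known atom d, 1-D, noiseless) by iterating the step above.
rng = np.random.default_rng(0)
d = rng.standard_normal(9)
d /= np.linalg.norm(d)                          # unit-norm atom
z_true = np.zeros(128)
z_true[[20, 70, 100]] = [1.5, -2.0, 1.0]
x = np.convolve(z_true, d, mode="same")

z = np.zeros_like(z_true)
eta, t, alphas = 0.05, 0.05, (0.5, 0.5)         # step size, penalty weight, prox weights
for _ in range(300):
    r = np.convolve(z, d, mode="same") - x               # residual d*z - x
    grad = np.convolve(r, d[::-1], mode="same")          # gradient of 0.5*||d*z - x||^2
    z = proximal_averaged_step(z, grad, eta, t, alphas)

# Unfolding a fixed number of such steps into network layers, with the filters,
# thresholds, and combination weights made learnable, gives a CoRNet-style
# unfolded architecture; with alphas = (1, 0) the step reduces to ISTA/LISTA.
```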
dc.language.iso: en_US
dc.rights: I grant Indian Institute of Science the right to archive and to make available my thesis or dissertation in whole or in part in all forms of media, now or hereafter known. I retain all proprietary rights, such as patent rights. I also retain the right to use in future works (such as articles or books) all or part of this thesis or dissertation.
dc.subject: Machine Learning
dc.subject: Deep Learning
dc.subject: Signal Processing
dc.subject: Wavelet Theory
dc.subject: Dictionary Learning
dc.subject: Filter design
dc.subject.classification: Research Subject Categories::MATHEMATICS::Other mathematics
dc.title: Learning Filters, Filterbanks, Wavelets and Multiscale Representations
dc.type: Thesis
dc.degree.name: PhD
dc.degree.level: Doctoral
dc.degree.grantor: Indian Institute of Science
dc.degree.discipline: Faculty of Science

