On Plug-and-Play Regularization using Linear Denoisers
Abstract
The problem of inverting a measurement model comes up in several computational imaging applications. For example, in CT and MRI, we are required to reconstruct a high-resolution image from incomplete noisy measurements, whereas in super-resolution and deblurring, we try to infer the ground-truth image from low-resolution or blurred observations. Traditionally, this is done by minimizing $f + \phi$, where $f$ is a data-fidelity (or loss) function determined by the acquisition process, and $\phi$ is a regularization (or penalty) function based on a subjective prior on the target image. The solution is computed numerically using iterative algorithms such as ISTA or ADMM.
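For concreteness, with a linear acquisition model (a representative instance; the quadratic data term and the step size $\gamma$ below are the standard textbook choices, not specific to this thesis), the reconstruction problem and the ISTA update read

$$ \min_{x}\ f(x) + \phi(x), \qquad f(x) = \tfrac{1}{2}\|Ax - b\|^2, $$

$$ x_{k+1} = \mathrm{prox}_{\gamma\phi}\bigl(x_k - \gamma \nabla f(x_k)\bigr), \qquad \mathrm{prox}_{\gamma\phi}(y) = \operatorname*{arg\,min}_{x}\ \tfrac{1}{2}\|x - y\|^2 + \gamma\,\phi(x), $$

where $A$ is the measurement operator, $b$ the observed data, and $\gamma > 0$ a step size.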
While several forms of regularization and associated optimization methods have been proposed in the imaging literature over the last few decades, the use of denoisers (also known as denoising priors) for image regularization is a relatively recent phenomenon. This has partly been triggered by advances in image denoising over the last 20 years, leading to powerful image denoisers such as BM3D and DnCNN. In this thesis, we look at a recent protocol called Plug-and-Play (PnP) regularization, where image denoisers are deployed within iterative algorithms for image regularization. PnP consists of replacing the proximal map of the regularizer $\phi$, an analytical operator at the core of ISTA and ADMM, with an image denoiser. This is motivated by the intuition that off-the-shelf denoisers such as BM3D and DnCNN offer better image priors than traditional hand-crafted regularizers such as total variation. While PnP does not use an explicit regularizer, it still makes use of the data-fidelity function $f$. However, since the replacement of the proximal map with a denoiser is ad hoc, the optimization perspective is lost: it is not clear if the PnP iterations can be interpreted as minimizing some objective function $f + \phi$. Remarkably, PnP reconstructions are of high quality and competitive with state-of-the-art methods. This has led researchers to ask why plugging a denoiser within an inversion algorithm should work in the first place, why it produces high-quality images, and whether the final reconstruction is optimal in some sense.
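Schematically, for the ISTA case (a sketch; $\mathrm{D}$ stands for the plugged denoiser, e.g. BM3D or DnCNN), the substitution reads

$$ x_{k+1} = \mathrm{prox}_{\gamma\phi}\bigl(x_k - \gamma \nabla f(x_k)\bigr) \ \longrightarrow\ x_{k+1} = \mathrm{D}\bigl(x_k - \gamma \nabla f(x_k)\bigr), $$

i.e., the proximal step associated with $\phi$ is carried out by the denoiser; the analogous substitution in ADMM is applied to the proximal (denoising) step of its regularizer update.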
In this thesis, we attempt to answer such questions, some of which have been the topic of active research in the imaging community in recent years. Specifically, we consider the following questions.
--> Fixed-point convergence: Under what conditions does the sequence of iterates generated by a PnP algorithm converge? Moreover, are these conditions met by existing real-world denoisers?
--> Optimality and objective convergence: Can we interpret PnP as an algorithm that minimizes $f + \phi$ for some appropriate $\phi$? Moreover, does the algorithm converge to a solution of this objective function?
--> Exact and robust recovery: Under what conditions can we recover the ground-truth exactly via PnP? And is the reconstruction robust to noise in the measurements?
While early work on PnP has attempted to answer some of these questions, many of the underlying assumptions are either strong or unverifiable. This is essentially because denoisers such as BM3D and DnCNN are mathematically complex, nonlinear, and difficult to characterize. A first step towards understanding a complex nonlinear phenomenon is often to study a linear approximation of it. In this spirit, we focus our attention on denoisers that are linear. In fact, there exists a broad class of real-world denoisers that are linear and perform reasonably well; examples include kernel filters (e.g. NLM, bilateral filter) and their symmetrized counterparts. This class has a simple characterization that keeps the analysis tractable and the assumptions verifiable. Our main contributions lie in resolving the aforementioned questions for PnP algorithms where the plugged denoiser belongs to this class. To fix ideas, a minimal sketch of a kernel denoiser is given below; our contributions are then summarized.
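The sketch below assumes a 1D signal and a value-based Gaussian affinity for brevity; the bandwidth h and the particular symmetric normalization are illustrative choices and not the exact filters analyzed in the thesis. The point is simply that the denoiser is a row-stochastic matrix W = D^{-1}K acting linearly on its input.

```python
import numpy as np

def kernel_denoiser(x, h=0.3):
    """Row-stochastic kernel filter W = D^{-1} K for a 1D signal x.
    K holds Gaussian affinities between sample values (an NLM-like
    weighting on single samples, for brevity). Returns (W, W @ x)."""
    diff2 = (x[:, None] - x[None, :]) ** 2     # pairwise squared differences
    K = np.exp(-diff2 / h**2)                  # symmetric, nonnegative affinities
    D = K.sum(axis=1)                          # degrees (row sums)
    W = K / D[:, None]                         # row-stochastic, non-symmetric in general
    return W, W @ x

def symmetrized(K):
    """One common symmetric counterpart: D^{-1/2} K D^{-1/2}."""
    D = K.sum(axis=1)
    return K / np.sqrt(np.outer(D, D))

# Usage: denoise a noisy step signal and compare mean squared errors.
rng = np.random.default_rng(0)
clean = np.concatenate([np.zeros(50), np.ones(50)])
noisy = clean + 0.1 * rng.standard_normal(clean.size)
W, denoised = kernel_denoiser(noisy)
print("MSE (noisy)   :", np.mean((noisy - clean) ** 2))
print("MSE (denoised):", np.mean((denoised - clean) ** 2))
```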
--> We prove fixed-point convergence of the PnP version of ISTA under mild assumptions on the measurement model (a toy illustration appears in the sketch after this list).
--> Based on the theory of proximal maps, we prove that a PnP algorithm in fact minimizes a convex objective function $f + \phi$, subject to some algorithmic modifications that arise from the algebraic properties of the denoiser. Notably, unlike previous results, our analysis applies to non-symmetric linear filters.
--> Under certain verifiable assumptions, we prove that a signal can be recovered exactly (resp. robustly) from clean (resp. noisy) measurements using PnP regularization. More significantly, in the spirit of classical compressed sensing, we derive probabilistic guarantees on exact and robust recovery for the compressed sensing problem where the sensing matrix is random. An implication of our analysis is that the range of the linear denoiser plays the role of a signal prior, and its dimension essentially controls the size of the set of recoverable signals. In particular, we derive the sample complexity of compressed sensing as a function of the distortion error and success rate.
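As a toy illustration of the first and last contributions, the following minimal sketch runs PnP-ISTA with a fixed linear kernel denoiser on a small compressed sensing problem and monitors the fixed-point residual $\|x_{k+1} - x_k\|$ along with the recovery error. The sensing model, kernel bandwidth, step size, and stopping rule are illustrative choices and not the exact setup analyzed in the thesis.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy compressed sensing setup: m random Gaussian measurements of a smooth signal.
n, m = 100, 40
t = np.linspace(0, 1, n)
x_true = np.sin(2 * np.pi * t)                    # smooth ground truth (nearly preserved by W below)
A = rng.standard_normal((m, n)) / np.sqrt(m)      # random sensing matrix
b = A @ x_true                                    # clean (noiseless) measurements

# Linear denoiser: row-stochastic Gaussian kernel filter on the sample grid.
K = np.exp(-(t[:, None] - t[None, :]) ** 2 / 0.02 ** 2)
W = K / K.sum(axis=1, keepdims=True)

# PnP-ISTA: gradient step on f(x) = 0.5 * ||Ax - b||^2, followed by the denoiser W.
gamma = 1.0 / np.linalg.norm(A, 2) ** 2           # step size 1/L, with L = ||A||_2^2
x = np.zeros(n)
for k in range(2000):
    x_next = W @ (x - gamma * A.T @ (A @ x - b))  # denoise the gradient-descent step
    residual = np.linalg.norm(x_next - x)         # fixed-point residual ||x_{k+1} - x_k||
    x = x_next
    if residual < 1e-6:                           # stop once the iterates settle numerically
        break

print(f"iterations: {k + 1}, fixed-point residual: {residual:.2e}")
print("relative recovery error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```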
We validate our theoretical findings numerically, discuss their implications, and point to possible directions for future research.