Fast and Robust Biomedical Image Reconstruction from Nonuniform Samples
Abstract
We consider the problem of reconstructing images from non-uniformly under-sampled spatial
point measurements, with emphasis on robustness to noise. The computational methods that
deal with this problem are known as scattered data approximation (SDA) methods. Among
these, the well-performing methods achieve the reconstruction by minimizing a cost that is a weighted
sum of a data fidelity term, measuring the accuracy of the fit at the measurement locations, and
a regularization term. The latter term incorporates a certain smoothness, and is constructed by
summing the squared derivative values of a chosen order. The relative weight between these
two terms is known as the smoothing parameter. Prominent methods in this category are the
thin-plate spline (TPS) and radial basis function (RBF) methods, and they require solving
large, numerically ill-conditioned and/or dense linear systems of equations. The subspace variational
method alleviates the numerical instability and the computational complexity associated with
the TPS and RBF methods. However, this approach involves solving a large and sparse linear
system of equations, which requires specialized numerical methods.
In the first part of the thesis, we propose a novel method for SDA that eliminates the need
for solving dense linear systems of equations, and even the need for storing the matrix representing
the linear system. This is achieved by handling the reconstruction problem in two stages. In
the first stage, the given non-uniform data are transformed into a pair of regular grid images,
where one image represents the measured samples and the other represents the sample density
map. In the second stage, the required image is computed as the minimizer of a cost that
is expressed completely in terms of regular grid discrete operations. The cost is a sum of a
weighted quadratic data fitting term involving the transformed image pair and a discrete
quadratic roughness functional. Computing the minimizer of this cost involves solving a well-conditioned
sparse linear system of equations, where the system matrix is represented in terms of
filtering and array multiplications, without the need for storing it explicitly. We demonstrate that
the proposed method, which we name regular grid weighted smoothing (RGWS), has much
lower computational complexity than the TPS and RBF methods, with only a small compromise in
the reconstruction quality.
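The two-stage idea above can be sketched in code. The following is a minimal, hypothetical illustration, not the thesis implementation: stage one bins scattered samples onto a regular grid (an accumulated-sample image and a density map), and stage two minimizes a weighted data term plus a quadratic roughness term by conjugate gradient, applying the system matrix only through filtering and array multiplications. The Laplacian-based roughness operator and all parameter values are assumptions for illustration.

```python
import numpy as np

def grid_scatter(xs, ys, vals, shape):
    """Stage 1: bin scattered samples onto a regular grid.
    Returns the accumulated-sample image and the sample-density map."""
    acc, cnt = np.zeros(shape), np.zeros(shape)
    ix = np.clip(np.round(xs).astype(int), 0, shape[0] - 1)
    iy = np.clip(np.round(ys).astype(int), 0, shape[1] - 1)
    np.add.at(acc, (ix, iy), vals)
    np.add.at(cnt, (ix, iy), 1.0)
    return acc, cnt

def laplacian(f):
    """Discrete Laplacian via shifts (periodic boundary); a stand-in
    roughness operator, symmetric so L applied twice equals L^T L."""
    return (np.roll(f, 1, 0) + np.roll(f, -1, 0)
            + np.roll(f, 1, 1) + np.roll(f, -1, 1) - 4.0 * f)

def rgws_solve(acc, cnt, lam=1.0, iters=300):
    """Stage 2: minimize sum_i w_i (f_i - y_i)^2 + lam * ||L f||^2
    by conjugate gradient. The matrix A = W + lam * L^T L is applied
    only through filtering and array multiplication, never stored."""
    def A(f):
        return cnt * f + lam * laplacian(laplacian(f))
    b = acc  # equals W y, since acc accumulates the raw sample values
    f = np.zeros_like(b)
    r = b - A(f)
    p = r.copy()
    rs = (r * r).sum()
    for _ in range(iters):
        Ap = A(p)
        alpha = rs / (p * Ap).sum()
        f += alpha * p
        r -= alpha * Ap
        rs_new = (r * r).sum()
        if rs_new < 1e-12:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return f
```

With constant-valued samples the exact minimizer is the constant image, which the iteration recovers; the point of the sketch is that no matrix is ever formed.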
RGWS uses quadratic regularization, which is known to yield over-smoothed images in the
presence of noise. We extend the RGWS method by incorporating non-quadratic regularization,
constructed by applying a square root to the sum of squares of derivative values (known as ℓ1 regularization). We propose a reconstruction method using this ℓ1 regularization,
which we name ℓ1-RGWS. We perform an extensive set of reconstruction experiments with
various levels of under-sampling and noise, and compare the performance of ℓ1-RGWS with that of the
original RGWS, which we also call ℓ2-RGWS.
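The qualitative difference between the two penalties can be seen in a toy 1-D denoising problem. The sketch below is a hypothetical illustration, not the thesis algorithm: it minimizes a data term plus a penalty on derivative magnitudes, with p=2 giving quadratic (ℓ2) regularization and p=1 approximating ℓ1 via iteratively reweighted least squares; the ℓ1 penalty preserves a step edge that the quadratic penalty smears out. The dense solve and the parameter values are assumptions chosen for brevity.

```python
import numpy as np

def denoise_1d(y, lam, p, iters=50):
    """Minimize ||f - y||^2 + lam * sum |Df|^p for a 1-D signal,
    via iteratively reweighted least squares (p=2 needs one solve;
    p=1 reweights by 1/|Df| each pass). Illustrative, dense solve."""
    n = len(y)
    f = y.copy()
    for _ in range(iters):
        d = np.diff(f)
        # Quadratic: unit weights. l1: weights 1/|d| (floored for stability).
        w = np.ones(n - 1) if p == 2 else 1.0 / np.maximum(np.abs(d), 1e-6)
        # Assemble I + lam * D^T W D (tridiagonal; dense for brevity).
        A = np.eye(n)
        for i in range(n - 1):
            A[i, i] += lam * w[i]
            A[i + 1, i + 1] += lam * w[i]
            A[i, i + 1] -= lam * w[i]
            A[i + 1, i] -= lam * w[i]
        f = np.linalg.solve(A, y)
    return f
```

On a noisy step signal, the jump across the edge survives almost intact under the ℓ1-style penalty, while the quadratic penalty turns it into a ramp; this is the over-smoothing the text refers to.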
When the sampling density becomes low, the performance of ℓ1-RGWS degrades abruptly
and becomes worse than that of ℓ2-RGWS. This behavior is known in the literature as the phase
transition. We analyze this from a probabilistic viewpoint and infer that the prior probability model
corresponding to ℓ1 regularization is based on the assumption that the probability of a pixel location
taking a certain derivative value is independent of the derivative values of its neighboring pixel locations,
which is clearly not true. We develop a probability model where the error incurred by this
independence assumption is compensated by means of a multi-resolution based re-weighting
scheme. In this scheme, the desired reconstruction is formulated as a series of coarse-to-fine
multi-resolution reconstructions, and the re-weighting of the prior probability for each resolution
level is derived from the reconstruction of the previous resolution level. We demonstrate that the
new method, which we name the multiresolution based scattered data approximation (MSDA),
performs better than ℓ1-RGWS and ℓ2-RGWS over a wide range of sampling densities, with
only slightly increased computational complexity.
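The coarse-to-fine re-weighting loop can be sketched structurally. This is a hypothetical skeleton under assumed choices (dyadic pyramids built by 2x2 block summation, nearest-neighbour upsampling, and a placeholder weight rule that lightens the prior near strong derivatives); the actual reconstruction at each level is passed in as a callable, since the thesis's own solver is not reproduced here.

```python
import numpy as np

def upsample2(img):
    """Nearest-neighbour 2x upsampling."""
    return np.repeat(np.repeat(img, 2, axis=0), 2, axis=1)

def coarse_to_fine(acc, cnt, levels, reconstruct):
    """Coarse-to-fine re-weighting skeleton (illustrative): reconstruct
    at each resolution, then derive the next level's per-pixel prior
    weights from the current level's derivative magnitudes."""
    # Build pyramids by 2x2 block summation (samples and counts add up).
    accs, cnts = [acc], [cnt]
    for _ in range(levels - 1):
        a, c = accs[-1], cnts[-1]
        accs.append(a.reshape(a.shape[0]//2, 2, a.shape[1]//2, 2).sum((1, 3)))
        cnts.append(c.reshape(c.shape[0]//2, 2, c.shape[1]//2, 2).sum((1, 3)))
    weights = np.ones_like(accs[-1])  # uniform prior at the coarsest level
    f = None
    for a, c in zip(reversed(accs), reversed(cnts)):
        f = reconstruct(a, c, weights)
        if f.shape != acc.shape:
            # Re-weight the prior for the next (finer) level from this
            # reconstruction: weaken the penalty where derivatives are large.
            g = upsample2(f)
            gx = np.abs(np.diff(g, axis=0, append=g[-1:]))
            gy = np.abs(np.diff(g, axis=1, append=g[:, -1:]))
            weights = 1.0 / (1.0 + gx + gy)
    return f
```

The key structural point is that each level's prior weights are a function of the previous (coarser) level's reconstruction, which is how the independence assumption behind the ℓ1 prior gets corrected.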
We then develop an extended method where, instead of re-weighting the form of the prior
probability model corresponding to ℓ1 regularization, the probability model itself is determined
using the maximum entropy principle. Specifically, at each resolution level in the multi-resolution
reconstruction, the required probability model is determined as the maximizer of entropy subject
to constraints given by the information extracted from the lower resolution reconstruction. To
further enhance the performance, we use directional second derivative operators to define the
probability model. Moreover, to control the variance of this probability model, we also propose
a modified multiresolution scheme, where the image sizes increase by a fractional factor
instead of doubling. We demonstrate that the new method, which we call the maximum entropy
regularized reconstruction (MERR), outperforms both MSDA and ℓ1-RGWS for a wide range
of sampling densities and noise levels.
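The effect of a fractional growth factor on the multiresolution schedule is easy to see numerically. The snippet below is purely illustrative; the thesis does not state a specific factor, so 1.33 and the minimum size are assumptions. A fractional factor yields more, more closely spaced levels than dyadic doubling, giving finer control between resolutions.

```python
def schedule(n_final, factor, n_min=16):
    """Image sizes from coarsest to finest when each level shrinks the
    previous one by `factor`, stopping above a minimum size.
    Illustrative; factor and n_min are assumed values."""
    sizes = [n_final]
    while sizes[-1] / factor >= n_min:
        sizes.append(int(round(sizes[-1] / factor)))
    return sizes[::-1]
```

For a 256-pixel-wide target, doubling gives the familiar 16, 32, 64, 128, 256 schedule, while a factor of 1.33 produces roughly twice as many intermediate levels.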