New Algorithmic and Hardness Results in Learning, Error Correcting Codes and Constraint Satisfaction Problems
Abstract
Approximation algorithms are a natural way to deal with the intractability barrier that is
inherent in many naturally arising computational problems. However, it is often the case
that solving the approximation version of a problem is as hard as solving the exact version
itself, an observation that underlies the study of hardness of approximation. While the last
couple of decades have seen significant progress in terms of tight inapproximability results
for many important problems, there are still several fundamental problems for which the
best known hardness-of-approximation lower bounds remain quite far from the best known
upper bounds achievable by efficient algorithms.
In this thesis, we investigate this phenomenon in the context of several fundamental com-
putational problems in Learning Theory, Error Correcting Codes and Constraint Satisfaction
Problems (CSPs), along the following themes:
(1) Hardness of Learning using Thresholds: Improper learning – i.e., learning a concept
class using a different, and potentially easier to work with, hypothesis class – is often used
as a natural relaxation when the exact (proper) learning problem is intractable. In this
thesis, we show the NP-hardness of (improperly) learning natural concept classes such as
Halfspaces and Disjunctive Normal Forms using threshold functions (an illustrative
formulation is sketched after this list). Our results are tight with respect to efficiently
achievable approximation factors for learning using arbitrary functions of thresholds.
(2) Parameterized Complexity of Finding Sparse Solutions: Many natural problems
such as k-VectorSum and k-EvenSet can be modeled as finding a sparse solution to a
system of equations over finite fields (see the sketch after this list). We study these
problems in the framework of
parameterized complexity and give new (and often tight) lower bounds for exact and
approximation versions of these problems under various hypotheses. We also use these
results to establish new hardness results for learning sparse parities.
(3) Vertex Deletion CSPs: We study vertex deletion CSPs, a variant of CSPs in which the
objective is to delete the fewest vertices needed to recover a fully satisfiable sub-instance.
In particular, we give new algorithmic and hardness results for the StrongUniqueGames
problem, the vertex deletion variant of UniqueGames (formulated after this list),
establishing a connection with small-set vertex expansion en route. We also give new
algorithmic results for vertex deletion CSPs such as OddCycleTransversal,
BalancedVertexSeparator, and Partial-3-Coloring in more tractable settings, such as
when the underlying constraint graph has low threshold rank or is sampled semi-randomly.
(4) Testing Sparsity: We also investigate the following question: can one design efficient
algorithms for testing a property whose search/optimization version is intractable? We
study this question in the form of testing sparsity, where one simply wishes to check
whether a matrix admits a sparse factorization (see the sketch after this list). While the
optimization version of this problem (Dictionary Learning) is intractable without
distributional assumptions, we design an efficient algorithm for the property testing
formulation of the problem. We also extend our results to the noise-tolerant setting and to
settings where the basis of representation is known.
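To make the themes above concrete, we sketch illustrative formulations below; the notation (w, \theta, f, A, b, x, \pi, Y, X, s, etc.) is ours and is not fixed by the abstract. For theme (1), a halfspace over \mathbb{R}^n and a hypothesis that is an arbitrary function of k threshold functions can be written as
\[
  h_{w,\theta}(x) \;=\; \mathrm{sign}\big(\langle w, x\rangle - \theta\big),
  \qquad
  g(x) \;=\; f\big(h_{w_1,\theta_1}(x), \ldots, h_{w_k,\theta_k}(x)\big),
\]
where f is an arbitrary Boolean combining function; improper learning asks for such a hypothesis g that agrees with the (Halfspace- or DNF-labeled) samples on as large a fraction of the data as possible.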
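For theme (2), both k-VectorSum and k-EvenSet can be phrased (in our notation) as seeking a k-sparse solution to a linear system over a finite field:
\[
  \text{find } x \in \mathbb{F}_q^{\,n} \ \text{ with } \ \|x\|_0 \le k \ \text{ such that } \ Ax = b,
\]
where \|x\|_0 counts the nonzero coordinates of x; for k-EvenSet one may take q = 2, b = 0, and additionally require x \neq 0.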
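For theme (3), given a UniqueGames instance on a constraint graph G = (V, E) with label set [k] and a bijection \pi_{uv} on each edge, the vertex deletion objective of StrongUniqueGames can be sketched (again in our notation) as
\[
  \min_{S \subseteq V} \ |S|
  \quad \text{such that} \quad
  \exists\, \ell : V \setminus S \to [k] \ \text{ with } \ \ell(v) = \pi_{uv}(\ell(u)) \ \text{ for every edge } (u,v) \ \text{inside } V \setminus S,
\]
i.e., one deletes the fewest vertices so that the surviving sub-instance is fully satisfiable.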
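For theme (4), a matrix Y \in \mathbb{R}^{d \times n} admits a sparse factorization, in the usual dictionary learning sense, if
\[
  Y \;=\; A X, \qquad A \in \mathbb{R}^{d \times m}, \qquad \|X_{\cdot j}\|_0 \le s \ \text{ for every column } j,
\]
and the property testing formulation asks to distinguish matrices admitting such a factorization from those that are far (under a suitable notion of distance) from doing so.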