dc.contributor.advisor | Barman, Siddharth | |
dc.contributor.author | Ghoshal, Suprovat | |
dc.date.accessioned | 2021-07-13T04:20:40Z | |
dc.date.available | 2021-07-13T04:20:40Z | |
dc.date.submitted | 2021 | |
dc.identifier.uri | https://etd.iisc.ac.in/handle/2005/5197 | |
dc.description.abstract | Approximation algorithms are a natural way to deal with the intractability barrier inherent in many naturally arising computational problems. However, solving the approximate version of a problem is often as hard as solving the exact version itself, and establishing this is the underlying goal of the study of hardness of approximation. While the last couple of decades have seen significant progress in the form of tight inapproximability results for many important problems, there remain several fundamental problems for which the best known hardness-of-approximation lower bounds are still quite far from the best known upper bounds achievable by efficient algorithms.
In this thesis, we investigate this phenomenon in the context of several fundamental computational problems in Learning Theory, Error Correcting Codes and Constraint Satisfaction Problems (CSPs), along the following themes:
(1) Hardness of Learning using Thresholds: Improper learning – i.e., learning a concept class using a different, and potentially easier to handle, hypothesis class – is a natural relaxation that is often used when the exact (proper) learning problem is intractable. In this thesis, we show the NP-hardness of (improperly) learning natural concept classes such as Halfspaces and Disjunctive Normal Forms using threshold functions. Our results are tight with respect to the approximation factors efficiently achievable when learning using arbitrary functions of thresholds (a brief formulation sketch follows this item).
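As a brief sketch of this formulation (standard definitions, stated here for orientation rather than quoted from the thesis): a halfspace over $\mathbb{R}^n$ is a function
\[
  h_{w,\theta}(x) = \mathrm{sign}\big(\langle w, x \rangle - \theta\big), \qquad w \in \mathbb{R}^n,\; \theta \in \mathbb{R},
\]
and a hypothesis that is a function of thresholds takes the form $g\big(h_{w_1,\theta_1}(x), \ldots, h_{w_t,\theta_t}(x)\big)$ for some combining function $g : \{-1,1\}^t \to \{-1,1\}$.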
(2) Parameterized Complexity of Finding Sparse Solutions: Many natural problems, such as k-VectorSum and k-EvenSet, can be modeled as finding a sparse solution to a system of equations over a finite field (a formulation sketch follows this item). We study these problems in the framework of parameterized complexity and give new (and often tight) lower bounds for the exact and approximate versions of these problems under various hypotheses. We also use these results to establish new hardness results for learning sparse parities.
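As a sketch of this modeling (standard formulations, with notation that may differ from the thesis): in k-VectorSum one is given vectors $v_1, \ldots, v_n \in \mathbb{F}_q^d$ and a target $t \in \mathbb{F}_q^d$, and must decide whether some $k$ of the vectors sum to $t$; in k-EvenSet one is given a matrix $A \in \mathbb{F}_2^{m \times n}$ and must decide whether
\[
  Ax = 0 \quad \text{has a nonzero solution } x \in \mathbb{F}_2^n \text{ of Hamming weight at most } k.
\]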
(3) Vertex Deletion CSPs: We study vertex deletion CSPs, a variant of CSPs in which the objective is to delete the minimum number of vertices so that the remaining sub-instance is fully satisfiable. In particular, we give new algorithmic and hardness results for the StrongUniqueGames problem, the vertex deletion variant of UniqueGames, establishing a connection with small-set vertex expansion en route (a sketch of the UniqueGames formulation follows this item). We also give new algorithmic results for vertex deletion CSPs such as OddCycleTransversal, BalancedVertexSeparator and Partial-3-Coloring in more tractable settings, such as when the underlying constraint graph has low threshold rank or is sampled semi-randomly.
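As a rough sketch of the underlying formulation (standard definitions, not quoted from the thesis): a UniqueGames instance consists of a constraint graph $G = (V, E)$, a label set $[k]$, and a bijection $\pi_{uv} : [k] \to [k]$ for every edge $(u, v) \in E$; a labeling $x : V \to [k]$ satisfies edge $(u, v)$ when
\[
  \pi_{uv}(x_u) = x_v,
\]
and StrongUniqueGames asks for the minimum number of vertices whose deletion leaves an induced instance that is fully satisfiable.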
(4) Testing Sparsity: We also investigate the following question: can one design efficient algorithms for testing a property whose search/optimization version is intractable? We study this question in the form of testing sparsity, where one wishes to check whether a given matrix admits a factorization into a dictionary matrix and a sparse coefficient matrix. While the optimization version of this problem (Dictionary Learning) is intractable without distributional assumptions, we design an efficient algorithm for its property testing formulation. We also extend our results to the noise-tolerant setting and to settings where the basis of representation is known. | en_US |
dc.language.iso | en_US | en_US |
dc.rights | I grant Indian Institute of Science the right to archive and to make available my thesis or dissertation in whole or in part in all forms of media, now or hereafter known. I retain all proprietary rights, such as patent rights. I also retain the right to use in future works (such as articles or books) all or part of this thesis or dissertation | en_US |
dc.subject | Hardness of Approximation | en_US |
dc.subject | Approximation Algorithms | en_US |
dc.subject | Error Correcting Codes | en_US |
dc.subject.classification | Research Subject Categories::MATHEMATICS::Applied mathematics::Theoretical computer science | en_US |
dc.title | New Algorithmic and Hardness Results in Learning, Error Correcting Codes and Constraint Satisfaction Problems | en_US |
dc.type | Thesis | en_US |
dc.degree.name | PhD | en_US |
dc.degree.level | Doctoral | en_US |
dc.degree.grantor | Indian Institute of Science | en_US |
dc.degree.discipline | Engineering | en_US |