Show simple item record

dc.contributor.advisor: Bhattacharyya, Arnab
dc.contributor.author: Gadekar, Ameet
dc.date.accessioned: 2018-02-06T12:15:32Z
dc.date.accessioned: 2018-07-31T04:38:51Z
dc.date.available: 2018-02-06T12:15:32Z
dc.date.available: 2018-07-31T04:38:51Z
dc.date.issued: 2018-02-06
dc.date.submitted: 2016
dc.identifier.uri: https://etd.iisc.ac.in/handle/2005/3060
dc.identifier.abstract: http://etd.iisc.ac.in/static/etd/abstracts/3925/G27336-Abs.pdf [en_US]
dc.description.abstract: In this work, we study two problems: the first, learning sparse parities, is one of the central problems in learning theory; the other, k-Vector-SUM, is an extension of the notorious k-SUM problem. We first consider the problem of learning k-parities in the on-line mistake-bound model: given a hidden vector x ∈ {0,1}^n with |x| = k and a sequence of "questions" a_1, a_2, … ∈ {0,1}^n, where the algorithm must reply to each question a_i with ⟨a_i, x⟩ (mod 2), what is the best trade-off between the number of mistakes made by the algorithm and its time complexity? We improve the previous best result of Buhrman et al. by an exp(k) factor in the time complexity. Next, we consider the problem of learning k-parities in the presence of classification noise of rate η ∈ (0, 1/2). A polynomial-time algorithm for this problem (when η > 0 and k = ω(1)) is a longstanding challenge in learning theory. Grigorescu et al. showed an algorithm running in time (n^{k/2})^{1+4η²+o(1)}. Note that this algorithm inherently requires time n^{k/2} even when the noise rate η is polynomially small. We observe that for a sufficiently small noise rate, it is possible to break the n^{k/2} barrier. In particular, if for some function f(n) = ω(log n) and α ∈ [1/2, 1), k = n/f(n) and η = o(f(n)^{−α}/log n), then there is an algorithm for the problem with running time poly(n) · C(n,k)^{1−α} · e^{−k/4.01}. Moving on to the k-Vector-SUM problem: given n vectors ⟨v_1, v_2, …, v_n⟩ over the vector space F_q^d, a target vector t, and an integer k > 1, determine whether there exist k vectors whose sum is t, where the sum is taken over the field F_q. We show a parameterized reduction from the k-Clique problem to the k-Vector-SUM problem, thus establishing the hardness of k-Vector-SUM. In the language of parameterized complexity theory, our reduction shows that the k-Vector-SUM problem is hard for the class W[1]; Downey and Fellows have shown the W[1]-hardness of k-Vector-SUM using other techniques. Next, we show connections between k-Vector-SUM and k-LPN.
First, we prove that a variant of the k-Vector-SUM problem, called k-Noisy-SUM, is at least as hard as the k-LPN problem. This implies that any improvement for k-Noisy-SUM would yield improved algorithms for k-LPN. In our next result, we show a reverse reduction from k-Vector-SUM to k-LPN with a high noise rate. Providing lower bounds for the k-LPN problem is an open challenge, and many algorithms in cryptography have been developed assuming its hardness. Our reduction shows that k-LPN with noise rate η = 1/2 − (1/2) · n^k · 2^{−n^{(k−1)/k}} cannot be solved in time n^{o(k)} assuming the Exponential Time Hypothesis, thus providing a lower bound for a weaker version of k-LPN. [en_US]
dc.language.iso: en_US [en_US]
dc.relation.ispartofseries: G27336 [en_US]
dc.subject: Learning Parities [en_US]
dc.subject: k-LPN [en_US]
dc.subject: Intelligence in Algorithms [en_US]
dc.subject: Learning Theory [en_US]
dc.subject: k-Noisy-SUM [en_US]
dc.subject: Learning Parity [en_US]
dc.subject: Learning Sparse Parities [en_US]
dc.subject: Probably Approximately Correct (PAC) Model [en_US]
dc.subject: k-Vector-SUM [en_US]
dc.subject.classification: Computer Science [en_US]
dc.title: On Learning k-Parities and the Complexity of k-Vector-SUM [en_US]
dc.type: Thesis [en_US]
dc.degree.name: MSc Engg [en_US]
dc.degree.level: Masters [en_US]
dc.degree.discipline: Faculty of Engineering [en_US]
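The two problem definitions in the abstract can be illustrated with a minimal sketch: the parity answer ⟨a, x⟩ (mod 2) that the mistake-bound learner must produce, and a naive brute-force check for k-Vector-SUM over F_q^d. The helper names (`parity_label`, `k_vector_sum`) are illustrative only and do not correspond to the thesis's algorithms — the naive check runs in time O(n^k), which is exactly the barrier the thesis studies.

```python
from itertools import combinations

def parity_label(a, x):
    # The answer a mistake-bound learner must give to "question" a
    # for hidden vector x: the inner product <a, x> mod 2.
    return sum(ai & xi for ai, xi in zip(a, x)) % 2

def k_vector_sum(vectors, t, k, q):
    # Naive O(n^k) check: do some k of the given vectors sum to the
    # target t, coordinate-wise over the field F_q?
    for combo in combinations(vectors, k):
        s = tuple(sum(col) % q for col in zip(*combo))
        if s == tuple(t):
            return True
    return False

# Toy instance over F_3^2 with k = 2.
vs = [(1, 0), (2, 2), (0, 1)]
print(k_vector_sum(vs, (0, 2), 2, 3))            # (1,0)+(2,2) = (0,2) mod 3 -> True
print(parity_label((1, 1, 0, 1), (1, 0, 0, 1)))  # 1+0+0+1 mod 2 -> 0
```

For sparse parities (|x| = k with k much smaller than n), the brute-force analogue enumerates all C(n, k) candidate supports, which motivates the C(n,k)^{1−α}-type savings stated in the abstract.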

