On Learning k-Parities and the Complexity of k-VectorSUM
Abstract
In this work, we study two problems: the first, learning sparse parities, is one of the central problems in learning theory; the other, k-VectorSUM, is an extension of the notorious k-SUM problem. We first consider the problem of learning k-parities in the online mistake-bound model: given a hidden vector x ∈ {0,1}^n with |x| = k and a sequence of “questions” a_1, a_2, ... ∈ {0,1}^n, where the algorithm must reply to each question with 〈a_i, x〉 (mod 2), what is the best trade-off between the number of mistakes made by the algorithm and its time complexity? We improve the previous best result of Buhrman et al. by an exp(k) factor in the time complexity. Next, we consider the problem of learning k-parities in the presence of classification noise of rate η ∈ (0, 1/2). A polynomial-time algorithm for this problem (when η > 0 and k = ω(1)) is a long-standing challenge in learning theory. Grigorescu et al. showed an algorithm running in time (n choose k/2)^{1 + 4η² + o(1)}. Note that this algorithm inherently requires time (n choose k/2) even when the noise rate η is polynomially small. We observe that for a sufficiently small noise rate, it is possible to break the (n choose k/2) barrier. In particular, if for some function f(n) = ω(log n) and α ∈ [1/2, 1), k = n/f(n) and η = o(f(n)^{-α}/log n), then there is an algorithm for the problem with running time poly(n) · (n choose k)^{1-α} · e^{-k/4.01}. Moving on to the k-VectorSUM problem: given n vectors 〈v_1, v_2, ..., v_n〉 over the vector space F_q^d, a target vector t, and an integer k > 1, determine whether there exist k vectors whose sum is t, where the sum is taken over the field F_q. We show a parameterized reduction from the k-Clique problem to the k-VectorSUM problem, thus establishing the hardness of k-VectorSUM. In the language of parameterized complexity theory, our reduction shows that the k-VectorSUM problem is hard for the class W[1]; we note that Downey and Fellows have shown the W[1]-hardness of k-VectorSUM using other techniques. Next, we show connections between k-VectorSUM and k-LPN.
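To make the two problem statements concrete, here is a minimal Python sketch (illustrative only, not the algorithms of this thesis): a helper that answers a single parity question 〈a, x〉 (mod 2) in the mistake-bound model, and a naive brute-force decision procedure for k-VectorSUM that tries all (n choose k) subsets.

```python
from itertools import combinations

def parity_answer(a, x):
    """Answer one question in the online model: the inner
    product <a, x> over F_2 (i.e. mod 2)."""
    return sum(ai * xi for ai, xi in zip(a, x)) % 2

def k_vector_sum(vectors, t, k, q):
    """Brute-force k-VectorSUM: is there a choice of k of the
    given vectors in F_q^d whose coordinate-wise sum mod q
    equals the target t?  Runs in roughly O(n^k * d) time; the
    hardness results in the abstract concern whether this
    exponent can be substantially improved."""
    d = len(t)
    for combo in combinations(vectors, k):
        if all(sum(v[i] for v in combo) % q == t[i] % q for i in range(d)):
            return True
    return False
```

For example, over F_2 with vectors (1,0), (0,1), (1,1), target t = (1,1) and k = 2, this is a yes-instance, since (1,0) + (0,1) = (1,1) mod 2.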
First, we prove that a variant of the k-VectorSUM problem, called k-NoisySUM, is at least as hard as the k-LPN problem. This implies that any improvement for k-NoisySUM would yield improved algorithms for k-LPN. In our next result, we show a reverse reduction from k-VectorSUM to k-LPN with high noise rate. Proving lower bounds for the k-LPN problem is an open challenge, and many cryptographic constructions have been developed assuming its hardness. Our reduction shows that k-LPN with noise rate η = 1/2 − (1/2) · n^k · 2^{−n^{(k−1)/k}} cannot be solved in time n^{o(k)} assuming the Exponential Time Hypothesis, thus providing a lower bound for a weaker version of k-LPN.
Related items
Showing items related by title, author, creator and subject.

Approximate Dynamic Programming and Reinforcement Learning  Algorithms, Analysis and an Application
Lakshminarayanan, Chandrashekar (2018-08-13) Problems involving optimal sequential decision making in uncertain dynamic systems arise in domains such as engineering, science and economics. Such problems can often be cast in the framework of Markov Decision Process (MDP). ... 
Efficient Algorithms for Structured Output Learning
Balamurugan, P (2018-05-08) Structured output learning is the machine learning task of building a classifier to predict structured outputs. Structured outputs arise in several contexts in diverse applications like natural language processing, computer ... 
Model Extraction and Active Learning
Shukla, Aditya Machine learning models are increasingly being offered as a service by big companies such as Google, Microsoft and Amazon. They use Machine Learning as a Service (MLaaS) to expose these machine learning models to the ...