## On Learning k-Parities and the Complexity of k-Vector-SUM

dc.contributor.advisor | Bhattacharyya, Arnab | |

dc.contributor.author | Gadekar, Ameet | |

dc.date.accessioned | 2018-02-06T12:15:32Z | |

dc.date.accessioned | 2018-07-31T04:38:51Z | |

dc.date.available | 2018-02-06T12:15:32Z | |

dc.date.available | 2018-07-31T04:38:51Z | |

dc.date.issued | 2018-02-06 | |

dc.date.submitted | 2016 | |

dc.identifier.uri | http://etd.iisc.ac.in/handle/2005/3060 | |

dc.identifier.abstract | http://etd.iisc.ac.in/static/etd/abstracts/3925/G27336-Abs.pdf | en_US |

dc.description.abstract | In this work, we study two problems: the first, learning sparse parities, is a central problem in learning theory, and the second, k-Vector-SUM, is an extension of the notorious k-SUM problem. We first consider the problem of learning k-parities in the online mistake-bound model: given a hidden vector x ∈ {0,1}^n with |x| = k and a sequence of "questions" a_1, a_2, ... ∈ {0,1}^n, where the algorithm must reply to each question a_i with ⟨a_i, x⟩ (mod 2), what is the best trade-off between the number of mistakes made by the algorithm and its time complexity? We improve the previous best result of Buhrman et al. by an exp(k) factor in the time complexity. Next, we consider the problem of learning k-parities in the presence of classification noise of rate η ∈ (0, 1/2). A polynomial-time algorithm for this problem (when η > 0 and k = ω(1)) is a longstanding challenge in learning theory. Grigorescu et al. showed an algorithm running in time (n^{k/2})^{1+4η²+o(1)}. Note that this algorithm inherently requires time n^{k/2} even when the noise rate η is polynomially small. We observe that for sufficiently small noise rate, it is possible to break the n^{k/2} barrier. In particular, if for some function f(n) = ω(log n) and α ∈ [1/2, 1), k = n/f(n) and η = o(f(n)^{−α}/log n), then there is an algorithm for the problem with running time poly(n) · (n choose k)^{1−α} · e^{−k/4.01}. Moving on to the k-Vector-SUM problem: given n vectors ⟨v_1, v_2, ..., v_n⟩ over the vector space F_q^d, a target vector t, and an integer k > 1, determine whether there exist k vectors whose sum is t, where the sum is taken over the field F_q. We show a parameterized reduction from the k-Clique problem to the k-Vector-SUM problem, thus establishing the hardness of k-Vector-SUM. In parameterized complexity theory, our reduction shows that the k-Vector-SUM problem is hard for the class W[1]; Downey and Fellows had earlier shown the W[1]-hardness of k-Vector-SUM using other techniques. Next, we show connections between k-Vector-SUM and k-LPN.
First, we prove that a variant of the k-Vector-SUM problem, called k-Noisy-SUM, is at least as hard as the k-LPN problem. This implies that any improvement in k-Noisy-SUM would yield improved algorithms for k-LPN. In our next result, we show a reverse reduction from k-Vector-SUM to k-LPN with high noise rate. Proving lower bounds for the k-LPN problem is an open challenge, and many algorithms in cryptography have been developed assuming its hardness. Our reduction shows that k-LPN with noise rate η = 1/2 − (1/2) · n^k · 2^{−n^{(k−1)/k}} cannot be solved in time n^{o(k)} assuming the Exponential Time Hypothesis, thus providing a lower bound for a weaker version of k-LPN. | en_US |
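The two problem definitions in the abstract can be made concrete with a short sketch. The first function is the reply an online learner must produce for a question a_i, namely ⟨a_i, x⟩ (mod 2); the second is a naive brute-force decision procedure for k-Vector-SUM over F_q, running in O(n^k · d) time, which is the trivial baseline against which the thesis's hardness results are measured. Function names and the list-of-lists vector encoding are illustrative choices, not part of the thesis.

```python
from itertools import combinations

def parity_answer(a, x):
    """Reply to one online-model question: <a, x> mod 2 for 0/1 vectors."""
    return sum(ai & xi for ai, xi in zip(a, x)) % 2

def k_vector_sum(vectors, t, k, q):
    """Brute-force k-Vector-SUM over F_q: does some k-subset of `vectors`
    sum (coordinate-wise, mod q) to the target t? Tries all (n choose k)
    subsets, so it runs in O(n^k * d) time for d-dimensional vectors."""
    d = len(t)
    for subset in combinations(vectors, k):
        s = [sum(v[i] for v in subset) % q for i in range(d)]
        if s == list(t):
            return True
    return False
```

For example, over F_2 with d = 2, the vectors (1,0) and (0,1) form a 2-subset summing to the target (1,1), so `k_vector_sum([[1,0],[0,1],[1,1]], [1,1], 2, 2)` returns True.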

dc.language.iso | en_US | en_US |

dc.relation.ispartofseries | G27336 | en_US |

dc.subject | Learning Parities | en_US |

dc.subject | k-LPN | en_US |

dc.subject | Intelligence in Algorithms | en_US |

dc.subject | Learning Theory | en_US |

dc.subject | k-Noisy-SUM | en_US |

dc.subject | Learning Parity | en_US |

dc.subject | Learning Sparse Parities | en_US |

dc.subject | Probably Approximately Correct (PAC) Model | en_US |

dc.subject | k-Vector-SUM | en_US |

dc.subject.classification | Computer Science | en_US |

dc.title | On Learning k-Parities and the Complexity of k-Vector-SUM | en_US |

dc.type | Thesis | en_US |

dc.degree.name | MSc Engg | en_US |

dc.degree.level | Masters | en_US |

dc.degree.discipline | Faculty of Engineering | en_US |