dc.contributor.advisor | Shevade, Shirish | |
dc.contributor.author | Bapat, Tanuja | |
dc.date.accessioned | 2013-06-20T09:54:39Z | |
dc.date.accessioned | 2018-07-31T04:38:22Z | |
dc.date.available | 2013-06-20T09:54:39Z | |
dc.date.available | 2018-07-31T04:38:22Z | |
dc.date.issued | 2013-06-20 | |
dc.date.submitted | 2011 | |
dc.identifier.uri | https://etd.iisc.ac.in/handle/2005/2065 | |
dc.identifier.abstract | http://etd.iisc.ac.in/static/etd/abstracts/2661/G24953-Abs.pdf | en_US |
dc.description.abstract | Many real-world problems, such as hand-written digit recognition or semantic scene classification, are treated as multiclass or multi-label classification problems. Solutions to these problems using support vector machines (SVMs) are well studied in the literature. In this work, we focus on building sparse max-margin classifiers for multiclass and multi-label classification. A sparse representation of the resulting classifier is important from the viewpoints of both efficient training and fast inference, especially when the training and test sets are large. Very few of the existing multiclass and multi-label classification algorithms give importance to directly controlling the sparsity of the designed classifiers; further, these algorithms were not found to be scalable. Motivated by this, we propose new formulations for sparse multiclass and multi-label classifier design and give efficient algorithms to solve them. The formulation for sparse multi-label classification also incorporates prior knowledge of label correlations. In both cases, the classification model is designed using a common set of basis vectors across all the classes. These basis vectors are greedily added to an initially empty model to approximate the target function. The sparsity of the classifier can be controlled by a user-defined parameter, d_max, which indicates the maximum number of common basis vectors. The computational complexity of these algorithms for multiclass and multi-label classifier design is O(l k^2 d_max^2), where l is the number of training set examples and k is the number of classes. The inference time for the proposed multiclass and multi-label classifiers is O(k d_max). Numerical experiments on various real-world benchmark datasets demonstrate that the proposed algorithms result in sparse classifiers that require fewer basis vectors than state-of-the-art algorithms to attain the same generalization performance. A very small value of d_max results in a significant reduction in inference time. Thus, the proposed algorithms provide useful alternatives to the existing algorithms for sparse multiclass and multi-label classifier design. | en_US |
dc.language.iso | en_US | en_US |
dc.relation.ispartofseries | G24953 | en_US |
dc.subject | Artificial Intelligence | en_US |
dc.subject | Machine Learning | en_US |
dc.subject | Multiclass Classification | en_US |
dc.subject | Multi-label Classification | en_US |
dc.subject | Sparse Max-Margin Multiclass Classifier Design | en_US |
dc.subject | Sparse Max-Margin Classifiers | en_US |
dc.subject | Sparse Max-Margin Multi-label Classifier Design | en_US |
dc.subject | Support Vector Machine (SVM) | en_US |
dc.subject | Sparse Classifiers | en_US |
dc.subject.classification | Computer Science | en_US |
dc.title | Sparse Multiclass And Multi-Label Classifier Design For Faster Inference | en_US |
dc.type | Thesis | en_US |
dc.degree.name | MSc Engg | en_US |
dc.degree.level | Masters | en_US |
dc.degree.discipline | Faculty of Engineering | en_US |