dc.contributor.advisor | Agarwal, Shivani | |
dc.contributor.advisor | Veni Madhavan, C E | |
dc.contributor.author | Saneem Ahmed, C G | |
dc.date.accessioned | 2018-02-17T22:26:00Z | |
dc.date.accessioned | 2018-07-31T04:38:54Z | |
dc.date.available | 2018-02-17T22:26:00Z | |
dc.date.available | 2018-07-31T04:38:54Z | |
dc.date.issued | 2018-02-18 | |
dc.date.submitted | 2014 | |
dc.identifier.uri | https://etd.iisc.ac.in/handle/2005/3138 | |
dc.identifier.abstract | http://etd.iisc.ac.in/static/etd/abstracts/3992/G27114-Abs.pdf | en_US |
dc.description.abstract | The problem of feature selection is critical in several areas of machine learning and data analysis, such as cancer classification from gene expression data and text categorization. In this work, we consider feature selection for supervised learning problems, where one wishes to select a small set of features that facilitates learning a good prediction model in the reduced feature space. Our interest is primarily in filter methods, which select features independently of the learning algorithm to be used and are generally faster than other types of feature selection algorithms. Many common filter methods make use of information-theoretic criteria, such as those based on mutual information, to guide their search. However, even in simple binary classification problems, mutual-information-based methods do not always select the best set of features in terms of the Bayes error.
In this thesis, we develop a general approach for selecting a set of features that directly aims to minimize the Bayes error in the reduced feature space with respect to the loss or performance measure of interest. We show that the mutual-information-based criterion is a special case of our setting when the loss function of interest is the logarithmic loss for class probability estimation. We give a greedy forward algorithm for approximately optimizing this criterion and demonstrate its application to several supervised learning problems, including binary classification (with 0-1 error, cost-sensitive error, and F-measure), binary class probability estimation (with logarithmic loss), bipartite ranking (with pairwise disagreement loss), and multiclass classification (with multiclass 0-1 error). Our experiments suggest that the proposed approach is competitive with several state-of-the-art methods. | en_US |
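As a minimal illustration of the special case noted in the abstract: under logarithmic loss, the Bayes-optimal risk for a feature subset S is the conditional entropy H(Y | X_S), so minimizing it is equivalent to maximizing the mutual information I(X_S; Y) = H(Y) - H(Y | X_S). The sketch below greedily adds the feature with the largest joint-MI gain, using plug-in (count-based) estimates for discrete features. It is an assumption-laden sketch of this special case only, not the thesis's general loss-specific algorithm, and the function names are illustrative.

```python
# Hypothetical sketch: greedy forward selection maximizing joint mutual
# information I(X_S; Y) -- the logarithmic-loss special case described in
# the abstract. Assumes discrete features and plug-in entropy estimates.
import numpy as np
from collections import Counter

def joint_mi(X, y, feats):
    """Plug-in estimate of I(X_S; Y) for a discrete feature subset S."""
    n = len(y)
    xs = [tuple(row) for row in X[:, feats]]  # joint value of selected features
    px, py, pxy = Counter(xs), Counter(y), Counter(zip(xs, y))
    # I(X_S; Y) = sum_{x,y} p(x,y) * log[ p(x,y) / (p(x) p(y)) ]
    return sum((c / n) * np.log(c * n / (px[xv] * py[yv]))
               for (xv, yv), c in pxy.items())

def greedy_forward(X, y, k):
    """Add, one at a time, the feature giving the largest joint-MI gain."""
    selected, remaining = [], set(range(X.shape[1]))
    for _ in range(k):
        best = max(remaining, key=lambda f: joint_mi(X, y, selected + [f]))
        selected.append(best)
        remaining.remove(best)
    return selected
```

For instance, `greedy_forward(X, y, k=10)` would return the indices of ten greedily chosen features. The thesis's approach generalizes this by replacing the MI objective with a Bayes-error criterion matched to the target loss (0-1, cost-sensitive, F-measure, pairwise disagreement, or multiclass error).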
dc.language.iso | en_US | en_US |
dc.relation.ispartofseries | G27114 | en_US |
dc.subject | Data Analysis | en_US |
dc.subject | Logarithms | en_US |
dc.subject | Supervised Learning | en_US |
dc.subject | Bayes Optimality | en_US |
dc.subject | Binary Classification | en_US |
dc.subject | Bipartite Ranking | en_US |
dc.subject | Multiclass Classification | en_US |
dc.subject | Bayes Optimal Feature Selection | en_US |
dc.subject | Optimal Feature Selection | en_US |
dc.subject | Bayes Error | en_US |
dc.subject | Binary Class Probability Estimation | en_US |
dc.subject | Supervised Learning Problems | en_US |
dc.subject.classification | Computer Science | en_US |
dc.title | Bayes Optimal Feature Selection for Supervised Learning | en_US |
dc.type | Thesis | en_US |
dc.degree.name | MSc Engg | en_US |
dc.degree.level | Masters | en_US |
dc.degree.discipline | Faculty of Engineering | en_US |