
dc.contributor.advisor: Vidhyasagar, M
dc.contributor.author: Deodhare, Dipti
dc.date.accessioned: 2025-10-15T11:24:47Z
dc.date.available: 2025-10-15T11:24:47Z
dc.date.submitted: 2001
dc.identifier.uri: https://etd.iisc.ac.in/handle/2005/7202
dc.description.abstract:

Some classification techniques require class labels to be specified. One such technique is Fisher's Linear Discriminant (FLD), which has been used to develop an algorithm that seeks bimodal projection directions. Although FLD is a supervised classification method, our approach to feature extraction is unsupervised. An artificial two-class problem is created by randomly assigning labels to the data points. FLD then attempts to align the classifier to separate these classes, thereby assisting in the detection of bimodal projection directions. This method achieves effective dimensionality reduction and naturally imposes a significant degree of weight sharing on the neural network performing the feature extraction.

The objectives of accurate encoding and generating discriminative representations for pattern classification are not always consistent. The feature vectors obtained using bimodal projection directions demonstrate strong discriminative power, but are not designed for encoding. To address this, the feature map is extended to define an embedding in the feature space, ensuring a smooth inverse map exists from the feature space back to the input space. As a result, the transformation becomes lossless. Although the feature vector has a higher dimension than the input vector, the number of weight vectors required remains the same as in the unextended feature map.

The NIST dataset for handwritten digits was used to evaluate the applicability and efficiency of the bimodal projection-based features for handling large, high-dimensional data. The proposed method constrains the number of free parameters (weights) in the network, improving the generalization performance of the resulting neural networks. This has been convincingly demonstrated by the algorithm's performance on the NIST dataset, which includes 344,307 training images and 58,646 test images, each of size 32 × 32. Despite limited computational resources, reasonable results were achieved.

The embedding-based extension of the feature set provides a unified approach to both data classification and data representation. Although it results in feature vectors with dimensions larger than the input space, experiments using popular benchmarks like the Wine and Vowel datasets show that simple classifiers can still achieve good performance. Thus, the increased complexity due to higher dimensionality is offset by the simplicity and efficiency of the classifiers used.
dc.language.iso: en_US
dc.relation.ispartofseries: T05131
dc.rights: I grant Indian Institute of Science the right to archive and to make available my thesis or dissertation in whole or in part in all forms of media, now or hereafter known. I retain all proprietary rights, such as patent rights. I also retain the right to use in future works (such as articles or books) all or part of this thesis or dissertation.
dc.subject: Fisher's Linear Discriminant
dc.subject: Bimodal Projection Directions
dc.subject: Dimensionality Reduction
dc.title: Bimodal Projections Based Features for High Dimensional Pattern Classification
dc.type: Thesis
dc.degree.name: PhD
dc.degree.level: Doctoral
dc.degree.grantor: Indian Institute of Science
dc.degree.discipline: Engineering
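
The abstract describes creating an artificial two-class problem with random labels and letting FLD seek bimodal projection directions. The thesis itself is not reproduced in this record, so the Python sketch below is only one plausible reading of that idea, assuming an iterative relabel-and-refit scheme: the function names, the median-split relabelling rule, and the convergence test are illustrative assumptions, not the algorithm from the thesis.

import numpy as np

def fld_direction(X, y, reg=1e-6):
    """Fisher's Linear Discriminant direction for a binary labelling y in {0, 1}."""
    X0, X1 = X[y == 0], X[y == 1]
    m0, m1 = X0.mean(axis=0), X1.mean(axis=0)
    # Within-class scatter matrix, regularised so the linear solve stays well-posed.
    Sw = (X0 - m0).T @ (X0 - m0) + (X1 - m1).T @ (X1 - m1) + reg * np.eye(X.shape[1])
    w = np.linalg.solve(Sw, m1 - m0)   # classical FLD solution: Sw^{-1} (m1 - m0)
    return w / np.linalg.norm(w)

def bimodal_projection(X, n_iter=20, seed=0):
    """Illustrative search for a bimodal projection direction (hypothetical scheme):
    start from random artificial class labels, fit FLD, relabel by splitting the
    1-D projection at its median, and repeat until the labels stop changing."""
    rng = np.random.default_rng(seed)
    y = rng.integers(0, 2, size=len(X))            # random artificial two-class problem
    w = fld_direction(X, y)
    for _ in range(n_iter):
        z = X @ w                                   # project the data onto the current direction
        y_new = (z > np.median(z)).astype(int)      # re-align the labels with the projection
        if np.array_equal(y_new, y):
            break
        y = y_new
        w = fld_direction(X, y)
    return w, X @ w

# Usage on synthetic data with two well-separated clusters:
if __name__ == "__main__":
    rng = np.random.default_rng(1)
    X = np.vstack([rng.normal(-2.0, 1.0, (200, 10)),
                   rng.normal(+2.0, 1.0, (200, 10))])
    w, z = bimodal_projection(X)
    print("projection direction:", np.round(w, 3))

On such data the relabel-and-refit loop tends to settle on a direction along which the projected values split into two clusters, which is the sense in which a direction found this way can be called bimodal.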

