Learning subspace methods using weighted and multi-subspace representations
Abstract
The learning subspace methods (LSMs) of classification are decision-theoretic pattern recognition methods in which the primary model for a class is a linear subspace of the Euclidean pattern space. Classification is based on the orthogonal projections of a pattern onto these subspaces. The classification of a pattern is independent of its magnitude, a property that is desirable in certain applications. The decision surfaces are quadratic. The LSMs have the potential to extract the required features automatically. They are extremely fast at classification time, and their hardware realization is straightforward. Their limitations include the restriction to quadratic decision surfaces and poor design scalability.
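As a rough sketch of the projection-based decision rule described above, the following fragment classifies a pattern by the largest squared orthogonal projection onto each class subspace; the function name, the NumPy implementation, and the data layout are illustrative assumptions rather than the thesis's implementation.

```python
import numpy as np

def subspace_classify(x, bases):
    """Assign x to the class whose subspace captures the largest squared
    orthogonal projection. `bases` is a list of (d, r_j) matrices with
    orthonormal columns, one matrix per class (hypothetical layout)."""
    scores = [np.sum((U.T @ x) ** 2) for U in bases]
    return int(np.argmax(scores))

# Scaling x by any nonzero c scales every score by c**2, so the winning
# class is unchanged: the decision is independent of the pattern's magnitude.
```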
In this thesis, we propose new LSMs to overcome these limitations. The proposed methods use weighted and multi-subspace representations. The weighted representation associates different weights with different basis vectors in the computation of the orthogonal projection distances. The multi-subspace representation uses more than one subspace to represent each class; this yields a piecewise approximation and helps to overcome the restriction to quadratic decision surfaces. Combining the weighted representation with Hebbian learning appropriately improves scalability. Scalability is further improved by the ability to obtain the required number of subspaces for each class and the ability to store partial computations.
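The following is a minimal sketch of how the weighted and multi-subspace representations could combine in a decision rule: each class is modelled by several subspaces, each basis vector carries its own weight in the projection score, and the class score is taken from its best-matching subspace. The function name, data layout, and use of NumPy are assumptions for illustration, not the thesis's notation.

```python
import numpy as np

def weighted_multi_subspace_classify(x, class_models):
    """Classify x with weighted, multi-subspace class models.
    `class_models[j]` is a list of (U, w) pairs for class j, where U has
    orthonormal columns and w holds one weight per basis vector
    (hypothetical structure)."""
    scores = []
    for subspaces in class_models:
        # Weighted squared projection: sum_i w_i * (u_i^T x)^2, using the
        # best-matching subspace of the class (piecewise approximation).
        scores.append(max(float(np.sum(w * (U.T @ x) ** 2))
                          for U, w in subspaces))
    return int(np.argmax(scores))
```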
Based on experimental results, we conclude that the learning subspace methods are good general-purpose classifiers for problems where classification is independent of magnitude. Their design complexity is low, their classification speed is high, and their generalization is comparable to that of other classifiers.

