Summary: This paper describes a classification strategy that can be regarded as a more general form of nearest-neighbor classification. It fuses the concepts of nearest neighbor, linear discriminant, and Vantage-Point trees, yielding an efficient indexing data structure and classification algorithm. In the learning phase, we define a set of disjoint subspaces of reduced complexity that can be separated by linear discriminants, ending up with an ensemble of simple (weak) classifiers that work locally. In classification, the centroids closest to the query determine the set of classifiers considered, whose responses are weighted. The algorithm was experimentally validated on datasets widely used in the field, attaining error rates that compare favorably to state-of-the-art classification techniques. Lastly, the proposed solution has a set of properties that make it attractive for a broad range of applications: 1) it is deterministic; 2) it classifies in time approximately logarithmic in the size of the learning set, making it far more efficient than nearest-neighbor classification in terms of computational cost; and 3) it keeps the generalization ability of simple models.
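The scheme summarized above can be sketched roughly as follows. This is a minimal illustrative sketch, not the paper's exact procedure: the random centroid selection, least-squares linear discriminants, and inverse-distance weighting are assumptions made here for brevity (the paper organizes the subspaces with a Vantage-Point tree, which this sketch omits).

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-class data: two well-separated Gaussian blobs.
X = np.vstack([rng.normal(-2, 1, (100, 2)), rng.normal(2, 1, (100, 2))])
y = np.array([0] * 100 + [1] * 100)

# Learning phase (sketch): partition the space into disjoint cells around a
# few centroids (here sampled from the data), then fit one simple linear
# discriminant per cell by least squares on the points assigned to it.
k = 4
centroids = X[rng.choice(len(X), k, replace=False)]
assign = np.argmin(((X[:, None] - centroids) ** 2).sum(-1), axis=1)

weights = []
for c in range(k):
    Xc, yc = X[assign == c], y[assign == c]
    A = np.hstack([Xc, np.ones((len(Xc), 1))])          # append bias column
    w, *_ = np.linalg.lstsq(A, 2.0 * yc - 1.0, rcond=None)  # targets in {-1, +1}
    weights.append(w)

def classify(q, m=2):
    """Classification phase (sketch): let the m centroids closest to the
    query select the local classifiers, and weight their linear responses
    by inverse squared distance to the query."""
    d = ((centroids - q) ** 2).sum(-1)
    near = np.argsort(d)[:m]
    score = sum(weights[c] @ np.append(q, 1.0) / (d[c] + 1e-9) for c in near)
    return int(score > 0)
```

In a full implementation, the linear scan over centroids in `classify` would be replaced by a Vantage-Point tree lookup, which is what gives the approximately logarithmic classification time claimed in the summary.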