Siddharth Pal
Neural Computation (2007) 19 (3): 856–884.
Published: 01 March 2007
Abstract
We consider ensemble classification for the case where there is no common labeled training data for jointly designing the individual classifiers and the function that aggregates their decisions. This problem, which we call distributed ensemble classification, applies when individual classifiers operate (perhaps remotely) on different sensing modalities and when combining proprietary or legacy classifiers. The conventional wisdom in this case is to apply fixed rules of combination such as voting methods or rules for aggregating probabilities. Alternatively, we take a transductive approach, optimizing the combining rule for an objective function measured on the unlabeled batch of test data. We propose maximum likelihood (ML) objectives that are shown to yield well-known forms of probabilistic aggregation, albeit with iterative, expectation-maximization-based adjustment to account for mismatch between class priors used by individual classifiers and those reflected in the new data batch. These methods are extensions, for the ensemble case, of the work of Saerens, Latinne, and Decaestecker (2002). We also propose an information-theoretic method that generally outperforms the ML methods, better handles classifier redundancies, and addresses some scenarios where the ML methods are not applicable. This method also handles well the case of missing classes in the test batch. On UC Irvine benchmark data, all our methods give improvements in classification accuracy over the use of fixed rules when there is prior mismatch.
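The EM-based prior correction that the abstract builds on (Saerens, Latinne, & Decaestecker, 2002) can be sketched roughly as follows for a single classifier on an unlabeled test batch; the paper's contribution is the extension of this idea to aggregating an ensemble, which is not reproduced here. The function name, array shapes, and convergence settings below are illustrative assumptions, not the authors' code.

```python
import numpy as np

def em_prior_adjustment(posteriors, train_priors, n_iter=100, tol=1e-6):
    """Sketch of EM re-estimation of class priors on an unlabeled test batch
    (in the spirit of Saerens, Latinne, & Decaestecker, 2002).

    posteriors   : (N, C) array of p_train(c | x_i) output by a trained classifier
    train_priors : (C,) class priors assumed at training time
    Returns the estimated test-batch priors and the adjusted posteriors.
    """
    test_priors = np.asarray(train_priors, dtype=float).copy()
    adjusted = np.asarray(posteriors, dtype=float)
    for _ in range(n_iter):
        # E-step: rescale each posterior by the prior ratio and renormalize.
        ratio = test_priors / train_priors            # (C,)
        adjusted = posteriors * ratio                 # (N, C)
        adjusted /= adjusted.sum(axis=1, keepdims=True)
        # M-step: the new prior estimate is the average adjusted posterior.
        new_priors = adjusted.mean(axis=0)
        if np.abs(new_priors - test_priors).max() < tol:
            test_priors = new_priors
            break
        test_priors = new_priors
    return test_priors, adjusted
```

In the distributed ensemble setting described in the abstract, each individual classifier would supply its own posteriors and training priors, and the transductive combining rule would be fit jointly with such prior re-estimation on the test batch; the exact aggregation forms (and the information-theoretic alternative) are given in the article itself.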