Carlos S. Felgueiras
Neural Computation (2006) 18 (9): 2036–2061.
Published: 01 September 2006
Abstract
Entropy-based cost functions are enjoying growing popularity in unsupervised and supervised classification tasks, with better performance reported in terms of both error rate and speed of convergence. In this letter, we study the principle of error entropy minimization (EEM) from a theoretical point of view. Using Shannon's entropy, we study univariate data splitting in two-class problems. In this setting, the error variable is a discrete random variable, which keeps the mathematical analysis of the error entropy tractable. We first show that for uniformly distributed data, the EEM split is equivalent to the optimal classifier. In a more general setting, we prove necessary conditions for this equivalence and show the existence of class configurations where the optimal classifier corresponds to maximum error entropy. The theoretical results provide practical guidelines, which are illustrated with a set of experiments on both real and simulated data sets, where the effectiveness of EEM is compared with the usual mean square error minimization.
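The setting described in the abstract can be sketched numerically. In the code below (an illustrative sketch, not the paper's implementation), two overlapping uniform classes are split by a threshold; the error E = y − ŷ is then a discrete variable on {−1, 0, 1}, so its Shannon entropy is a simple finite sum. The EEM split minimizes that entropy over candidate thresholds, and is compared with the threshold minimizing the plain misclassification rate. The class supports, sample sizes, and grid resolution are all assumptions chosen for illustration.

```python
import numpy as np

# Hypothetical simulated data: two overlapping uniform classes (an assumption
# for illustration; the paper's experiments use its own data sets).
rng = np.random.default_rng(0)
x = np.concatenate([rng.uniform(0.0, 1.0, 500),    # class 0
                    rng.uniform(0.6, 1.6, 500)])   # class 1
y = np.concatenate([np.zeros(500), np.ones(500)])

def error_entropy(threshold):
    """Shannon entropy of the discrete error E = y - y_hat, E in {-1, 0, 1}."""
    y_hat = (x > threshold).astype(float)
    e = y - y_hat
    probs = np.array([(e == v).mean() for v in (-1.0, 0.0, 1.0)])
    probs = probs[probs > 0]                # 0 * log 0 = 0 convention
    return -(probs * np.log(probs)).sum()

thresholds = np.linspace(x.min(), x.max(), 400)
entropies = [error_entropy(t) for t in thresholds]
error_rates = [np.mean(y != (x > t)) for t in thresholds]

eem_split = thresholds[np.argmin(entropies)]       # EEM split
min_err_split = thresholds[np.argmin(error_rates)] # minimum-error split
```

With these uniform classes, any threshold inside the overlap [0.6, 1.0] attains the minimum error rate, and the EEM split also lands in that region, consistent with the equivalence the letter proves for uniformly distributed data.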