Search results for author Yong Liu (1-2 of 2)
Journal Articles
Neural Computation (2004) 16 (2): 383–399.
Published: 01 February 2004
Abstract
The one-bit-matching conjecture for independent component analysis (ICA) can be understood from different perspectives but is basically stated as "all the sources can be separated as long as there is a one-to-one same-sign correspondence between the kurtosis signs of all source probability density functions (pdf's) and the kurtosis signs of all model pdf's" (Xu, Cheung, & Amari, 1998a). This conjecture has been widely believed in the ICA community and is implicitly supported by many ICA studies, such as the Extended Infomax (Lee, Girolami, & Sejnowski, 1999) and the soft-switching algorithm (Welling & Weber, 2001). However, no mathematical proof has confirmed the conjecture theoretically. In this article, only skewness and kurtosis are considered, and such a proof is given under the assumption that the skewness of the model densities vanishes. Moreover, empirical experiments demonstrate the robustness of the conjecture as the vanishing-skewness assumption breaks down. As a by-product, we also show that the kurtosis maximization criterion (Moreau & Macchi, 1996) is actually a special case of the minimum-mutual-information criterion for ICA.
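To make the "one bit" concrete, the following minimal Python sketch (our illustration, not code from the paper) estimates the sign of each source's excess kurtosis, which is the single bit that, per the conjecture, must agree between source and model densities; the helpers `excess_kurtosis` and `match_model_signs` and the Laplacian/uniform demo sources are our assumptions.

```python
import numpy as np

def excess_kurtosis(x):
    """Sample excess kurtosis: E[(x - mu)^4] / sigma^4 - 3."""
    x = x - x.mean()
    return np.mean(x**4) / (np.mean(x**2) ** 2 + 1e-12) - 3.0

def match_model_signs(sources):
    """For each source, report which kurtosis sign a model pdf must have
    to satisfy the one-bit-matching condition: a super-Gaussian model
    (positive kurtosis, e.g. Laplacian) for positive-kurtosis sources,
    a sub-Gaussian model (negative kurtosis) for negative-kurtosis ones."""
    return ["super" if excess_kurtosis(s) > 0 else "sub" for s in sources]

# Hypothetical demo: one super-Gaussian and one sub-Gaussian source.
rng = np.random.default_rng(0)
laplace = rng.laplace(size=10_000)     # excess kurtosis about +3
uniform = rng.uniform(-1, 1, 10_000)   # excess kurtosis about -1.2
print(match_model_signs([laplace, uniform]))  # ['super', 'sub']
```

This sign-switching step is essentially what algorithms in the Extended Infomax family do online: the recovered component's kurtosis estimate selects between a sub- and a super-Gaussian model density.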
Journal Articles
Neural Computation (1994) 6 (6): 1276–1288.
Published: 01 November 1994
Abstract
Based on the influence function (Hampel et al., 1986), we analyze several forms of learning rules under the influence of abnormal inputs (outliers). Principal component analysis (PCA) learning rules (Oja, 1982, 1989, 1992) are shown to be sensitive to the influence of outliers. Biologically motivated robust PCA learning rules are proposed. The variants of BCM learning (Bienenstock et al., 1982; Intrator, 1990b) with linear neurons are shown to be subject to the influence of outliers; in contrast, with sigmoidal neurons they are shown to be robust to outliers.
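As a rough illustration of why a bounded output nonlinearity limits an outlier's influence, here is a minimal Python sketch of Oja's single-neuron PCA rule with an optional sigmoidal output; the specific tanh form and parameters are our assumptions, not the robust rules proposed in the paper.

```python
import numpy as np

def oja_pca(X, eta=0.01, epochs=10, robust=False, beta=1.0, seed=0):
    """Oja's single-neuron PCA rule: w <- w + eta * y * (x - y * w),
    with y = w.x for the linear neuron. With robust=True, the output
    y = tanh(beta * w.x) is bounded, so a single extreme input x can
    only pull the weight vector by a limited amount (bounded influence).
    """
    rng = np.random.default_rng(seed)
    w = rng.standard_normal(X.shape[1])
    w /= np.linalg.norm(w)
    for _ in range(epochs):
        for x in X:
            y = np.tanh(beta * (w @ x)) if robust else w @ x
            w += eta * y * (x - y * w)
    return w / np.linalg.norm(w)

# Usage: data with a few gross outliers perturbs the linear rule
# far more than the sigmoidal variant.
rng = np.random.default_rng(1)
X = rng.multivariate_normal([0, 0], [[3, 1], [1, 1]], size=2_000)
X[:5] += 100.0  # inject outliers
print(oja_pca(X, robust=False))
print(oja_pca(X, robust=True))
```

With the linear neuron, y grows without bound on an outlier, so the update term y * (x - y * w) is quadratic in the outlier's magnitude; the saturating output caps that term, which is the intuition behind the paper's influence-function analysis.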