Simone Fiori
Neural Computation (2017) 29 (6): 1631–1666.
Published: 01 June 2017
Abstract
The estimation of covariance matrices is of prime importance for analyzing the distribution of multivariate signals. In motor imagery-based brain-computer interfaces (MI-BCI), covariance matrices play a central role in the extraction of features from recorded electroencephalograms (EEGs); therefore, correctly estimating covariance is crucial for EEG classification. This letter discusses algorithms to average sample covariance matrices (SCMs) for the selection of the reference matrix in tangent space mapping (TSM)-based MI-BCI. Tangent space mapping is a powerful method of feature extraction that depends strongly on the selection of a reference covariance matrix. In general, the observed signals may include outliers; therefore, taking the geometric mean of the SCMs as the reference matrix may not be the best choice. To deal with the effects of outliers, robust estimators have to be used. In particular, we discuss and test the use of geometric medians and trimmed averages (defined on the basis of several metrics) as robust estimators. The main idea behind trimmed averages is to eliminate data that exhibit the largest distance from the average covariance calculated on the basis of all available data. The experiments show that while the geometric medians differ little from conventional methods in classification accuracy on electroencephalographic recordings, the trimmed averages yield a significant improvement for all subjects.
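The averaging schemes described in the abstract admit a compact numerical sketch. Below is a minimal, illustrative implementation assuming NumPy/SciPy; the function names, the fixed-point iteration for the mean, and the 20% trimming ratio are choices made here for illustration, not the letter's exact procedure. It computes the geometric (Karcher) mean of SPD matrices under the affine-invariant metric, and a trimmed average that discards the SCMs farthest from the all-data mean, following the idea stated above.

```python
import numpy as np
from scipy.linalg import expm, logm, fractional_matrix_power

def airm_distance(A, B):
    # Affine-invariant Riemannian distance between SPD matrices A and B.
    A_isqrt = fractional_matrix_power(A, -0.5)
    return np.linalg.norm(logm(A_isqrt @ B @ A_isqrt), 'fro')

def geometric_mean(covs, n_iter=20):
    # Fixed-point (Karcher-mean) iteration under the affine-invariant metric.
    G = np.mean(covs, axis=0)  # arithmetic mean as the starting point
    for _ in range(n_iter):
        G_sqrt = fractional_matrix_power(G, 0.5)
        G_isqrt = fractional_matrix_power(G, -0.5)
        # Average the SCMs in the tangent space at G, then map back.
        T = np.mean([logm(G_isqrt @ C @ G_isqrt) for C in covs], axis=0)
        G = G_sqrt @ expm(T) @ G_sqrt
    return G

def trimmed_mean(covs, trim=0.2):
    # Discard the SCMs farthest from the all-data mean, then re-average.
    G = geometric_mean(covs)
    d = np.array([airm_distance(C, G) for C in covs])
    keep = np.argsort(d)[: int(np.ceil((1 - trim) * len(covs)))]
    return geometric_mean([covs[i] for i in keep])
```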
Neural Computation (2008) 20 (4): 1091–1117.
Published: 01 April 2008
Abstract
Learning on differential manifolds may involve the optimization of a function of many parameters. In this letter, we deal with Riemannian-gradient-based optimization on a Lie group, namely, the group of unitary unimodular matrices SU(3). In this special case, subalgebras of the associated Lie algebra su(3) may be identified by computing pairwise commutators of the Gell-Mann matrices. Subalgebras generate subgroups of the Lie group, as well as foliations of the manifold. We show that the Riemannian gradient may be projected onto the tangent structures of these foliations, giving rise to foliation gradients. Exponentials of foliation gradients may be computed in closed form, closely resembling the Rodrigues form for the special orthogonal group SO(3). We thus compare optimization by the Riemannian gradient with optimization by foliation gradients.
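As a rough sketch of the generic (unfoliated) step, and assuming the cost's Euclidean gradient is available, a Riemannian-gradient iteration on SU(n) can be written as follows; the function names are hypothetical, and the letter's foliation gradients would further restrict the update direction to a chosen subalgebra of su(3), where the exponential takes the closed, Rodrigues-like form mentioned above.

```python
import numpy as np
from scipy.linalg import expm

def project_su_n(M):
    # Project M onto the Lie algebra su(n): its traceless, skew-Hermitian part.
    S = 0.5 * (M - M.conj().T)
    n = M.shape[0]
    return S - (np.trace(S) / n) * np.eye(n)

def gradient_step(U, egrad, lr=0.01):
    # One Riemannian gradient-descent step on SU(n): translate the Euclidean
    # gradient to the identity, project it onto su(n), and flow along the
    # one-parameter subgroup that the projected direction generates.
    Omega = project_su_n(egrad @ U.conj().T)
    return expm(-lr * Omega) @ U
```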
Neural Computation (2005) 17 (4): 779–838.
Published: 01 April 2005
Abstract
The Hebbian paradigm is perhaps the best-known unsupervised learning theory in connectionism. It has inspired wide research activity in the artificial neural network field because it embodies some interesting properties, such as locality and applicability to the basic weighted-sum structure of neuron models. The plain Hebbian principle, however, also presents some inherent theoretical limitations that make it impractical in most cases. Therefore, modifications of the basic Hebbian learning paradigm have been proposed over the past 20 years in order to design profitable signal and data processing algorithms. Such modifications led to the class of principal component analysis (PCA) learning rules, along with their nonlinear extensions. The aim of this review is primarily to present part of the existing, fragmented material in the field of principal component learning within a unified view and, in that context, to motivate and present extensions of previous work on Hebbian learning to complex-weighted linear neural networks. This work benefits from previous studies on linear signal decomposition by artificial neural networks, nonquadratic component optimization and reconstruction-error definition, neural parameter adaptation by constrained optimization of learning criteria with complex-valued arguments, and the expression of orthonormality via the insertion of topological elements in the networks or via modification of the network learning criterion. In particular, the learning principles considered here, and their analysis, concern complex-valued principal/minor component/subspace linear/nonlinear rules for complex-weighted neural structures, both feedforward and laterally connected.
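As a concrete instance of the PCA-type rules the review unifies, here is a minimal sketch of a complex-weighted Oja-type learning rule; the function name and step size are illustrative, and zero-mean data are assumed.

```python
import numpy as np

def oja_complex(X, lr=0.01, n_epochs=50, rng=None):
    # Extract the principal component of complex, zero-mean data
    # (samples in the rows of X) with an Oja-type rule.
    rng = np.random.default_rng() if rng is None else rng
    n = X.shape[1]
    w = rng.standard_normal(n) + 1j * rng.standard_normal(n)
    w /= np.linalg.norm(w)
    for _ in range(n_epochs):
        for x in X:
            y = np.vdot(w, x)  # neuron output y = w^H x
            # Hebbian term plus a self-normalizing decay.
            w += lr * (np.conj(y) * x - np.abs(y) ** 2 * w)
    return w
```

Under the usual step-size conditions, the weight vector converges toward the principal eigenvector of the covariance E[x x^H].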
Neural Computation (2003) 15 (12): 2909–2929.
Published: 01 December 2003
Abstract
In recent work, we introduced nonlinear adaptive activation function (FAN) artificial neuron models, which learn their activation functions in an unsupervised way by means of information-theoretic adaptation rules. We also applied networks of these neurons to blind signal processing problems such as independent component analysis and blind deconvolution. The aim of this letter is to study some fundamental aspects of FAN units' learning by investigating the properties of the associated systems of learning differential equations.
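The FAN parameterization studied in the letter is richer than a single gain, but the flavor of an information-theoretic adaptation rule can be sketched with one parameter. For an invertible activation y = g(u), maximizing the output entropy amounts to ascending E[log |g'(u)|]; with the illustrative choice g(u) = tanh(a*u), this gradient takes the closed form 1/a - 2*y*u. The function name below is hypothetical.

```python
import numpy as np

def adapt_gain(u, a=1.0, lr=0.01, n_epochs=100):
    # Adapt the gain 'a' of y = tanh(a*u) by stochastic ascent on the
    # output entropy: for invertible g, H(y) = H(u) + E[log |g'(u)|],
    # and for g(u) = tanh(a*u) one gets d/da log g'(u) = 1/a - 2*y*u.
    for _ in range(n_epochs):
        for ui in u:
            y = np.tanh(a * ui)
            a += lr * (1.0 / a - 2.0 * y * ui)
    return a
```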
Neural Computation (2002) 14 (12): 2847–2855.
Published: 01 December 2002
Abstract
This article investigates the behavior of a single-input, single-unit neuron model of the Bell-Sejnowski class, which learns through the maximum-entropy principle, in order to understand its probability density function matching ability.
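The learning rule for this neuron class is the classical infomax rule of Bell and Sejnowski (1995): for a logistic unit y = g(wx + b), ascending the output entropy yields the updates Δw ∝ 1/w + x(1 - 2y) and Δb ∝ 1 - 2y. A minimal sketch, with an illustrative function name and learning rate:

```python
import numpy as np

def infomax_unit(x, lr=0.001, n_epochs=50):
    # Single-input infomax neuron y = g(w*x + b) with logistic g.
    # Maximizing the output entropy drives g(w*x + b) toward the
    # cumulative distribution of x (the pdf-matching property).
    w, b = 1.0, 0.0
    for _ in range(n_epochs):
        for xi in x:
            y = 1.0 / (1.0 + np.exp(-(w * xi + b)))
            w += lr * (1.0 / w + xi * (1.0 - 2.0 * y))
            b += lr * (1.0 - 2.0 * y)
    return w, b
```

At the entropy maximum, g(wx + b) approximates the cumulative distribution of the input, which is precisely the pdf-matching ability the article analyzes.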
Neural Computation (2001) 13 (7): 1625–1647.
Published: 01 July 2001
Abstract
Recently, we introduced the concept of neural network learning on the Stiefel-Grassmann manifold for multilayer perceptron-like networks. Contributions by other authors on this topic have also appeared in the scientific literature. This article presents a general theory of such learning and illustrates how existing theories may be explained within the general framework proposed here.
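One building block of such learning theories is a Riemannian gradient step followed by a retraction back onto the manifold. Below is a minimal sketch for the Stiefel manifold of orthonormal n-by-p matrices, assuming the embedded metric and a QR-based retraction; these are one choice among those a general framework covers, and the function name is hypothetical.

```python
import numpy as np

def stiefel_step(W, G, lr=0.01):
    # One gradient-descent step on the Stiefel manifold {W : W^T W = I}:
    # project the Euclidean gradient G onto the tangent space at W,
    # step in that direction, and retract via a QR decomposition.
    sym = 0.5 * (W.T @ G + G.T @ W)
    tangent = G - W @ sym
    Q, R = np.linalg.qr(W - lr * tangent)
    # Fix the column-sign ambiguity of QR so the retraction is continuous.
    return Q * np.sign(np.diag(R))
```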