Gustavo Deco
Journal Articles
Publisher: Journals Gateway
Neural Computation (2000) 12 (11): 2621–2653.
Published: 01 November 2000
Abstract
Recent advances in the technology of multiunit recordings make it possible to test Hebb's hypothesis that neurons do not function in isolation but are organized in assemblies. This has created the need for statistical approaches to detecting the presence of spatiotemporal patterns of more than two neurons in neuron spike train data. We mention three possible measures for the presence of higher-order patterns of neural activation (coefficients of log-linear models, connected cumulants, and redundancies) and present arguments in favor of the coefficients of log-linear models. We present test statistics for detecting the presence of higher-order interactions in spike train data by parameterizing these interactions in terms of coefficients of log-linear models. We also present a Bayesian approach for inferring the existence or absence of interactions and estimating their strength. The frequentist and Bayesian methods are shown to be consistent in the sense that interactions detected by either method also tend to be detected by the other. A heuristic for the analysis of temporal patterns is also proposed. Finally, a Bayesian test is presented that establishes stochastic differences between recorded segments of data. The methods are applied to experimental data and synthetic data drawn from our statistical models. Our experimental data come from multiunit recordings in the prefrontal cortex of behaving monkeys, the somatosensory cortex of anesthetized rats, and the visual cortex of behaving monkeys.
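The idea of a log-linear interaction coefficient for three binary units can be sketched directly: for the saturated log-linear model, the third-order coefficient reduces to a log cross-ratio of the eight pattern probabilities, and it is near zero for independent neurons. The snippet below is a minimal illustration, not the paper's test statistics; the bin count, firing probability, and pseudocount are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Three independent Bernoulli spike trains over 100,000 time bins,
# firing probability 0.2 per bin (illustrative values only).
spikes = (rng.random((3, 100_000)) < 0.2).astype(int)

def third_order_coefficient(x, eps=0.5):
    """Estimate theta_123, the third-order coefficient of the saturated
    log-linear model for three binary units,

        log p(x1, x2, x3) = theta_0 + sum_i theta_i x_i
                            + sum_{i<j} theta_ij x_i x_j + theta_123 x1 x2 x3,

    which solves to

        theta_123 = log [ p111 p100 p010 p001 / (p110 p101 p011 p000) ].

    A small pseudocount eps regularizes empty pattern cells; total counts
    cancel in the ratio, so raw counts can be used in place of probabilities.
    """
    idx = 4 * x[0] + 2 * x[1] + x[2]          # encode each bin's pattern as 0..7
    p = np.bincount(idx, minlength=8).reshape(2, 2, 2).astype(float) + eps
    num = p[1, 1, 1] * p[1, 0, 0] * p[0, 1, 0] * p[0, 0, 1]
    den = p[1, 1, 0] * p[1, 0, 1] * p[0, 1, 1] * p[0, 0, 0]
    return float(np.log(num / den))

theta = third_order_coefficient(spikes)  # close to 0 for independent neurons
```

An excess of simultaneous triple spikes beyond what the pairwise terms predict would push this coefficient above zero, which is what a test for a genuine third-order assembly would look for.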
Neural Computation (1999) 11 (4): 919–934.
Published: 15 May 1999
Abstract
We introduce a learning paradigm for networks of integrate-and-fire spiking neurons that is based on an information-theoretic criterion. This criterion can be viewed as a first principle that accounts for the experimentally observed fact that cortical neurons fire synchronously for some stimuli and not for others. The principle postulates a nonparametric reconstruction method as the optimization criterion for learning the required functional connectivity, thereby justifying and explaining synchronous firing as a mechanism for feature binding and spatiotemporal coding. In information-theoretic terms, this amounts to maximizing the ability to discriminate between different sensory inputs in minimal time.
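The model class the paradigm is built on can be sketched with a textbook leaky integrate-and-fire neuron: the membrane potential relaxes toward the input current and emits a spike on crossing threshold. This is a generic illustration, not the paper's network or learning rule; the time constant, threshold, and currents are invented for the example.

```python
import numpy as np

def lif_spike_times(input_current, dt=0.001, tau=0.02, v_thresh=1.0, v_reset=0.0):
    """Leaky integrate-and-fire neuron, Euler-integrated:
    tau * dv/dt = -v + I(t); spike and reset when v crosses v_thresh.
    Parameter values are illustrative, not taken from the paper."""
    v = 0.0
    spike_times = []
    for step, i_t in enumerate(input_current):
        v += (dt / tau) * (-v + i_t)   # one Euler step of the membrane equation
        if v >= v_thresh:
            spike_times.append(step * dt)
            v = v_reset                # reset after the spike
    return spike_times

# A suprathreshold constant current (I > v_thresh) drives periodic firing;
# a subthreshold current saturates below threshold and produces no spikes.
strong = lif_spike_times(np.full(1000, 2.0))
weak = lif_spike_times(np.full(1000, 0.8))
```

Stimulus-dependent synchrony in such a network would then show up as coincident spike times across units for some input currents but not others.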
Neural Computation (1996) 8 (2): 260–269.
Published: 15 February 1996
Abstract
According to Barlow (1989), feature extraction can be understood as finding a statistically independent representation of the probability distribution underlying the measured signals. The search for a statistically independent representation can be formulated as the criterion of minimal mutual information, which reduces to decorrelation in the case of gaussian distributions. If nongaussian distributions are to be considered, minimal mutual information is the appropriate generalization of decorrelation as used in linear principal component analysis (PCA). We also generalize to nonlinear transformations by demanding only perfect transmission of information. This leads to a general class of nonlinear transformations, namely symplectic maps. Conservation of information allows us to consider only the statistics of single coordinates. The resulting factorial representation of the joint probability distribution yields a density estimate. We apply this concept to the real-world problem of electrical motor fault detection, treated as a novelty detection task.
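The gaussian special case is easy to verify numerically: for a bivariate gaussian with correlation rho, the mutual information is I(X; Y) = -1/2 log(1 - rho^2), so a linear decorrelating transform (here a PCA rotation) drives it to zero. A minimal sketch, with rho and the sample size chosen arbitrarily for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

# Correlated bivariate gaussian; its mutual information depends only on rho:
# I(X; Y) = -0.5 * log(1 - rho^2).
rho = 0.8
cov = np.array([[1.0, rho], [rho, 1.0]])
x = rng.multivariate_normal([0.0, 0.0], cov, size=50_000)

def gaussian_mi(data):
    """Mutual information of a bivariate gaussian from its sample correlation."""
    r = np.corrcoef(data.T)[0, 1]
    return -0.5 * np.log(1.0 - r ** 2)

mi_before = gaussian_mi(x)

# Linear decorrelation (PCA): rotate onto eigenvectors of the sample
# covariance, which diagonalizes it and removes all linear correlation.
_, eigvecs = np.linalg.eigh(np.cov(x.T))
y = x @ eigvecs
mi_after = gaussian_mi(y)
```

For nongaussian data the projected coordinates could remain statistically dependent despite being decorrelated, which is why the abstract argues that minimal mutual information, not decorrelation, is the right general criterion.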
Neural Computation (1993) 5 (1): 105–114.
Published: 01 January 1993
Abstract
In recent years localized receptive fields have been the subject of intensive research, owing to their learning speed and efficient reconstruction of hypersurfaces. A very efficient implementation of such a network was proposed by Platt (1991). This resource-allocating network (RAN) allocates a new neuron whenever an unknown pattern is presented at its input layer. In this paper we introduce a new network architecture and learning paradigm. The aim of our approach is to incorporate "coarse coding" into the resource-allocating network. The network presented here provides a separate layer for each input coordinate, consisting of one-dimensional, locally tuned gaussian neurons. In the following layer, multidimensional receptive fields are built using pi-neurons. Linear neurons then aggregate the outputs of the pi-neurons to approximate the required input-output mapping. The learning process follows the ideas of Platt's resource-allocating network, but the extended architecture requires additional refinements of the learning procedure. Compared to the resource-allocating network, a more compact network with comparable accuracy is obtained.
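The forward pass of such an architecture can be sketched in a few lines: each pi-neuron multiplies one-dimensional gaussian tuning curves (one per input coordinate) into a multidimensional receptive field, and a linear neuron weights the results. The centers, widths, and weights below are invented for the example; the paper's allocation and learning rules are not reproduced here.

```python
import numpy as np

def pi_neuron(x, centers, widths):
    """Multidimensional receptive field formed by a pi-neuron: the product of
    one-dimensional gaussian tuning curves, one per input coordinate."""
    return float(np.prod(np.exp(-(x - centers) ** 2 / (2.0 * widths ** 2))))

def network_output(x, centers, widths, weights):
    """Linear neuron aggregating the pi-neuron activations."""
    acts = np.array([pi_neuron(x, c, w) for c, w in zip(centers, widths)])
    return float(weights @ acts)

# Two hypothetical units in a 2-D input space (all parameters invented).
centers = np.array([[0.0, 0.0], [1.0, 1.0]])
widths = np.array([[0.5, 0.5], [0.5, 0.5]])
weights = np.array([1.0, -1.0])

# At the first unit's center, that unit responds maximally (activation 1.0)
# while the distant unit's contribution is nearly zero.
out = network_output(np.array([0.0, 0.0]), centers, widths, weights)
```

Because each coordinate has its own layer of one-dimensional gaussians, the same tuning curves can be shared across many pi-neurons, which is the coarse-coding economy that makes the resulting network more compact than a plain RAN.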