Gordon Pipa
1-9 of 9 journal articles
Neural Computation (2018) 30 (4): 945–986.
Published: 01 April 2018
Abstract
A neuronal population is a computational unit that receives a multivariate, time-varying input signal and creates a related multivariate output. These neural signals are modeled as stochastic processes that transmit information in real time, subject to stochastic noise. In a stationary environment, where the input signals can be characterized by constant statistical properties, the systematic relationship between a population's input and output processes determines the computation it carries out. When these statistical characteristics unexpectedly change, the population needs to adapt to its new environment if it is to maintain stable operation. Based on the general concept of homeostatic plasticity, we propose a simple compositional model of adaptive networks that achieve invariance with regard to undesired changes in the statistical properties of their input signals and maintain outputs with well-defined joint statistics. To achieve such invariance, the network model combines two functionally distinct types of plasticity. An abstract stochastic process neuron model implements a generalized form of intrinsic plasticity that adapts marginal statistics, relying only on mechanisms locally confined within each neuron and operating continuously in time, while a simple form of Hebbian synaptic plasticity operates on synaptic connections, thus shaping the interrelation between neurons as captured by a copula function. The combined effect of both mechanisms allows a neuron population to discover invariant representations of its inputs that remain stable under a wide range of transformations (e.g., shifting, scaling, and affine linear mixing). The probabilistic, population-level model of homeostatic adaptation presented here allows us to isolate and study the individual dynamics of each plasticity mechanism as well as their interaction, and could guide the future search for computationally beneficial types of adaptation.
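The division of labor between the two plasticity types can be sketched in a few lines. The toy below is not the paper's model: all parameter values, the Gaussian marginal target, and the anti-Hebbian decorrelation rule are illustrative assumptions. It shows local intrinsic plasticity pulling each unit's marginal statistics toward a fixed target while a Hebbian-type rule reshapes the dependence between units, so the joint output statistics stabilize despite a shifted, scaled, and mixed input.

```python
import numpy as np

rng = np.random.default_rng(0)
n_units, eta_ip, eta_hebb = 4, 0.01, 0.001
A = np.array([[1.0, 0.5, 0.0, 0.0],
              [0.0, 1.0, 0.5, 0.0],
              [0.0, 0.0, 1.0, 0.5],
              [0.5, 0.0, 0.0, 1.0]])  # unknown mixing applied to the input
mu_hat = np.zeros(n_units)   # local running estimate of each unit's output mean
var_hat = np.ones(n_units)   # local running estimate of each unit's output variance
W = np.eye(n_units)          # weights adapted by the Hebbian-type rule

for t in range(50_000):
    # Input with "undesired" statistics: shifted, scaled, and linearly mixed noise.
    x = 2.0 + 3.0 * (A @ rng.standard_normal(n_units))
    y = W @ x
    # Intrinsic plasticity: per-unit, purely local updates that pull each marginal
    # toward zero mean and unit variance (a stand-in for adapting the full marginal).
    mu_hat += eta_ip * (y - mu_hat)
    var_hat += eta_ip * ((y - mu_hat) ** 2 - var_hat)
    z = (y - mu_hat) / np.sqrt(var_hat)
    # Anti-Hebbian decorrelation: drives the cross-unit dependence
    # (the copula part) toward independence.
    W -= eta_hebb * ((np.outer(z, z) - np.eye(n_units)) @ W)

samples = np.array([W @ (2.0 + 3.0 * (A @ rng.standard_normal(n_units)))
                    for _ in range(5000)])
print("output correlations:\n", np.corrcoef(samples.T).round(2))
```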
Neural Computation (2017) 29 (9): 2491–2510.
Published: 01 September 2017
Abstract
Spike synchrony, which occurs in various cortical areas in response to specific perception, action, and memory tasks, has sparked a long-standing debate on the nature of temporal organization in cortex. One prominent view is that this type of synchrony facilitates the binding or grouping of separate stimulus components. We argue instead for a more general function: a measure of the prior probability of incoming stimuli, implemented by long-range, horizontal, intracortical connections. We show that networks of this kind—pulse-coupled excitatory spiking networks in a noisy environment—can provide a sufficient substrate for stimulus-dependent spike synchrony. This allows for a quick (few spikes) estimate of the match between inputs and the input history as encoded in the network structure. Given the ubiquity of small, strongly excitatory subnetworks in cortex, we thus propose that many experimental observations of spike synchrony can be viewed as signs of input patterns that resemble long-term experience—that is, of patterns with high prior probability.
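A rough numerical illustration of the claim, with a minimal pulse-coupled leaky integrate-and-fire sketch in which all parameter values are assumptions chosen for illustration: a stimulus targeting a strongly coupled subnetwork (a pattern matching the "wired-in" experience) synchronizes within a few spikes, while a scattered stimulus of equal size does not.

```python
import numpy as np

rng = np.random.default_rng(1)
N, dt, tau, v_th = 20, 0.1, 10.0, 1.0      # time in ms; toy parameter choices
W = np.zeros((N, N))
W[:10, :10] = 0.08                         # small, strongly excitatory subnetwork
np.fill_diagonal(W, 0.0)

def simulate(stimulated):
    v = rng.uniform(0.0, 0.5, N)
    drive = 0.15 * np.isin(np.arange(N), stimulated)
    spike_times = []
    for step in range(3000):               # 300 ms of simulated time
        v += dt * (-v / tau + drive + 0.02 * rng.standard_normal(N))
        fired = v >= v_th
        spike_times.extend([step * dt] * int(fired.sum()))
        v[fired] = 0.0
        v += W @ fired                     # instantaneous pulse coupling
    return np.sort(spike_times)

for label, stim in [("matching pattern ", np.arange(10)),
                    ("scattered pattern", np.arange(0, N, 2))]:
    t = simulate(stim)
    sync = np.mean(np.diff(t) < 0.5)       # crude index: spikes within 0.5 ms
    print(label, "synchrony index:", round(sync, 2))
```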
Neural Computation (2015) 27 (8): 1555–1608.
Published: 01 August 2015
Abstract
In neuroscience, data are typically generated from neural network activity. The resulting time series represent measurements from spatially distributed subsystems with complex interactions, weakly coupled to a high-dimensional global system. We present a statistical framework to estimate the direction of information flow and its delay in measurements from systems of this type. Informed by differential topology, gaussian process regression is employed to reconstruct measurements of putative driving systems from measurements of the driven systems. These reconstructions serve to estimate the delay of the interaction by means of an analytical criterion developed for this purpose. The model accounts for a range of possible sources of uncertainty, including temporally evolving intrinsic noise, while assuming complex nonlinear dependencies. Furthermore, we show that if information flow is delayed, this approach also allows for inference in strong coupling scenarios of systems exhibiting synchronization phenomena. The validity of the method is demonstrated with a variety of delay-coupled chaotic oscillators. In addition, we show that these results seamlessly transfer to local field potentials in cat visual cortex.
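A minimal sketch of the core idea, not the paper's full framework: delay-coupled logistic maps stand in for the chaotic oscillators, and the analytical delay criterion is replaced by a simple cross-validated error scan. If X drives Y with delay d, a delay embedding of the driven signal Y should reconstruct X best near lag d.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(2)
T, d_true, E, t0 = 2000, 5, 3, 12         # length, true delay, embedding dim, max lag
x, y = np.zeros(T), np.zeros(T)
x[0], y[0] = 0.4, 0.2
for t in range(1, T):                      # logistic maps, X drives Y with delay
    x[t] = 3.8 * x[t - 1] * (1 - x[t - 1])
    c = 0.1 * x[t - d_true] if t >= d_true else 0.0
    y[t] = (3.5 + c) * y[t - 1] * (1 - y[t - 1])

# Delay embedding of the *driven* system Y at times t0 .. T-1.
emb = np.column_stack([y[t0 - k : T - k] for k in range(E)])
gp = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)

errors, n = [], emb.shape[0] // 2
for lag in range(t0):                      # reconstruct X at each candidate lag
    target = x[t0 - lag : T - lag]
    gp.fit(emb[:n:5], target[:n:5])        # subsampled for speed
    errors.append(np.mean((gp.predict(emb[n:]) - target[n:]) ** 2))
print("estimated interaction delay:", int(np.argmin(errors)))  # expect ~ d_true
```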
Neural Computation (2015) 27 (6): 1159–1185.
Published: 01 June 2015
Abstract
Supplementing a differential equation with delays results in an infinite-dimensional dynamical system. This property provides the basis for a reservoir computing architecture, where the recurrent neural network is replaced by a single nonlinear node, delay-coupled to itself. Instead of the spatial topology of a network, subunits in the delay-coupled reservoir are multiplexed in time along one delay span of the system. The computational power of the reservoir is contingent on this temporal multiplexing. Here, we learn optimal temporal multiplexing by means of a biologically inspired homeostatic plasticity mechanism. Plasticity acts locally and changes the distances between the subunits along the delay, depending on how responsive these subunits are to the input. After analytically deriving the learning mechanism, we illustrate its role in improving the reservoir’s computational power. To this end, we investigate, first, the increase of the reservoir’s memory capacity. Second, we predict a NARMA-10 time series, showing that plasticity reduces the normalized root-mean-square error by more than 20%. Third, we discuss plasticity’s influence on the reservoir’s input-information capacity, the coupling strength between subunits, and the distribution of the readout coefficients.
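The architecture (without the plasticity mechanism itself) can be compressed into a discrete-time sketch with illustrative parameter values and equidistant virtual nodes: a single tanh nonlinearity fed back to itself over one delay span of N_v virtual nodes, plus a linear readout trained on the NARMA-10 benchmark.

```python
import numpy as np

rng = np.random.default_rng(3)
T, N_v = 4000, 50                          # time steps, virtual nodes per delay span
u = rng.uniform(0.0, 0.5, T)               # NARMA-10 input
d = np.zeros(T)
for t in range(9, T - 1):                  # NARMA-10 target series
    d[t + 1] = (0.3 * d[t] + 0.05 * d[t] * d[t - 9 : t + 1].sum()
                + 1.5 * u[t] * u[t - 9] + 0.1)

mask = rng.choice([-0.1, 0.1], N_v)        # input mask = temporal multiplexing
X = np.zeros((T, N_v))                     # virtual-node states
x_prev = np.zeros(N_v)
for t in range(T):
    x = np.zeros(N_v)
    for i in range(N_v):
        local = 0.2 * x[i - 1] if i > 0 else 0.0   # coupling to neighbor on delay line
        x[i] = np.tanh(0.9 * x_prev[i] + mask[i] * u[t] + local)
    X[t], x_prev = x, x

# Ridge-regression readout on washed-out states, one-step-ahead target.
A, b = X[200:-1], d[201:]
w = np.linalg.solve(A.T @ A + 1e-6 * np.eye(N_v), A.T @ b)
nrmse = np.sqrt(np.mean((A @ w - b) ** 2) / np.var(b))
print("NARMA-10 training NRMSE:", round(nrmse, 3))
```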
Neural Computation (2013) 25 (8): 1953–1993.
Published: 01 August 2013
Abstract
Although the existence of correlated spiking between neurons in a population is well known, the role such correlations play in encoding stimuli is not. We address this question by constructing pattern-based encoding models that describe how time-varying stimulus drive modulates the expression probabilities of population-wide spike patterns. The challenge is that large populations may express an astronomical number of unique patterns, and so fitting a unique encoding model for each individual pattern is not feasible. We avoid this combinatorial problem using a dimensionality-reduction approach based on regression trees. Using the insight that some patterns may, from the perspective of encoding, be statistically indistinguishable, the tree divisively clusters the observed patterns into groups whose member patterns possess similar encoding properties. These groups, corresponding to the leaves of the tree, are much smaller in number than the original patterns, and the tree itself constitutes a tractable encoding model for each pattern. Our formalism can detect an extremely weak stimulus-driven pattern structure and is based on maximizing the data likelihood, not making a priori assumptions as to how patterns should be grouped. Most important, by comparing pattern encodings with independent neuron encodings, one can determine if neurons in the population are driven independently or collectively. We demonstrate this method using multiple unit recordings from area 17 of anesthetized cat in response to a sinusoidal grating and show that pattern-based encodings are superior to those of independent neuron models. The agnostic nature of our clustering approach allows us to investigate encoding by the collective statistics that are actually present rather than those (such as pairwise) that might be presumed.
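A strongly simplified stand-in for the clustering step, not the paper's method: the article's tree is likelihood-based and divisive, whereas here patterns are simply grouped by the strength of their empirical stimulus modulation. It nonetheless shows the dimensionality reduction at work, many unique population "words" collapsing into a handful of encoding groups.

```python
import numpy as np
from collections import Counter

rng = np.random.default_rng(4)
T, n_neurons = 20_000, 5
stim = (np.arange(T) % 200) < 100               # binary time-varying stimulus drive
rates = np.where(stim[:, None], 0.25, 0.10) * (1 + 0.3 * rng.random(n_neurons))
words = (rng.random((T, n_neurons)) < rates).astype(int)

counts = {0: Counter(), 1: Counter()}            # pattern counts, stim off/on
for s, w in zip(stim, map(tuple, words)):
    counts[int(s)][w] += 1

patterns = set(counts[0]) | set(counts[1])
def modulation(w):                               # log odds of occurrence, stim on vs. off
    p1 = (counts[1][w] + 1) / (stim.sum() + 2)
    p0 = (counts[0][w] + 1) / ((~stim).sum() + 2)
    return np.log(p1 / p0)

mods = {w: modulation(w) for w in patterns}
edges = np.quantile(list(mods.values()), [0.25, 0.5, 0.75])
groups = {w: int(np.searchsorted(edges, m)) for w, m in mods.items()}
print(f"{len(patterns)} unique patterns reduced to {len(set(groups.values()))} groups")
```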
Neural Computation (2013) 25 (5): 1123–1163.
Published: 01 May 2013
Abstract
The debate over whether temporally coordinated spiking activity really exists, and whether it is relevant, has been heated over the past few years. To investigate this issue, several approaches have been taken to determine whether synchronized events occur significantly above chance, that is, whether they occur more often than expected if the neurons fire independently. Most investigations ignore or destroy the autostructure of the spiking activity of individual cells or assume Poissonian spiking as a model. Such methods that ignore the autostructure can significantly bias the coincidence statistics. Here, we study the influence of the autostructure on the probability distribution of coincident spiking events between tuples of mutually independent non-Poisson renewal processes. In particular, we consider two types of renewal processes that have been suggested as appropriate models of experimental spike trains: a gamma and a log-normal process. For a gamma process, we characterize the shape of the distribution analytically with the Fano factor (FF_c). In addition, we perform Monte Carlo estimations to derive the full shape of the distribution and the probability of false positives if a process type different from the one actually present is assumed. We also determine how manipulations of such spike trains used to generate surrogate data, here dithering, change the distribution of coincident events and influence the significance estimation. We find, first, that the width of the coincidence count distribution and its FF_c depend critically and in a nontrivial way on the detailed structure of the spike trains as characterized by the coefficient of variation (CV). Second, the dependence of the FF_c on the CV is complex and mostly nonmonotonic. Third, spike dithering, even if as small as a fraction of the interspike interval, can falsify the inference on coordinated firing.
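The Monte Carlo part of such an analysis is compact. A sketch with assumed rates and binning choices: simulate pairs of independent gamma renewal processes at several shape values (the shape sets the CV via CV = 1/sqrt(shape)), count same-bin coincidences, and compute the Fano factor of the coincidence count.

```python
import numpy as np

rng = np.random.default_rng(5)
T, bin_w, n_trials, rate = 10.0, 0.005, 2000, 20.0   # seconds, 5 ms bins, Hz

def gamma_spike_train(shape):
    # Mean ISI = shape * scale = 1/rate, so the firing rate is fixed across shapes.
    isis = rng.gamma(shape, 1.0 / (rate * shape), size=int(3 * rate * T))
    t = np.cumsum(isis)
    return t[t < T]

n_bins = int(T / bin_w)
for shape in [0.5, 1.0, 4.0]:                        # CV ~ 1.41, 1.0 (Poisson), 0.5
    counts = np.empty(n_trials)
    for k in range(n_trials):
        b1 = np.histogram(gamma_spike_train(shape), bins=n_bins, range=(0, T))[0] > 0
        b2 = np.histogram(gamma_spike_train(shape), bins=n_bins, range=(0, T))[0] > 0
        counts[k] = np.sum(b1 & b2)                  # coincident (same-bin) events
    ff = counts.var() / counts.mean()                # Fano factor of coincidence count
    print(f"shape={shape}: CV={1 / np.sqrt(shape):.2f}, FF_c={ff:.2f}")
```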
Neural Computation (2012) 24 (10): 2543–2578.
Published: 01 October 2012
Abstract
The moving bar experiment is a classic paradigm for characterizing the receptive field (RF) properties of neurons in primary visual cortex (V1). Current approaches for analyzing neural spiking activity recorded from these experiments do not take into account the point-process nature of these data and the circular geometry of the stimulus presentation. We present a novel analysis approach to mapping V1 receptive fields that combines point-process generalized linear models (PPGLM) with tomographic reconstruction computed by filtered back-projection. We use the method to map the RF sizes and orientations of 251 V1 neurons recorded from two macaque monkeys during a moving bar experiment. Our cross-validated goodness-of-fit analyses show that the PPGLM provides a more accurate characterization of spike train data than analyses based on rate functions computed by the methods of spike-triggered averaging or a first-order Wiener-Volterra kernel. Our analysis leads to a new definition of RF size as the spatial area over which the spiking activity is significantly greater than baseline activity. Our approach yields larger RF sizes and sharper orientation tuning estimates. The tomographic reconstruction paradigm further suggests an efficient approach to choosing the number of directions and the number of trials per direction in designing moving bar experiments. Our results demonstrate that standard tomographic principles for image reconstruction can be adapted to characterize V1 RFs and that two fundamental properties, size and orientation, may be substantially different from what is currently reported.
Includes: Supplementary data
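The tomographic half of the method can be sketched with standard tools. In this toy version the sinogram is generated directly from a synthetic RF rather than from fitted PPGLM rate profiles, so it illustrates only the filtered back-projection step, not the point-process modeling.

```python
import numpy as np
from skimage.transform import radon, iradon

rng = np.random.default_rng(6)
size, n_dirs = 64, 18
yy, xx = np.mgrid[:size, :size] - size / 2
rf_true = np.exp(-((xx - 5) ** 2 + (yy + 3) ** 2) / (2 * 4.0 ** 2))  # toy Gaussian RF

theta = np.linspace(0.0, 180.0, n_dirs, endpoint=False)
# Stand-in for the direction-resolved response profiles ("projections").
sinogram = radon(rf_true, theta=theta, circle=False)
sinogram += 0.05 * rng.standard_normal(sinogram.shape)   # noisy "rate" measurements

rf_est = iradon(sinogram, theta=theta, circle=False, filter_name="ramp")
peak = np.unravel_index(np.argmax(rf_est), rf_est.shape)
print("estimated RF peak (row, col):", peak)   # should land near the true center
```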
Neural Computation (2011) 23 (6): 1452–1483.
Published: 01 June 2011
Abstract
Statistical models of neural activity are integral to modern neuroscience. Recently, interest has grown in modeling the spiking activity of populations of simultaneously recorded neurons to study the effects of correlations and functional connectivity on neural information processing. However, any statistical model must be validated by an appropriate goodness-of-fit test. Kolmogorov-Smirnov tests based on the time-rescaling theorem have proven to be useful for evaluating point-process-based statistical models of single-neuron spike trains. Here we discuss the extension of the time-rescaling theorem to the multivariate (neural population) case. We show that even in the presence of strong correlations between spike trains, models that neglect couplings between neurons can erroneously pass the univariate time-rescaling test. We present the multivariate version of the time-rescaling theorem and provide a practical step-by-step procedure for applying it to testing the sufficiency of neural population models. Using several simple analytically tractable models and more complex simulated and real data sets, we demonstrate that important features of the population activity can be detected only using the multivariate extension of the test.
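The rescaling-plus-KS machinery being extended here can be sketched on a toy coupled pair (rates and coupling filter are illustrative assumptions; the article's multivariate procedure rescales all trains jointly under the population model, which this univariate sketch does not attempt). Between spikes, the integrated conditional intensity should be Exp(1)-distributed exactly when the intensity model is right.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
dt, T = 0.001, 500.0
n = int(T / dt)
lam1 = np.full(n, 5.0)                           # neuron 1: homogeneous, 5 Hz
s1 = rng.random(n) < lam1 * dt
kernel = 20.0 * np.exp(-np.arange(100) / 20.0)   # excitatory coupling filter (100 ms)
lam2 = 5.0 + np.convolve(s1, kernel)[:n]         # neuron 2: driven by neuron 1
s2 = rng.random(n) < lam2 * dt

def rescaled_isis(spikes, lam):
    cum = np.cumsum(lam * dt)                    # integrated conditional intensity
    return np.diff(cum[np.where(spikes)[0]])

for name, model in [("full model      ", lam2),
                    ("rate-only model ", np.full(n, lam2.mean()))]:
    taus = rescaled_isis(s2, model)
    # Correct model: large p (Exp(1) holds); model ignoring coupling: small p.
    print(name, "KS p =", round(stats.kstest(taus, "expon").pvalue, 4))
```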
Neural Computation (2010) 22 (10): 2477–2506.
Published: 01 October 2010
Abstract
One approach for understanding the encoding of information by spike trains is to fit statistical models and then test their goodness of fit. The time-rescaling theorem provides a goodness-of-fit test consistent with the point process nature of spike trains. The interspike intervals (ISIs) are rescaled (as a function of the model's spike probability) to be independent and exponentially distributed if the model is accurate. A Kolmogorov-Smirnov (KS) test between the rescaled ISIs and the exponential distribution is then used to check goodness of fit. This rescaling relies on assumptions of continuously defined time and instantaneous events. However, spikes have finite width, and statistical models of spike trains almost always discretize time into bins. Here we demonstrate that finite temporal resolution of discrete time models prevents their rescaled ISIs from being exponentially distributed. Poor goodness of fit may be erroneously indicated even if the model is exactly correct. We present two adaptations of the time-rescaling theorem to discrete time models. In the first we propose that instead of assuming the rescaled times to be exponential, the reference distribution be estimated through direct simulation by the fitted model. In the second, we prove a discrete time version of the time-rescaling theorem that analytically corrects for the effects of finite resolution. This allows us to define a rescaled time that is exponentially distributed, even at arbitrary temporal discretizations. We demonstrate the efficacy of both techniques by fitting generalized linear models to both simulated spike trains and spike trains recorded experimentally in monkey V1 cortex. Both techniques give nearly identical results, reducing the false-positive rate of the KS test and greatly increasing the reliability of model evaluation based on the time-rescaling theorem.
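The first adaptation, estimating the reference distribution by simulation, is easy to sketch. Assuming per-bin spike probability p and using q = -log(1 - p) as the discrete analogue of the integrated intensity, the toy below (constant p, with bins deliberately coarse) shows the naive Exp(1) comparison rejecting a perfectly specified model while the simulated reference passes.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(8)
n_bins, p = 200_000, 0.05                    # coarse bins: p per bin is deliberately large

def rescaled(spikes, p_vec):
    q = -np.log(1.0 - p_vec)                 # per-bin integrated intensity
    cum = np.cumsum(q)
    return np.diff(cum[np.where(spikes)[0]])

p_vec = np.full(n_bins, p)
data = rng.random(n_bins) < p_vec            # "recorded" train; model is exactly true
tau_data = rescaled(data, p_vec)

# Naive check against Exp(1) rejects despite a perfect model (discretization):
print("vs Exp(1):            p =", stats.kstest(tau_data, "expon").pvalue)

# Reference distribution built by direct simulation from the fitted model:
sim = rng.random(n_bins) < p_vec
tau_sim = rescaled(sim, p_vec)
print("vs simulated reference: p =", stats.ks_2samp(tau_data, tau_sim).pvalue)
```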