Sandro Romani
Neural Computation (2021) 33 (3): 827–852.
Published: 01 March 2021
Abstract
Empirical estimates of the dimensionality of neural population activity are often much lower than the population size. Similar phenomena are also observed in trained and designed neural network models. These experimental and computational results suggest that mapping low-dimensional dynamics to high-dimensional neural space is a common feature of cortical computation. Despite the ubiquity of this observation, the constraints arising from such mapping are poorly understood. Here we consider a specific example of mapping low-dimensional dynamics to high-dimensional neural activity—the neural engineering framework. We analytically solve the framework for the classic ring model—a neural network encoding a static or dynamic angular variable. Our results provide a complete characterization of the success and failure modes for this model. Based on similarities between this and other frameworks, we speculate that these results could apply to more general scenarios.
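As a hedged illustration of the kind of mapping this paper analyzes, below is a minimal rate-model sketch of the classic ring model: a one-dimensional angular variable held in the activity bump of a high-dimensional population. The cosine connectivity, gains, and all parameter values are illustrative assumptions, not the paper's neural engineering framework construction or its analytical solution.

```python
import numpy as np

# Minimal ring-model sketch: N rate neurons with preferred angles tiling
# [0, 2*pi). Cosine recurrent connectivity (a standard assumption for ring
# models) sustains an activity bump whose position encodes the angle.
N = 128
theta = np.linspace(0, 2 * np.pi, N, endpoint=False)   # preferred angles
J0, J1 = -1.0, 2.5                                     # uniform inhibition, cosine excitation
W = (J0 + J1 * np.cos(theta[:, None] - theta[None, :])) / N

dt, tau = 1e-3, 10e-3
r = np.maximum(0.0, np.cos(theta - 1.0))   # seed a bump near angle = 1 rad

for _ in range(2000):                      # relax with constant background drive
    r += (dt / tau) * (-r + np.maximum(0.0, W @ r + 0.5))

# Low-dimensional readout: decode the stored angle from the population vector.
decoded = np.angle(np.exp(1j * theta) @ r)
print(f"decoded angle: {decoded:.3f} rad")   # close to the seeded 1 rad
```

In the framework the paper studies, the encoders, decoders, and recurrent weights would instead be derived from the target low-dimensional dynamics; this sketch only shows the resulting bump-attractor behavior.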
Neural Computation (2013) 25 (10): 2523–2544.
Published: 01 October 2013
Abstract
Most people have great difficulty in recalling unrelated items. For example, in free recall experiments, lists of more than a few randomly selected words cannot be accurately repeated. Here we introduce a phenomenological model of memory retrieval inspired by theories of neuronal population coding of information. The model predicts nontrivial scaling behaviors for the mean and standard deviation of the number of recalled words for lists of increasing length. Our results suggest that associative information retrieval is a dominating factor that limits the number of recalled items.
Includes: Supplementary data
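The scaling claim can be made concrete with a toy simulation of associative retrieval. The transition rule below (greedy jumps on a random symmetric similarity matrix, with immediate backtracking forbidden) is an assumed caricature of this class of model, not the paper's exact definition; it illustrates how deterministic associative transitions trap recall in a cycle, so the number of recalled items grows sublinearly with list length.

```python
import numpy as np

rng = np.random.default_rng(0)

def recalled(L):
    """Distinct items recalled from a list of length L under a greedy
    associative walk on a random similarity matrix (an assumed toy rule):
    from the current item, jump to the most similar item, never returning
    directly to the item just left."""
    S = rng.random((L, L))
    S = (S + S.T) / 2                  # symmetric random similarities
    np.fill_diagonal(S, -np.inf)
    prev, cur = -1, 0
    items, states = {0}, set()
    while (prev, cur) not in states:   # deterministic walk, so it must cycle
        states.add((prev, cur))
        nxt = S[cur].copy()
        if prev >= 0:
            nxt[prev] = -np.inf        # forbid immediate backtracking
        prev, cur = cur, int(np.argmax(nxt))
        items.add(cur)
    return len(items)

for L in (8, 16, 32, 64, 128):
    mean = np.mean([recalled(L) for _ in range(500)])
    print(f"L={L:4d}  mean recalled={mean:6.2f}")   # grows sublinearly with L
```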
Neural Computation (2011) 23 (3): 651–655.
Published: 01 March 2011
Abstract
The pattern of spikes recorded from place cells in the rodent hippocampus is strongly modulated by both the spatial location in the environment and the theta rhythm. The phases of the spikes in the theta cycle advance during movement through the place field. Recently, intracellular recordings from hippocampal neurons (Harvey, Collman, Dombeck, & Tank, 2009) showed an increase in the amplitude of membrane potential oscillations inside the place field, which was interpreted as evidence that an intracellular mechanism caused phase precession. Here we show that an existing network model of the hippocampus (Tsodyks, Skaggs, Sejnowski, & McNaughton, 1996) can equally reproduce this and other aspects of the intracellular recordings, which suggests that new experiments are needed to distinguish the contributions of intracellular and network mechanisms to phase precession.
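The observations at issue can be reproduced with a toy single-cell caricature (not the Tsodyks et al. network model itself; all parameter values are assumptions): a depolarizing ramp of input inside the place field plus a theta oscillation whose amplitude grows with the ramp yields both the in-field increase in membrane potential oscillations and spikes at progressively earlier theta phases.

```python
import numpy as np

# Toy caricature (not the network model from the paper): membrane potential of a
# place cell modeled as a depolarizing ramp across the field plus an 8 Hz theta
# oscillation whose amplitude grows with the ramp. Threshold crossings then occur
# at earlier theta phases as the field is traversed (phase precession), while the
# subthreshold oscillation amplitude increases in-field, as in the recordings.
dt = 1e-4
t = np.arange(0.0, 2.0, dt)                 # 2 s traversal of the place field
ramp = 5.0 * t                              # 0 -> 10 mV depolarization (assumed linear)
phase = 2 * np.pi * 8.0 * t                 # 8 Hz theta
v = -65.0 + ramp + (1.0 + 0.3 * ramp) * np.sin(phase)

threshold = -57.0
crossings = np.where((v[1:] >= threshold) & (v[:-1] < threshold))[0]
for i in crossings:
    deg = np.degrees(phase[i] % (2 * np.pi))
    # phases advance from cycle to cycle (values wrap through 0)
    print(f"t={t[i]:.3f} s   theta phase at spike: {deg:6.1f} deg")
```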
Neural Computation (2008) 20 (8): 1928–1950.
Published: 01 August 2008
Abstract
A network of excitatory synapses trained with a conservative version of Hebbian learning is used as a model for distinguishing thousands of once-seen (familiar) stimuli from stimuli never seen before. Such networks were initially proposed for modeling memory retrieval (selective delay activity). We show that the same framework allows the incorporation of both familiarity recognition and memory retrieval, and estimate the network's capacity. In the case of binary neurons, we extend the analysis of Amit and Fusi (1994) to obtain capacity limits based on the signal-to-noise ratio of the difference in input fields between neurons selective and nonselective to the learned stimuli. We show that with fast learning (potentiation probability approximately 1), the most recently learned patterns can be retrieved in working memory (selective delay activity). A much higher number of once-seen learned patterns elicits a realistic familiarity signal in the presence of an external field. With potentiation probability much less than 1 (slow learning), memory retrieval disappears, whereas familiarity recognition capacity is maintained at a similarly high level. This analysis is corroborated in simulations. For analog neurons, where such analysis is more difficult, we simplify the capacity analysis by studying the excess number of potentiated synapses above the steady-state distribution. In this framework, we derive the optimal constraint between potentiation and depression probabilities that maximizes the capacity.
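The slow-learning familiarity regime can be sketched with a small simulation in the style of Amit and Fusi's binary-synapse palimpsest. The exact depression rule and all parameter values below are illustrative assumptions, not the paper's analysis; the point is that once-seen patterns evoke a measurably larger synaptic field than novel ones.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hedged sketch of an Amit-Fusi-style palimpsest: binary {0,1} synapses among N
# neurons, updated stochastically by each once-seen sparse pattern. A familiarity
# signal is read out as the mean synaptic field evoked by a probe pattern.
N, f = 1000, 0.05              # neurons, coding level (fraction active)
q_plus, q_minus = 0.1, 0.005   # slow learning: potentiation/depression probabilities
P = 500                        # number of once-seen stimuli

J = (rng.random((N, N)) < 0.5).astype(np.int8)        # synapses start at random states
patterns = (rng.random((P, N)) < f).astype(bool)

for xi in patterns:                                    # one-shot presentations
    both = np.outer(xi, xi)                            # pre and post both active
    pre_only = np.outer(~xi, xi)                       # pre active, post silent
    J[both & (rng.random((N, N)) < q_plus)] = 1        # stochastic potentiation
    J[pre_only & (rng.random((N, N)) < q_minus)] = 0   # stochastic depression (assumed rule)

def familiarity(probe):
    """Mean recurrent field onto the probe's active neurons."""
    return (J @ probe)[probe].mean()

novel = (rng.random((P, N)) < f).astype(bool)
seen_sig = np.mean([familiarity(p) for p in patterns])
new_sig = np.mean([familiarity(p) for p in novel])
print(f"mean field  seen: {seen_sig:.2f}   novel: {new_sig:.2f}")   # seen > novel
```

With q_plus near 1 instead, the most recent patterns dominate the synaptic matrix, which corresponds to the fast-learning regime the abstract associates with retrievable working memory.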