Search results for ShiNung Ching (1-3 of 3)
Journal Articles
Publisher: Journals Gateway
Neural Computation (2019) 31 (5): 943–979.
Published: 01 May 2019
Abstract (11 figures)
A key aspect of the neural coding problem is understanding how representations of afferent stimuli are built through the dynamics of learning and adaptation within neural networks. The infomax paradigm is built on the premise that such learning attempts to maximize the mutual information between input stimuli and neural activities. In this letter, we tackle the problem of such information-based neural coding with an eye toward two conceptual hurdles. Specifically, we examine and then show how this form of coding can be achieved with online input processing. Our framework thus obviates the biological incompatibility of optimization methods that rely on global network awareness and batch processing of sensory signals. Central to our result is the use of variational bounds as a surrogate objective function, an established technique that has not previously been shown to yield online policies. We obtain learning dynamics for both linear-continuous and discrete spiking neural encoding models under the umbrella of linear gaussian decoders. This result is enabled by approximating certain information quantities in terms of neuronal activity via pairwise feedback mechanisms. Furthermore, we tackle the problem of how such learning dynamics can be realized with strict energetic constraints. We show that endowing networks with auxiliary variables that evolve on a slower timescale can allow for the realization of saddle-point optimization within the neural dynamics, leading to neural codes with favorable properties in terms of both information and energy.
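The variational-bound idea above can be illustrated numerically. Below is a minimal sketch (not the authors' algorithm) of a lower bound on the mutual information between stimuli and responses using a linear gaussian decoder, in the spirit of the surrogate objective the letter describes; the toy encoder, dimensions, and noise level are all illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear-continuous encoder: responses r = W x + noise (all choices illustrative)
d_x, d_r, n = 3, 5, 5000
W = rng.normal(size=(d_r, d_x))
x = rng.normal(size=(n, d_x))                    # stimuli, x ~ N(0, I)
r = x @ W.T + 0.5 * rng.normal(size=(n, d_r))    # noisy neural responses

# Linear gaussian decoder q(x|r) = N(D r, sigma^2 I), fit by least squares
D, *_ = np.linalg.lstsq(r, x, rcond=None)
resid = x - r @ D
sigma2 = resid.var(axis=0).mean()

# Variational (Barber-Agakov) bound: I(x; r) >= H(x) + E[log q(x|r)]
H_x = 0.5 * d_x * np.log(2 * np.pi * np.e)       # entropy of N(0, I), in nats
E_log_q = (-0.5 * d_x * np.log(2 * np.pi * sigma2)
           - 0.5 * (resid ** 2).sum(axis=1).mean() / sigma2)
bound = H_x + E_log_q
print(f"variational lower bound on I(x; r): {bound:.3f} nats")
```

Because the bound depends only on decoder residuals, it can in principle be evaluated (and ascended) from streaming samples rather than batch statistics, which is the property the online policies exploit.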
Neural Computation (2017) 29 (9): 2528–2552.
Published: 01 September 2017
Abstract (10 figures)
We consider the problem of optimizing information-theoretic quantities in recurrent networks via synaptic learning. In contrast to feedforward networks, the recurrence presents a key challenge insofar as an optimal learning rule must aggregate the joint distribution of the whole network. This challenge, in particular, makes a local policy (i.e., one that depends on only pairwise interactions) difficult. Here, we report a local metaplastic learning rule that performs approximate optimization by estimating whole-network statistics through the use of several slow, nested dynamical variables. These dynamics provide the rule with both anti-Hebbian and Hebbian components, thus allowing for decorrelating and correlating learning regimes that can occur when either is favorable for optimality. We demonstrate the performance of the synthesized rule in comparison to classical BCM dynamics and use the networks to conduct history-dependent tasks that highlight the advantages of recurrence. Finally, we show the consistency of the resultant learned networks with notions of criticality, including balanced ratios of excitation and inhibition.
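The classical BCM dynamics used as the comparison baseline already illustrate the slow-variable idea: a sliding threshold that low-pass filters squared activity gates the rule between anti-Hebbian and Hebbian regimes. A minimal single-neuron sketch (learning rate, timescale, and input statistics are illustrative assumptions, not the letter's metaplastic rule):

```python
import numpy as np

rng = np.random.default_rng(1)

# Classical BCM rule for a single rate neuron y = w . x:
#   dw    = eta * y * (y - theta) * x    (anti-Hebbian for y < theta, Hebbian above)
#   dtheta = (y**2 - theta) / tau        (slow sliding threshold: metaplasticity)
eta, tau, steps = 1e-3, 100.0, 20000
w = rng.normal(scale=0.1, size=4)
theta = 1.0
for _ in range(steps):
    x = rng.choice([0.0, 1.0], size=4)   # random binary input patterns
    y = w @ x
    w += eta * y * (y - theta) * x
    theta += (y ** 2 - theta) / tau

print("final weights:", np.round(w, 3), " threshold:", round(theta, 3))
```

The reported rule generalizes this single slow threshold to several nested slow variables that estimate whole-network statistics, which is what makes a local policy feasible in the recurrent setting.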
Neural Computation (2016) 28 (9): 1889–1926.
Published: 01 September 2016
Abstract (67 figures)
A well-known phenomenon in sensory perception is desensitization, wherein behavioral responses to persistent stimuli become attenuated over time. In this letter, our focus is on studying mechanisms through which desensitization may be mediated at the network level and, specifically, how sensitivity changes arise as a function of long-term plasticity. Our principal object of study is a generic isoinhibitory motif: a small excitatory-inhibitory network with recurrent inhibition. Such a motif is of interest due to its overrepresentation in laminar sensory network architectures. Here, we introduce a sensitivity analysis derived from control theory in which we characterize the fixed-energy reachable set of the motif. This set describes the regions of the phase-space that are more easily (in terms of stimulus energy) accessed, thus providing a holistic assessment of sensitivity. We specifically focus on how the geometry of this set changes due to repetitive application of a persistent stimulus. We find that for certain motif dynamics, this geometry contracts along the stimulus orientation while expanding in orthogonal directions. In other words, the motif not only desensitizes to the persistent input, but heightens its responsiveness (sensitizes) to those that are orthogonal. We develop a perturbation analysis that links this sensitization to both plasticity-induced changes in synaptic weights and the intrinsic dynamics of the network, highlighting that the effect is not purely due to weight-dependent disinhibition. Instead, this effect depends on the relative neuronal time constants and the consequent stimulus-induced drift that arises in the motif phase-space. For tightly distributed (but random) parameter ranges, sensitization is quite generic and manifests in larger recurrent E-I networks within which the motif is embedded.
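The fixed-energy reachable set invoked above has a standard computation for a linear system dz/dt = A z + B u: it is the ellipsoid {z : z^T W(T)^{-1} z <= 1} whose shape is given by the finite-horizon controllability Gramian W(T). A minimal numpy sketch on an illustrative two-unit E-I linearization (the matrices A and B, horizon, and step count are assumptions, not the motif parameters from the letter):

```python
import numpy as np

# Illustrative linearized E-I motif: dz/dt = A z + B u, stimulus enters at the E unit
A = np.array([[-1.0, -0.8],    # excitatory unit, inhibited by the I unit
              [ 0.9, -1.5]])   # inhibitory unit, driven by the E unit
B = np.array([[1.0], [0.0]])

# Finite-horizon controllability Gramian via the Lyapunov differential equation
#   dW/dt = A W + W A^T + B B^T,  W(0) = 0   (forward-Euler integration)
T, n_steps = 5.0, 2000
dt = T / n_steps
Wg = np.zeros((2, 2))
for _ in range(n_steps):
    Wg = Wg + dt * (A @ Wg + Wg @ A.T + B @ B.T)

# Unit-energy reachable set is the ellipsoid {z : z^T Wg^{-1} z <= 1};
# its semi-axes are sqrt(eigenvalues of Wg) along the eigenvectors.
evals, evecs = np.linalg.eigh(Wg)
semi_axes = np.sqrt(evals)
print("reachable-ellipsoid semi-axes:", np.round(semi_axes, 4))
```

Tracking how these semi-axes stretch or contract as plasticity updates the weights in A is one way to see the desensitization-along-the-stimulus / sensitization-orthogonal geometry the abstract describes.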