Henning Sprekeler
Journal articles: 1-4 of 4
Neural Computation (2015) 27 (8): 1624–1672.
Published: 01 August 2015
Abstract
A place cell is a neuron that fires whenever the animal traverses a particular location of the environment—the place field of the cell. Place cells are found in two regions of the rodent hippocampus: CA3 and CA1. Motivated by the anatomical connectivity between these two regions and by the evidence for synaptic plasticity at these connections, we study how a place field in CA1 can be inherited from an upstream region such as CA3 through a Hebbian learning rule, in particular, through spike-timing-dependent plasticity (STDP). To this end, we model a population of CA3 place cells projecting to a single CA1 cell, and we assume that the CA1 input synapses are plastic according to STDP. With both numerical and analytical methods, we show that in the case of overlapping CA3 input place fields, the STDP learning rule leads to the formation of a place field in CA1. We then investigate the roles of hippocampal theta modulation and phase precession in the inheritance process. We find that theta modulation favors the inheritance and leads to faster place field formation, whereas phase precession changes the drift of CA1 place fields over time.
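The pairwise STDP rule referred to here is conventionally an exponential learning window over pre-post spike-time differences. Below is a minimal sketch of that update under standard additive STDP; the parameter values, spike times, and names are illustrative assumptions, not taken from the paper.

    import numpy as np

    # Illustrative STDP parameters (assumed, not the paper's values)
    A_PLUS, A_MINUS = 0.005, 0.0055   # potentiation / depression amplitudes
    TAU_PLUS, TAU_MINUS = 20.0, 20.0  # time constants (ms)

    def stdp_dw(dt):
        """Weight change for one pre-post spike pair, dt = t_post - t_pre (ms).

        Pre-before-post (dt >= 0) potentiates; post-before-pre depresses.
        """
        if dt >= 0:
            return A_PLUS * np.exp(-dt / TAU_PLUS)
        return -A_MINUS * np.exp(dt / TAU_MINUS)

    # Net change for one CA3->CA1 synapse, summed over all spike pairs
    pre = np.array([10.0, 50.0, 90.0])    # hypothetical CA3 spike times (ms)
    post = np.array([15.0, 48.0, 100.0])  # hypothetical CA1 spike times (ms)
    dw = sum(stdp_dw(tp - tq) for tq in pre for tp in post)
    print(f"net weight change: {dw:+.5f}")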
Neural Computation (2011) 23 (12): 3287–3302.
Published: 01 December 2011
Abstract
The past decade has seen rising interest in Laplacian eigenmaps (LEMs) for nonlinear dimensionality reduction. LEMs have been used in spectral clustering, in semisupervised learning, and for providing efficient state representations for reinforcement learning. Here, we show that LEMs are closely related to slow feature analysis (SFA), a biologically inspired, unsupervised learning algorithm originally designed for learning invariant visual representations. We show that SFA can be interpreted as a function approximation of LEMs, where the topological neighborhoods required for LEMs are implicitly defined by the temporal structure of the data. Based on this relation, we propose a generalization of SFA to arbitrary neighborhood relations and demonstrate its applicability to spectral clustering. Finally, we review previous work with the goal of providing a unifying view on SFA and LEMs.
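For orientation, the linear case of SFA that anchors this relation fits in a few lines: whiten the input, then take the directions in which the temporal differences of the whitened signal vary least. This is a sketch of the textbook linear SFA algorithm, not code from the paper; all names and the example signal are illustrative.

    import numpy as np

    def linear_sfa(x, n_components=2):
        """Linear SFA on a signal x of shape (T, d); returns the weight
        vectors of the n slowest zero-mean, unit-variance features."""
        x = x - x.mean(axis=0)
        # Whitening: rotate and rescale so the covariance becomes identity
        evals, evecs = np.linalg.eigh(np.cov(x, rowvar=False))
        W_white = evecs / np.sqrt(evals)
        z = x @ W_white
        # Slowness: minor components of the covariance of time differences
        dcov = np.cov(np.diff(z, axis=0), rowvar=False)
        _, devecs = np.linalg.eigh(dcov)        # eigenvalues in ascending order
        return W_white @ devecs[:, :n_components]

    # Example: recover a slow sinusoid mixed with faster components
    t = np.linspace(0, 8 * np.pi, 2000)
    x = np.column_stack([np.sin(0.1 * t) + 0.1 * np.sin(7 * t),
                         np.cos(5 * t)])
    y = (x - x.mean(axis=0)) @ linear_sfa(x, n_components=1)

The time differences play the role of the LEM neighborhood graph: samples adjacent in time are implicitly treated as neighbors, which is the relation the abstract generalizes.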
Neural Computation (2011) 23 (2): 303–335.
Published: 01 February 2011
Abstract
We develop a group-theoretical analysis of slow feature analysis for the case where the input data are generated by applying a set of continuous transformations to static templates. As an application of the theory, we analytically derive nonlinear visual receptive fields and show that their optimal stimuli, as well as the orientation and frequency tuning, are in good agreement with previous simulations of complex cells in primary visual cortex (Berkes and Wiskott, 2005). The theory suggests that side- and end-stopping can be interpreted as a weak breaking of translation invariance. Direction selectivity is also discussed.
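For reference, the optimization problem such an analysis starts from is the standard SFA formulation (reproduced from the general SFA literature; the notation here is mine, not the paper's): for each output y_j(t) = g_j(x(t)),

    \min_{g_j} \; \Delta(y_j) = \langle \dot{y}_j^2 \rangle_t
    \quad \text{s.t.} \quad
    \langle y_j \rangle_t = 0, \;\;
    \langle y_j^2 \rangle_t = 1, \;\;
    \langle y_i y_j \rangle_t = 0 \;\; (i < j),

i.e., the slowest decorrelated unit-variance outputs are extracted in order. The group-theoretical analysis then asks what these optima look like when x(t) is generated by continuous transformations acting on static templates.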
Neural Computation (2008) 20 (4): 1026–1041.
Published: 01 April 2008
Abstract
Understanding the guiding principles of sensory coding strategies is a main goal in computational neuroscience. Among others, the principles of predictive coding and slowness appear to capture aspects of sensory processing. Predictive coding postulates that sensory systems are adapted to the structure of their input signals such that information about future inputs is encoded. Slow feature analysis (SFA) is a method for extracting slowly varying components from quickly varying input signals, thereby learning temporally invariant features. Here, we use the information bottleneck method to state an information-theoretic objective function for temporally local predictive coding. We then show that the linear case of SFA can be interpreted as a variant of predictive coding that maximizes the mutual information between the current output of the system and the input signal in the next time step. This demonstrates that the slowness principle and predictive coding are intimately related.
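The objective sketched here is an instance of the generic information bottleneck functional; the identification below follows the abstract, and the notation is mine. Compress the current input x_t into an output y_t while preserving information about the next input:

    \min_{p(y_t \mid x_t)} \; I(x_t; y_t) \;-\; \beta \, I(y_t; x_{t+1}),

where the trade-off parameter \beta > 0 weights prediction against compression. The abstract's claim is that, in the linear case, SFA can be read as solving a variant of this temporally local predictive-coding objective.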