Konrad P. Körding
Journal Articles
Publisher: Journals Gateway
Neural Computation (2003) 15 (8): 1751–1759.
Published: 01 August 2003
Abstract
Learning in neural networks is usually applied to parameters of linear kernels while the model's nonlinearity is kept fixed. For successful models, the properties and parameters of the nonlinearity therefore have to be specified from a priori knowledge, which is often missing. Here, we investigate adapting the nonlinearity simultaneously with the linear kernel. We use natural visual stimuli to train a simple model of the visual system. Many of the neurons converge to an energy detector matching existing models of complex cells. The overall distribution of the parameter describing the nonlinearity closely matches recent physiological results. Controls with randomly shuffled natural stimuli and with pink noise demonstrate that the match between simulation and experimental results depends on the higher-order statistical properties of natural stimuli.
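The joint adaptation described in the abstract can be sketched as follows. This is an illustrative toy, not the paper's training setup: a unit's response is modeled as r = |w·x|^p, and both the kernel w and the nonlinearity exponent p are adapted by gradient descent on synthetic targets produced by an energy-style detector (p = 2). All stimuli, the squared-error objective, and the learning rates are assumptions made for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "stimuli" and target responses from a hidden energy-style
# detector r* = |w*.x|**2 (the exponent the learner should recover).
n, d = 2000, 8
X = rng.standard_normal((n, d))
w_true = rng.standard_normal(d)
w_true /= np.linalg.norm(w_true)
r_true = np.abs(X @ w_true) ** 2.0

# Model r = (|w.x| + eps)**p: both the linear kernel w and the
# nonlinearity exponent p are free parameters of the learner.
w = 0.1 * rng.standard_normal(d)
p = 1.0
lr, eps = 1e-3, 1e-3

def mse(w, p):
    return np.mean(((np.abs(X @ w) + eps) ** p - r_true) ** 2)

loss_start = mse(w, p)
for _ in range(5000):
    u = X @ w
    a = np.abs(u) + eps
    r = a ** p
    err = r - r_true
    # Gradients of the mean squared error w.r.t. w and p.
    grad_w = X.T @ (err * p * a ** (p - 1) * np.sign(u)) * (2.0 / n)
    grad_p = 2.0 * np.mean(err * r * np.log(a))
    w -= lr * grad_w
    p -= lr * grad_p
loss_end = mse(w, p)

assert loss_end < loss_start  # kernel and nonlinearity both adapted
```

The point of the sketch is only that the nonlinearity's parameter receives its own gradient and is learned alongside the kernel, rather than being fixed a priori.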
Neural Computation (2001) 13 (12): 2823–2849.
Published: 01 December 2001
Abstract
Neurons in mammalian cerebral cortex combine specific responses with respect to some stimulus features with invariant responses to other stimulus features. For example, in primary visual cortex, complex cells code for the orientation of a contour but ignore its position to a certain degree. In higher areas, such as the inferotemporal cortex, translation-invariant, rotation-invariant, and even viewpoint-invariant responses can be observed. Such properties are of obvious interest to artificial systems performing tasks like pattern recognition. How such response properties develop in biological systems remains to be resolved. Here we present an unsupervised learning rule that addresses this problem. It is based on a neuron model with two sites of synaptic integration, allowing qualitatively different effects of input to the basal and apical dendritic trees, respectively. Without supervision, the system learns to extract invariance properties using the temporal or spatial continuity of stimuli. Furthermore, top-down information can be smoothly integrated into the same framework. This model thus lends a physiological implementation to approaches of unsupervised learning of invariant-response properties.
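The idea of extracting invariance from temporal continuity can be sketched with a trace rule in the spirit of Földiák (1991); the paper's two-integration-site neuron model is more elaborate, and all stimuli, parameters, and the symmetry-breaking initialization below are assumptions made for the sketch. Two competing units watch objects sweep across positions; because competition acts on a slowly decaying trace of activity, the unit that wins at the start of a sweep keeps winning throughout it and so learns to respond to that object at every position.

```python
import numpy as np

# Toy invariance learning from temporal continuity via a trace rule.
n_obj, n_pos = 2, 4
dim = n_obj * n_pos            # one-hot input coding (object, position)

def stimulus(obj, pos):
    x = np.zeros(dim)
    x[obj * n_pos + pos] = 1.0
    return x

W = np.full((2, dim), 0.1)     # two output units, uniform start
# Symmetry-breaking bias (illustrative): each unit slightly prefers
# one object at position 0, making the winners deterministic here.
W[0, 0 * n_pos + 0] += 0.05
W[1, 1 * n_pos + 0] += 0.05

lr, delta = 0.1, 0.5           # learning rate, trace update rate
for epoch in range(20):
    for obj in (0, 1):
        trace = np.zeros(2)    # reset the trace at each sweep onset
        for pos in range(n_pos):        # object sweeps across positions
            x = stimulus(obj, pos)
            y = W @ x
            trace = (1 - delta) * trace + delta * y
            win = int(np.argmax(trace))  # competition acts on the trace
            # Trace-weighted Hebbian update: pull the winner's weights
            # toward the current input, so temporally adjacent stimuli
            # (all positions of one sweep) bind to the same unit.
            W[win] += lr * trace[win] * (x - W[win])

# After training, unit `obj` responds most strongly to object `obj`
# at every position: a position-invariant, object-specific response.
for obj in (0, 1):
    for pos in range(n_pos):
        assert int(np.argmax(W @ stimulus(obj, pos))) == obj
```

The trace is what carries selectivity across the sweep: at later positions the instantaneous responses of the two units are nearly equal, but the decaying trace of the earlier win keeps the same unit's weights being updated.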