Shigeru Tanaka
Journal Articles
Publisher: Journals Gateway
Neural Computation (2009) 21 (9): 2554–2580.
Published: 01 September 2009
Abstract
To date, Hebbian learning combined with some form of constraint on synaptic inputs has been shown to describe well the development of neural networks. Previous models revealed mathematically the importance of synaptic constraints for reproducing orientation selectivity in visual cortical neurons, but the biological mechanisms underlying such constraints remain unclear. In this study, we addressed this issue by formulating a synaptic constraint based on activity-dependent mechanisms of synaptic change. In particular, considering metabotropic glutamate receptor-mediated long-term depression, we derived a synaptic constraint that suppresses the number of inputs from individual presynaptic neurons. We performed computer simulations of the activity-dependent self-organization of geniculocortical inputs under this synaptic constraint and examined the formation of receptive fields (RFs) of model visual cortical neurons. When we changed the magnitude of the synaptic constraint, we found the emergence of distinct RF structures, such as concentric RFs, simple-cell-like RFs, and double-oriented RFs, as well as a gradual transition between spatiotemporally separable and inseparable RFs. Thus, the model based on a synaptic constraint derived from biological considerations accounts systematically, for the first time, for the repertoire of RF structures observed in the primary visual cortices of different species.
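The role of a synaptic constraint can be illustrated with a minimal sketch (not the paper's model; all function names and parameter values here are assumptions for illustration): plain Hebbian learning lets a neuron's weight norm grow without bound, while adding a normalization constraint keeps the total synaptic resource fixed.

```python
import numpy as np

rng = np.random.default_rng(0)

def train(constrained, n=10, steps=500, eta=0.05):
    """Hebbian learning for a single linear neuron on random inputs.
    If `constrained`, the weight vector is renormalized after each update,
    a generic stand-in for a synaptic-resource constraint."""
    w = rng.random(n) * 0.1
    for _ in range(steps):
        x = rng.normal(size=n)         # presynaptic activity pattern
        y = w @ x                      # postsynaptic response
        w = w + eta * y * x            # Hebbian update: delta-w proportional to y*x
        if constrained:
            w = w / np.linalg.norm(w)  # constraint: fixed total synaptic weight
    return w

w_free = train(constrained=False)      # weight norm diverges
w_con = train(constrained=True)        # weight norm pinned to 1
```

Without the normalization step the norm of `w_free` grows exponentially; with it, `w_con` stays on the unit sphere while still aligning with the input statistics.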
Neural Computation (2005) 17 (5): 1032–1058.
Published: 01 May 2005
Abstract
We studied a simple random recurrent inhibitory network. Despite its simplicity, the dynamics were rich: owing to the random recurrent connections among neurons, the activity patterns of neurons evolved in time without recurring. The sequence of activity patterns was triggered by an external signal, and its generation was stable against noise. Moreover, the same sequence was reproducible using a strong transient signal; that is, sequence generation could be reset. A time span following the trigger could therefore be represented by the sequence of activity patterns, suggesting that this model could work as an internal clock. The model generated different sequences of activity patterns for different external signals; thus, spatiotemporal information could be represented by this model. Moreover, it was possible to speed up and slow down the sequence generation.
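The reset and reproducibility properties can be sketched with a minimal deterministic binary network (an illustrative toy, not the paper's model; the connection density, biases, and update rule are assumptions): because the dynamics are deterministic, the same trigger pattern always replays the same sequence of activity patterns, while a different trigger launches a different one.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 50
# Sparse random inhibitory connections (every weight suppresses its target).
J = rng.random((N, N)) * (rng.random((N, N)) < 0.08)
np.fill_diagonal(J, 0.0)
bias = 0.5 + rng.random(N)             # heterogeneous excitability

def sequence(trigger, steps=40):
    """Deterministic binary dynamics: a unit fires unless the recurrent
    inhibition it receives exceeds its bias."""
    x = trigger.astype(float)
    states = []
    for _ in range(steps):
        x = (bias - J @ x > 0).astype(float)
        states.append(x.copy())
    return np.array(states)

trig_a = (rng.random(N) < 0.5).astype(float)
trig_b = (rng.random(N) < 0.5).astype(float)
s1 = sequence(trig_a)   # the same trigger reproduces
s2 = sequence(trig_a)   # exactly the same sequence ("reset")
s3 = sequence(trig_b)   # a different trigger yields a different sequence
```

Applying a strong transient signal corresponds here to overwriting the state with a trigger pattern, which restarts the sequence from its beginning.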
Neural Computation (1997) 9 (1): 77–97.
Published: 01 January 1997
Abstract
A neuroecological equation of the Lotka-Volterra type for the mean firing rate is derived from the conventional membrane dynamics of a neural network with lateral inhibition and self-inhibition. Neural selection mechanisms employed by this competitive network receiving external inputs are studied with analytic and numerical calculations. A remarkable finding is that the strength of lateral inhibition relative to that of self-inhibition is crucial in determining which of three qualitatively different types of steady-state behavior the network exhibits. Equal strengths of the two types of inhibitory connection lead the network to the well-known winner-take-all behavior. If, however, the lateral inhibition is weaker than the self-inhibition, a certain number of neurons are activated in the steady states, that is, there is in general more than one winner (the winners-share-all behavior). On the other hand, if the self-inhibition is weaker than the lateral inhibition, only one neuron is activated, but the winner is not necessarily the neuron receiving the largest input. It is suggested that this simple network model provides a mathematical basis for understanding neural selection mechanisms.
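Two of the regimes can be sketched with a minimal Euler integration of a Lotka-Volterra-type rate equation (an illustrative reconstruction, with the self-inhibition strength normalized to 1 and `k` the assumed relative strength of lateral inhibition; all parameter values are assumptions):

```python
import numpy as np

def lv_network(inputs, k, steps=100_000, dt=1e-3):
    """Euler-integrate du_i/dt = u_i * (1 + h_i - u_i - k * sum_{j != i} u_j):
    each neuron's rate u_i grows with its external input h_i and is suppressed
    by self-inhibition (strength 1) and lateral inhibition (strength k)."""
    h = np.asarray(inputs, dtype=float)
    u = np.full(len(h), 0.1)           # small positive initial rates
    for _ in range(steps):
        lateral = k * (u.sum() - u)    # inhibition from all other neurons
        u += dt * u * (1.0 + h - u - lateral)
        u = np.maximum(u, 0.0)         # firing rates stay nonnegative
    return u

h = [0.30, 0.20, 0.10, 0.05]
wta = lv_network(h, k=1.0)   # equal strengths: one winner, the largest input
wsa = lv_network(h, k=0.6)   # weaker lateral inhibition: several winners share
```

With `k = 1.0` only the neuron receiving the largest input remains active; with `k = 0.6` several neurons settle at positive rates, the winners-share-all regime described above. Values of `k > 1` would give the third regime, in which a single winner survives but need not be the neuron with the largest input.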