Toshio Aoyagi
1-7 of 7
Journal Articles
Publisher: Journals Gateway
Neural Computation (2012) 24 (10): 2700–2725.
Published: 01 October 2012
Abstract
We propose a new principle for replicating the receptive field properties of neurons in the primary visual cortex. We derive a learning rule for a feedforward network that maintains a low firing rate for the output neurons (resulting in temporal sparseness) and allows only a small subset of the neurons in the network to fire at any given time (resulting in population sparseness). Our learning rule also sets the firing rates of the output neurons at each time step to near-maximum or near-minimum levels, resulting in neuronal reliability. The learning rule is simple enough to be written in spatially and temporally local forms. After the learning stage is performed using input image patches of natural scenes, output neurons in the model network are found to exhibit simple-cell-like receptive field properties. When the outputs of these simple-cell-like neurons are fed as input to another model layer using the same learning rule, the second-layer output neurons after learning become less sensitive to the phase of gratings than the simple-cell-like input neurons. In particular, some of the second-layer output neurons become completely phase invariant, owing to the convergence of connections from first-layer neurons with similar orientation selectivity onto second-layer neurons in the model network. We examine the parameter dependencies of the receptive field properties of the model neurons after learning and discuss their biological implications. We also show that the localized learning rule is consistent with experimental results concerning neuronal plasticity and can replicate the receptive fields of simple and complex cells.
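A minimal sketch of a rule in this spirit (not the paper's exact update: the Hebbian term, the adaptive threshold `theta`, and all parameter values here are assumptions chosen for illustration) might look like:

```python
import numpy as np

def train_sparse_feedforward(patches, n_out=16, lr=0.01, target_rate=0.05, seed=0):
    """Toy local learning rule for a feedforward layer (illustrative only).

    Not the paper's exact rule: a Hebbian term is paired with an adaptive
    threshold that holds each output neuron near a low target firing rate
    (temporal sparseness). All names and parameter values are assumptions.
    """
    rng = np.random.default_rng(seed)
    W = rng.normal(scale=0.1, size=(n_out, patches.shape[1]))
    theta = np.zeros(n_out)                            # adaptive firing thresholds
    for x in patches:
        y = 1.0 / (1.0 + np.exp(-(W @ x - theta)))     # pushes rates toward 0 or 1
        W += lr * np.outer(y, x)                       # spatially local Hebbian term
        W /= np.linalg.norm(W, axis=1, keepdims=True)  # keep weight vectors bounded
        theta += lr * (y - target_rate)                # homeostasis -> low mean rate
    return W, theta
```

Each update uses only quantities available at one synapse and one time step, which is the sense in which such a rule is "spatially and temporally local".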
Neural Computation (2009) 21 (4): 1038–1067.
Published: 01 April 2009
Abstract
Recently, multineuronal recording has allowed us to observe patterned firings, synchronization, oscillation, and global state transitions in the recurrent networks of central nervous systems. We propose a learning algorithm based on the process of information maximization in a recurrent network, which we call recurrent infomax (RI). RI maximizes information retention and thereby minimizes information loss through time in a network. We find that feeding external inputs consisting of information obtained from photographs of natural scenes into an RI-based model of a recurrent network results in the appearance of Gabor-like selectivity quite similar to that found in simple cells of the primary visual cortex. We find that without external input, this network exhibits cell assembly–like and synfire chain–like spontaneous activity as well as critical neuronal avalanches. In addition, we find that RI embeds externally input temporal firing patterns in the network so that it spontaneously reproduces these patterns after learning. RI provides a simple framework to explain a wide range of phenomena observed in in vivo and in vitro neuronal networks, and it will provide a novel understanding of experimental results on multineuronal activity and plasticity from an information-theoretic point of view.
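The quantity RI maximizes, information retention through time, can at least be measured empirically for a tiny binary network. The helper below (an illustration, not the paper's estimator) treats each whole state vector as one symbol, which is feasible only for a handful of neurons:

```python
import math
from collections import Counter

def lagged_mutual_information(states):
    """Empirical I(X_t; X_{t+1}) in bits for a sequence of binary state vectors.

    This only *measures* the retention objective on observed data; it is not
    the learning algorithm itself. Each state vector is treated as a single
    discrete symbol, so it scales only to very small networks.
    """
    sym = [tuple(s) for s in states]
    pairs = list(zip(sym[:-1], sym[1:]))        # consecutive (X_t, X_{t+1})
    n = len(pairs)
    p_xy = Counter(pairs)                       # empirical joint distribution
    p_x = Counter(p[0] for p in pairs)          # marginal of X_t
    p_y = Counter(p[1] for p in pairs)          # marginal of X_{t+1}
    mi = 0.0
    for (x, y), c in p_xy.items():
        pxy = c / n
        mi += pxy * math.log2(pxy / ((p_x[x] / n) * (p_y[y] / n)))
    return mi
```

A perfectly predictable alternating sequence of one binary unit retains exactly one bit per step, while a constant sequence retains none.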
Neural Computation (2007) 19 (10): 2720–2738.
Published: 01 October 2007
Abstract
Although context-dependent spike synchronization among populations of neurons has been experimentally observed, its functional role remains controversial. In this modeling study, we demonstrate that in a network of spiking neurons organized according to spike-timing-dependent plasticity, an increase in the degree of synchrony of a uniform input can cause transitions between memorized activity patterns in the order presented during learning. Furthermore, context-dependent transitions from a single pattern to multiple patterns can be induced under appropriate learning conditions. These findings suggest one possible functional role of neuronal synchrony in controlling the flow of information by altering the dynamics of the network.
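The plasticity underlying the network's organization can be pictured with the generic exponential STDP window below; the functional form is standard in the literature, but the parameter values are illustrative assumptions, not the paper's:

```python
import math

def stdp_delta_w(dt, a_plus=0.01, a_minus=0.012, tau=20.0):
    """Generic exponential STDP window (parameter values are assumptions).

    dt = t_post - t_pre in ms. Pre-before-post firing (dt > 0) potentiates
    the synapse and the reverse order depresses it, so the order in which
    patterns are presented during learning is imprinted in the weights.
    """
    if dt >= 0:
        return a_plus * math.exp(-dt / tau)    # potentiation, decaying with lag
    return -a_minus * math.exp(dt / tau)       # depression for post-before-pre
```

Because the rule is asymmetric in time, repeatedly presenting patterns in a fixed order biases the weights toward that order, which is what allows synchrony to trigger transitions between memorized patterns in the sequence learned.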
Neural Computation (2007) 19 (9): 2515–2535.
Published: 01 September 2007
Abstract
The self-organizing map (SOM) is an unsupervised learning method as well as a type of nonlinear principal component analysis that forms a topologically ordered mapping from the high-dimensional data space to a low-dimensional representation space. It has recently found wide application in areas such as visualization, classification, and mining of various data. However, when the data sets to be processed are very large, considerable time is often required to train the map, which restricts the range of potential applications. One major cause of this slow ordering is that a kind of topological defect (e.g., a kink in one dimension or a twist in two dimensions) gets created in the map during training. Once such a defect appears, the ordered map cannot be obtained until the defect is eliminated, for which the number of iterations required is typically several times larger than in the absence of the defect. To overcome this weakness, we propose that an asymmetric neighborhood function be used in the SOM algorithm. Compared with the commonly used symmetric neighborhood function, we found that an asymmetric neighborhood function accelerates the ordering process of the SOM algorithm, though this asymmetry tends to distort the generated ordered map. We demonstrate that the distortion of the map can be suppressed by improving the asymmetric neighborhood function SOM algorithm. The number of learning steps required for perfect ordering in the case of the one-dimensional SOM is numerically shown to be reduced from O(N^3) to O(N^2) with an asymmetric neighborhood function, even when the improved algorithm is used to obtain the final map without distortion.
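One way to sketch such an update, assuming for illustration that the asymmetry simply widens the Gaussian neighborhood on one side of the winner (the paper's exact asymmetric function may differ):

```python
import numpy as np

def som_step(weights, x, lr=0.1, sigma=2.0, beta=2.0):
    """One update of a 1-D SOM with an asymmetric neighborhood (a sketch).

    beta > 1 stretches the neighborhood on one side of the winning unit;
    beta = 1 recovers the ordinary symmetric SOM. The specific form of the
    asymmetry here is an assumption, not the paper's exact function.
    """
    d = np.arange(len(weights)) - int(np.argmin(np.abs(weights - x)))
    width = np.where(d >= 0, sigma * beta, sigma)   # wider on one side only
    h = np.exp(-0.5 * (d / width) ** 2)             # neighborhood function
    return weights + lr * h * (x - weights)         # pull weights toward x
```

Units on the stretched side of the winner receive a stronger pull toward each input, which is the mechanism by which the asymmetry sweeps kinks out of the map.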
Neural Computation (2003) 15 (9): 2179–2198.
Published: 01 September 2003
Abstract
Fast-spiking (FS) interneurons express a specific type of delayed potassium channel (Kv3.1/3.2), which differs from the conventional Hodgkin-Huxley (HH) type potassium channel (Kv1.3) in several respects. In this study, we show dramatic effects of the Kv3.1/3.2 potassium channel on the synchronization of FS interneurons. We show analytically that two identical electrically coupled FS interneurons modeled with the Kv3.1/3.2 channel fire synchronously at arbitrary firing frequencies, unlike similarly coupled FS neurons modeled with the Kv1.3 channel, which show frequency-dependent synchronous and antisynchronous firing states. Introducing GABA_A receptor-mediated synaptic connections into an FS neuron pair tends to induce an antisynchronous firing state, even if the chemical synapses are bidirectional. Accordingly, an FS neuron pair connected simultaneously by electrical and chemical synapses supports both a synchronous and an antisynchronous firing state over a physiologically plausible range of the conductance ratio between electrical and chemical synapses. Moreover, we find that a large-scale network of FS interneurons connected by gap junctions and bidirectional GABAergic synapses shows similar bistability in the range of gamma frequencies (30–70 Hz).
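The reported coexistence of synchronous and antisynchronous states can be caricatured with a one-variable phase-difference flow; this is a toy model, not the paper's conductance-based equations, and the coupling strengths are illustrative:

```python
import math

def relax_phase_difference(phi0, g_el=0.5, g_syn=0.5, dt=0.01, steps=5000):
    """Caricature of the coupled pair's bistability (not the paper's model).

    Phase-difference flow: dphi/dt = -g_el*sin(phi) - g_syn*sin(2*phi).
    The electrical (gap-junction) term stabilizes synchrony (phi = 0); a
    strong enough synaptic term makes antiphase (phi = pi) stable as well,
    so both firing states coexist. All values here are assumptions.
    """
    phi = phi0
    for _ in range(steps):   # forward Euler integration of the flow
        phi += dt * (-g_el * math.sin(phi) - g_syn * math.sin(2.0 * phi))
    return phi % (2.0 * math.pi)
```

With these couplings, small initial phase offsets relax to synchrony while large ones lock in antiphase, mirroring the bistability described in the abstract.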
Neural Computation (2003) 15 (5): 1035–1061.
Published: 01 May 2003
Abstract
Much evidence indicates that synchronized gamma-frequency (20–70 Hz) oscillation plays a significant functional role in the neocortex and hippocampus. The chattering neuron is a possible neocortical pacemaker for the gamma oscillation. Based on our recent model of chattering neurons, here we study how gamma-frequency bursting is synchronized in a network of these neurons. Using a phase oscillator description, we first examine how two coupled chattering neurons are synchronized. The analysis reveals that an incremental change of the bursting mode, such as from singlet to doublet, always accompanies a rapid transition from antisynchronous to synchronous firing. The state transition occurs regardless of what changes the bursting mode. Within each bursting mode, the neuronal activity undergoes a gradual change from synchrony to antisynchrony. Since the sensitivity to Ca2+ and the maximum conductance of the Ca2+-dependent cationic current, as well as the intensity of input current, systematically control the bursting mode, these quantities may be crucial for the regulation of the coherence of local cortical activity. Numerical simulations demonstrate that modulations of the calcium sensitivity and the amplitude of the cationic current can induce rapid transitions between synchrony and asynchrony in a large-scale network of chattering neurons. The rapid synchronization of chattering neurons is shown to synchronize the activities of regular spiking pyramidal neurons at gamma frequencies, as may be necessary for selective attention or binding processing in object recognition.
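Classifying the bursting mode (singlet, doublet, ...) from recorded spike times can be sketched as follows; the gap threshold separating bursts is an assumed value, not one from the paper:

```python
from collections import Counter

def bursting_mode(spike_times, isi_gap=20.0):
    """Spikes-per-burst mode (1 = singlet, 2 = doublet, ...) from spike times.

    A small helper in the spirit of the analysis: a new burst starts whenever
    the interspike interval exceeds isi_gap (ms), and the most common burst
    size is reported. The threshold value is an assumption.
    """
    bursts = [[spike_times[0]]]
    for prev, t in zip(spike_times, spike_times[1:]):
        if t - prev > isi_gap:
            bursts.append([t])      # long gap: a new burst begins
        else:
            bursts[-1].append(t)    # short gap: same burst continues
    return Counter(len(b) for b in bursts).most_common(1)[0][0]
```

Tracking this mode while varying the calcium sensitivity or input current would expose the singlet-to-doublet transitions that the analysis links to the synchrony switches.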
Neural Computation (1998) 10 (6): 1527–1546.
Published: 15 August 1998
Abstract
We present an analytical approach that allows us to treat the long-time behavior of the recalling process in an oscillator neural network. It is well known that in coupled oscillatory neuronal systems, under suitable conditions, the original dynamics can be reduced to a simpler phase dynamics. In this description, the phases of the oscillators can be regarded as the timings of the neuronal spikes. To attempt an analytical treatment of the recalling dynamics of such a system, we study a simplified model in which we discretize time and assume a synchronous updating rule. The theoretical results show that the retrieval dynamics is described by recursion equations for some macroscopic parameters, such as the overlap with the retrieval pattern. We then treat the noise components in the local field, which arise from the learning of the unretrieved patterns, as gaussian variables. However, we take account of the temporal correlation between these noise components at different times. In particular, we find that this correlation is essential for correctly predicting the behavior of the retrieval process in the case of autoassociative memory. From the derived equations, the maximal storage capacity and the basin of attraction are calculated and graphically displayed. We also consider the more general case in which the network retrieves an ordered sequence of phase patterns. In both cases, the basin of attraction remains sufficiently wide to recall the memorized pattern from a noisy one, even near saturation. The validity of these theoretical results is supported by numerical simulations. We believe that this model serves as a convenient starting point for the theoretical study of retrieval dynamics in general oscillatory systems.
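The class of model analyzed can be sketched with the standard Hebbian phase couplings and synchronous discrete-time updating; the overlap below is the macroscopic order parameter the abstract mentions, though the details here are generic rather than the paper's exact equations:

```python
import numpy as np

def overlap(phases, pattern):
    """Macroscopic overlap m = (1/N) * sum_j exp(i*(phi_j - xi_j)).

    |m| = 1 means perfect recall of the stored phase pattern, up to a
    global phase shift common to all oscillators.
    """
    return np.mean(np.exp(1j * (phases - pattern)))

def recall_step(phases, patterns):
    """One synchronous update of an oscillator associative memory (sketch).

    Couplings follow the standard Hebbian phase rule
    J_jk = (1/N) * sum_mu exp(i*(xi^mu_j - xi^mu_k)); each oscillator's new
    phase is the argument of its complex local field.
    """
    N = len(phases)
    J = sum(np.outer(np.exp(1j * p), np.exp(-1j * p)) for p in patterns) / N
    return np.angle(J @ np.exp(1j * phases))   # align with the local field
```

Iterating `recall_step` from a noisy phase pattern and watching `|overlap|` approach 1 reproduces, in miniature, the retrieval process whose long-time behavior the recursion equations describe.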