Tomoki Fukai
Neural Computation (2008) 20 (1): 227–251.
Published: 01 January 2008
Abstract
The ability to make a correct choice of behavior from various options is crucial for animals' survival. The neural basis of behavioral choice has been attracting growing attention in research on biological and artificial neural systems. Alternative choice tasks with variable-ratio (VR) and variable-interval (VI) schedules of reinforcement have often been employed in studying decision making by animals and humans. In the VR schedule task, alternative choices are reinforced with different probabilities, and subjects learn to select the behavioral response rewarded more frequently. In the VI schedule task, alternative choices are reinforced at different average intervals independent of the choice frequencies, and the choice behavior follows the so-called matching law. The two policies appear robustly in subjects' choice behavior, but the underlying neural mechanisms remain unknown. Here, we show that these seemingly different policies can emerge from a common computational algorithm known as actor-critic learning. We present experimentally testable variations of the VI schedule in which the matching behavior gives only a suboptimal solution to decision making, and we show that the actor-critic system exhibits the matching behavior in the steady state of learning even when that behavior is suboptimal. However, the matching behavior is found to earn approximately the same reward as the optimal policy in many practical situations.
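To make the actor-critic account concrete, here is a minimal sketch of actor-critic learning on a two-alternative task, assuming a softmax actor and a critic that tracks earned reward through a prediction error. The reward schedule shown is a simple variable-ratio (VR) example with illustrative probabilities and learning rates, not the schedules analyzed in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative variable-ratio (VR) schedule: each alternative is rewarded
# with a fixed probability on every trial (values assumed, not from the paper).
reward_prob = [0.4, 0.1]

alpha_actor, alpha_critic = 0.05, 0.1   # learning rates (assumed)
preference = np.zeros(2)                # actor's action preferences
value = 0.0                             # critic's running estimate of reward

choices = np.zeros(2)
for trial in range(20000):
    # Softmax (Gibbs) action selection by the actor.
    p = np.exp(preference - preference.max())
    p /= p.sum()
    a = rng.choice(2, p=p)
    r = float(rng.random() < reward_prob[a])

    # The critic's prediction error drives updates of both modules.
    delta = r - value
    value += alpha_critic * delta
    preference[a] += alpha_actor * delta * (1.0 - p[a])
    choices[a] += 1

print("choice fractions:", choices / choices.sum())
```

On this VR schedule the sketch drifts toward near-exclusive choice of the richer alternative; replacing the reward line with a VI-style schedule, in which an alternative holds its baited reward until it is next chosen, is the setting in which matching-like allocation is expected to appear.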
Neural Computation (2003) 15 (9): 2179–2198.
Published: 01 September 2003
Abstract
Fast-spiking (FS) interneurons express a specific type of delayed-rectifier potassium channel (Kv3.1/3.2), which differs in several respects from the conventional Hodgkin-Huxley (HH) type potassium channel (Kv1.3). In this study, we show dramatic effects of the Kv3.1/3.2 potassium channel on the synchronization of FS interneurons. We show analytically that two identical, electrically coupled FS interneurons modeled with the Kv3.1/3.2 channel fire synchronously at arbitrary firing frequencies, unlike similarly coupled FS neurons modeled with the Kv1.3 channel, which show frequency-dependent synchronous and antisynchronous firing states. Introducing GABA-A receptor-mediated synaptic connections into an FS neuron pair tends to induce an antisynchronous firing state, even if the chemical synapses are bidirectional. Accordingly, an FS neuron pair connected simultaneously by electrical and chemical synapses supports both a synchronous and an antisynchronous firing state within a physiologically plausible range of the conductance ratio between electrical and chemical synapses. Moreover, we find that a large-scale network of FS interneurons connected by gap junctions and bidirectional GABAergic synapses shows similar bistability in the range of gamma frequencies (30–70 Hz).
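The interplay between electrical and chemical coupling can be illustrated with a toy model. The sketch below couples two leaky integrate-and-fire units by a gap-junction current and mutual inhibitory synapses; it is not the conductance-based Kv3.1/3.2 model of the paper, and all parameter values are assumptions chosen only to produce gamma-range firing.

```python
import numpy as np

def simulate(g_gap, g_inh, T=1000.0, dt=0.05):
    """Two mutually coupled toy neurons; returns each unit's spike times (ms).
    Leaky integrate-and-fire units, not the Kv3.1/3.2 model of the paper."""
    tau, v_th, v_reset, drive = 10.0, 1.0, 0.0, 1.2   # illustrative parameters
    tau_syn, e_inh = 5.0, -0.5                        # GABAergic synapse (assumed)
    v = np.array([0.0, 0.3])                          # start out of phase
    s = np.zeros(2)                                   # synaptic gating variables
    spikes = [[], []]
    for step in range(int(T / dt)):
        i_gap = g_gap * (v[::-1] - v)                 # electrical (gap-junction) current
        i_inh = g_inh * s[::-1] * (e_inh - v)         # inhibition from the partner
        v += dt * ((drive - v) / tau + i_gap + i_inh)
        s -= dt * s / tau_syn
        for k in np.where(v >= v_th)[0]:              # threshold crossing = spike
            spikes[k].append(step * dt)
            s[k] += 1.0
            v[k] = v_reset
    return spikes

# Compare the phase relation for gap junctions alone, inhibition alone, and both.
for g_gap, g_inh in [(0.05, 0.0), (0.0, 0.1), (0.05, 0.1)]:
    sp = simulate(g_gap, g_inh)
    print(g_gap, g_inh,
          [round(t, 1) for t in sp[0][-3:]],
          [round(t, 1) for t in sp[1][-3:]])
```

Comparing the last few spike times across the three conductance settings gives a rough picture of how the balance between electrical and chemical coupling shifts the firing relation between the two units.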
Neural Computation (2003) 15 (5): 1035–1061.
Published: 01 May 2003
Abstract
Much evidence indicates that synchronized gamma-frequency (20–70 Hz) oscillation plays a significant functional role in the neocortex and hippocampus. The chattering neuron is a possible neocortical pacemaker for the gamma oscillation. Based on our recent model of chattering neurons, here we study how gamma-frequency bursting is synchronized in a network of these neurons. Using a phase oscillator description, we first examine how two coupled chattering neurons are synchronized. The analysis reveals that an incremental change of the bursting mode, such as from singlet to doublet, is always accompanied by a rapid transition from antisynchronous to synchronous firing. The state transition occurs regardless of what changes the bursting mode. Within each bursting mode, the neuronal activity undergoes a gradual change from synchrony to antisynchrony. Since the Ca2+ sensitivity and the maximum conductance of the Ca2+-dependent cationic current, as well as the intensity of the input current, systematically control the bursting mode, these quantities may be crucial for regulating the coherence of local cortical activity. Numerical simulations demonstrate that modulations of the calcium sensitivity and the amplitude of the cationic current can induce rapid transitions between synchrony and asynchrony in a large-scale network of chattering neurons. The rapid synchronization of chattering neurons is shown to synchronize the activities of regular-spiking pyramidal neurons at gamma frequencies, as may be necessary for selective attention or binding processes in object recognition.
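The phase-oscillator reduction used here can be sketched with a pair of identical oscillators, d(phi_i)/dt = omega + Gamma(phi_j - phi_i). The true interaction function Gamma must be derived from the full chattering-neuron model; the two interaction functions below are only illustrative stand-ins whose sign flips the stable phase relation, mimicking a switch between synchronous and antisynchronous firing across bursting modes.

```python
import numpy as np

def steady_phase_lag(gamma, steps=200000, dt=1e-5):
    """Integrate two coupled phase oscillators and return their final phase lag."""
    omega = 2 * np.pi * 40.0          # ~40 Hz, in the gamma band (illustrative)
    phi = np.array([0.0, 2.0])        # arbitrary initial phases
    for _ in range(steps):
        phi = phi + dt * (omega + gamma(phi[::-1] - phi))
    return (phi[1] - phi[0]) % (2 * np.pi)

# An interaction function with a stable in-phase state ...
print("lag A:", round(steady_phase_lag(lambda d: 5.0 * np.sin(d)), 3))    # ~0  (synchrony)
# ... and one whose stable state is antiphase.
print("lag B:", round(steady_phase_lag(lambda d: -5.0 * np.sin(d)), 3))   # ~pi (antisynchrony)
```

In the paper the shape of Gamma changes with the bursting mode; here the sign flip is simply a stand-in for that change, showing how a small modification of the interaction function relocates the stable phase lag from 0 to pi.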
Neural Computation (2003) 15 (3): 597–620.
Published: 01 March 2003
Abstract
Synapses in various neural preparations exhibit spike-timing-dependent plasticity (STDP) with a variety of learning window functions. The window function determines the magnitude and the polarity of synaptic change according to the time difference between pre- and postsynaptic spikes. Numerical experiments revealed that STDP learning with a single-exponential window function resulted in a bimodal distribution of synaptic conductances as a consequence of competition between synapses. A slightly modified window function, however, resulted in a unimodal rather than a bimodal distribution. Since various window functions have been observed in neural preparations, we develop a rigorous mathematical method to calculate the conductance distribution for any given window function. Our method is based on the Fokker-Planck equation to determine the conductance distribution and on the Ornstein-Uhlenbeck process to characterize the membrane potential fluctuations. After demonstrating that our method reproduces the known quantitative results of STDP learning, we apply it to the type of STDP learning found recently in the CA1 region of the rat hippocampus. We find that this learning can result in nearly optimized competition between synapses. In contrast, the type of STDP learning found in the cerebellum-like structure of electric fish can result in all-or-none synapses: either all the synaptic conductances are maximized, or none of them becomes significantly large. Our method also determines the window function that optimizes synaptic competition.
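For intuition about why a single-exponential window drives conductances toward a bimodal distribution, the sketch below directly simulates additive STDP on Poisson inputs to a leaky integrate-and-fire neuron, in the style of the earlier numerical experiments the abstract refers to. It is a brute-force simulation, not the Fokker-Planck / Ornstein-Uhlenbeck analysis developed in the paper, and every parameter value is illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

# Additive STDP with single-exponential windows; parameters are illustrative.
N, dt, steps = 200, 1.0, 200000         # synapses, time step (ms), number of steps
rate = 0.01                             # presynaptic Poisson rate per ms (10 Hz)
tau_m, v_th = 20.0, 0.5                 # leaky integrate-and-fire neuron
tau_stdp, A_plus, A_minus = 20.0, 0.005, 0.00525
g = rng.uniform(0.2, 0.8, N)            # initial conductances, bounded in [0, 1]

v, x_post = 0.0, 0.0
x_pre = np.zeros(N)                     # exponentially decaying spike traces
decay = np.exp(-dt / tau_stdp)
for _ in range(steps):
    pre = rng.random(N) < rate * dt
    v += dt * (-v / tau_m) + 0.02 * (g @ pre)
    x_pre = x_pre * decay + pre
    x_post *= decay
    g[pre] -= A_minus * x_post          # depression: pre spike after a post spike
    if v >= v_th:                       # postsynaptic spike
        v = 0.0
        x_post += 1.0
        g += A_plus * x_pre             # potentiation: pre spikes before this post spike
    np.clip(g, 0.0, 1.0, out=g)

print(np.histogram(g, bins=10, range=(0, 1))[0])   # conductances tend toward 0 or 1
```

Run long enough, the exponential traces implement the single-exponential window, and the bounded additive updates push individual conductances toward the limits of the allowed range, which is the bimodal outcome the analytical method is designed to predict for arbitrary window shapes.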
Neural Computation (1997) 9 (1): 77–97.
Published: 01 January 1997
Abstract
A neuroecological equation of the Lotka-Volterra type for the mean firing rate is derived from the conventional membrane dynamics of a neural network with lateral inhibition and self-inhibition. The neural selection mechanisms employed by this competitive neural network receiving external inputs are studied with analytic and numerical calculations. A remarkable finding is that the strength of lateral inhibition relative to that of self-inhibition determines which of three qualitatively different types of steady-state behavior the network exhibits. Equal strength of the two types of inhibitory connections leads the network to the well-known winner-take-all behavior. If, however, the lateral inhibition is weaker than the self-inhibition, several neurons remain active in the steady state; that is, the number of winners is in general more than one (winners-share-all behavior). On the other hand, if the self-inhibition is weaker than the lateral inhibition, only one neuron is activated, but the winner is not necessarily the neuron receiving the largest input. It is suggested that our simple network model provides a mathematical basis for understanding neural selection mechanisms.
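The three regimes can be reproduced by direct integration of a Lotka-Volterra rate equation of the form dx_i/dt = x_i (I_i - a*x_i - b*sum_{j != i} x_j), with a the self-inhibition strength and b the lateral-inhibition strength; this particular parameterization and the input values are assumptions made for illustration, not taken from the paper.

```python
import numpy as np

def lotka_volterra(inputs, a_self, b_lat, T=300.0, dt=0.01):
    """Integrate dx_i/dt = x_i * (I_i - a_self*x_i - b_lat*sum_{j!=i} x_j)."""
    x = np.full(len(inputs), 0.1)          # small positive initial rates
    for _ in range(int(T / dt)):
        lateral = b_lat * (x.sum() - x)    # inhibition from all other units
        x += dt * x * (inputs - a_self * x - lateral)
        x = np.maximum(x, 1e-9)            # firing rates stay nonnegative
    return np.round(x, 3)

I = np.array([1.0, 0.9, 0.8, 0.5])
print("equal inhibition (winner-take-all):  ", lotka_volterra(I, 1.0, 1.0))
print("weaker lateral  (winners-share-all): ", lotka_volterra(I, 1.0, 0.5))
print("weaker self     (single winner):     ", lotka_volterra(I, 0.5, 1.0))
```

In the third regime the identity of the winner depends on the initial state: starting all units from the same value, as here, lets the largest input win, but biased initial conditions can hand the competition to a different unit, consistent with the abstract's remark that the winner need not be the neuron receiving the largest input.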
Neural Computation (1995) 7 (3): 529–548.
Published: 01 May 1995
Abstract
It is shown that approximate fixed-point attractors, rather than synchronized oscillations, can be employed by a wide class of oscillator neural networks to achieve associative memory recall. This computational ability of oscillator neural networks is ensured by the fact that the reduced dynamic equations for the phase variables in general involve two terms, responsible respectively for the emergence of synchronization and for the cessation of oscillations. Thus, cessation occurs during memory retrieval if the corresponding term dominates the dynamic equations. When an extensive number of memory patterns are embedded, the bottomless energy function of such a system makes the retrieval states quasi-fixed points, which allow a small portion of the oscillators to continue rotating. An approximate theory based on the self-consistent signal-to-noise analysis enables one to study the equilibrium properties of the neural network of phase variables with these quasi-fixed-point attractors. As far as memory retrieval by the quasi-fixed points is concerned, the equilibrium properties, including the storage capacity, of oscillator neural networks prove to be similar to those of Hopfield-type neural networks.
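The flavor of retrieval in a phase-variable network can be conveyed with a small sketch: phase oscillators coupled through Hebbian (Hopfield-like) weights, with binary patterns stored as 0/pi phase configurations. This is a toy Kuramoto-style recall demonstration, not the reduced model or the self-consistent signal-to-noise analysis of the paper, and the network size, pattern load, and noise level are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(2)

N, P = 200, 5                                       # oscillators, stored patterns
xi = rng.choice([-1.0, 1.0], size=(P, N))           # random binary patterns
J = (xi.T @ xi) / N                                 # Hebbian coupling matrix
np.fill_diagonal(J, 0.0)

# Cue pattern 0: correct 0/pi phases plus noise, with 10% of the units flipped.
phi = np.where(xi[0] > 0, 0.0, np.pi) + 0.2 * rng.standard_normal(N)
flipped = rng.choice(N, N // 10, replace=False)
phi[flipped] += np.pi

dt = 0.05
for _ in range(2000):
    # d(phi_i)/dt = sum_j J_ij * sin(phi_j - phi_i)
    phi += dt * np.sum(J * np.sin(phi[None, :] - phi[:, None]), axis=1)

# Rotation-invariant overlap with the cued pattern; values near 1 indicate recall.
overlap = np.abs(np.mean(xi[0] * np.exp(1j * phi)))
print("overlap with stored pattern 0:", round(overlap, 3))
```

With this light pattern load the corrupted cue relaxes onto a phase configuration close to the stored pattern; at heavier loads, imperfect retrieval states with a fraction of drifting oscillators are the analogue of the quasi-fixed points described in the abstract.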