1-18 of 18
Kazuyuki Aihara
Journal Articles
Publisher: Journals Gateway
Neural Computation (2023) 35 (8): 1430–1462.
Published: 12 July 2023
Figures: 10
Abstract
We examine the efficiency of information processing in a balanced excitatory and inhibitory (E-I) network during the developmental critical period, when network plasticity is heightened. A multimodule network composed of E-I neurons was defined, and its dynamics were examined by regulating the balance between their activities. When adjusting E-I activity, both transitive chaotic synchronization with a high Lyapunov dimension and conventional chaos with a low Lyapunov dimension were found. In between, the edge of high-dimensional chaos was observed. To quantify the efficiency of information processing, we applied a short-term memory task in reservoir computing to the dynamics of our network. We found that memory capacity was maximized when optimal E-I balance was realized, underscoring both its vital role and vulnerability during critical periods of brain development.
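The short-term memory task used to score the network's dynamics can be sketched with a generic echo state reservoir. The plain random tanh network, spectral radius, and all sizes below are illustrative assumptions of this sketch, not the paper's E-I module network:

```python
import numpy as np

rng = np.random.default_rng(0)
n_units, n_steps, max_delay, washout = 100, 2000, 20, 100

# Random reservoir, rescaled so the spectral radius is below 1
# (echo state property); 0.9 is an illustrative choice.
W = rng.normal(0.0, 1.0, (n_units, n_units))
W *= 0.9 / np.abs(np.linalg.eigvals(W)).max()
w_in = rng.uniform(-1.0, 1.0, n_units)

u = rng.uniform(-1.0, 1.0, n_steps)          # scalar random input stream
x = np.zeros(n_units)
states = np.empty((n_steps, n_units))
for t in range(n_steps):
    x = np.tanh(W @ x + w_in * u[t])         # leakless tanh reservoir update
    states[t] = x

# Memory capacity: sum over delays k of the squared correlation between
# the delayed input u(t-k) and its best linear readout from the state.
mc = 0.0
for k in range(1, max_delay + 1):
    X = states[washout:]
    y = u[washout - k:n_steps - k]
    w_out = np.linalg.lstsq(X, y, rcond=None)[0]
    mc += np.corrcoef(X @ w_out, y)[0, 1] ** 2
```

The abstract's finding is that this capacity is maximized when the excitatory-inhibitory balance is tuned to the edge of high-dimensional chaos.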
Neural Computation (2014) 26 (11): 2395–2418.
Published: 01 November 2014
Figures: 45
Abstract
We study dynamical mechanisms responsible for changes of the firing rate during four different bifurcation transitions in the two-dimensional Hindmarsh-Rose (2DHR) neuron model: the saddle node on an invariant circle (SNIC) bifurcation to the supercritical Andronov-Hopf (AH) one, the SNIC bifurcation to the saddle-separatrix loop (SSL) one, the AH bifurcation to the subcritical AH (SAH) one, and the SSL bifurcation to the AH one. For this purpose, we study slopes of the firing rate curve with respect to not only an external input current but also temperature that can be interpreted as a timescale in the 2DHR neuron model. These slopes are mathematically formulated with phase response curves (PRCs), expanding the firing rate with perturbations of the temperature and external input current on the one-dimensional space of the phase in the 2DHR oscillator. By analyzing the two different slopes of the firing rate curve with respect to the temperature and external input current, we find that during changes of the firing rate in all of the bifurcation transitions, the calculated slope with respect to the temperature also changes. This is largely dependent on changes in the PRC size that is also related to the slope with respect to the external input current. Furthermore, we find phase transition–like switches of the firing rate with a possible increase of the temperature during the SSL-to-AH bifurcation transition.
Neural Computation (2013) 25 (11): 3131–3182.
Published: 01 December 2013
Figures: 71
Abstract
We study a realistic model of a cortical column taking into account short-term plasticity between pyramidal cells and interneurons. The simulation of leaky integrate-and-fire neurons shows that low-frequency oscillations emerge spontaneously as a result of intrinsic network properties. These oscillations are composed of prolonged phases of high and low activity reminiscent of cortical up and down states, respectively. We simplify the description of the network activity by using a mean field approximation and reduce the system to two slow variables exhibiting some relaxation oscillations. We identify two types of slow oscillations. When the combination of dynamic synapses between pyramidal cells and those between interneurons accounts for the generation of these slow oscillations, the end of the up phase is characterized by asynchronous fluctuations of the membrane potentials. When the slow oscillations are mainly driven by the dynamic synapses between interneurons, the network exhibits fluctuations of membrane potentials, which are more synchronous at the end than at the beginning of the up phase. Additionally, finite size effect and slow synaptic currents can modify the irregularity and frequency, respectively, of these oscillations. Finally, we consider possible roles of a slow oscillatory input modeling long-range interactions in the brain. Spontaneous slow oscillations of local networks are modulated by the oscillatory input, which induces, notably, synchronization, subharmonic synchronization, and chaotic relaxation oscillations in the mean field approximation. In the case of forced oscillations, the slow population-averaged activity of leaky integrate-and-fire neurons can have both deterministic and stochastic temporal features. We discuss the possibility that long-range connectivity controls the emergence of slow sequential patterns in local populations due to the tendency of a cortical column to oscillate at low frequency.
Neural Computation (2012) 24 (4): 1020–1046.
Published: 01 April 2012
Figures: 11
Abstract
The dependence of the dynamics of pulse-coupled neural networks on random rewiring of excitatory and inhibitory connections is examined. When both excitatory and inhibitory connections are rewired, periodic synchronization emerges with a Hopf-like bifurcation and a subsequent period-doubling bifurcation; chaotic synchronization is also observed. When only excitatory connections are rewired, periodic synchronization emerges with a saddle node–like bifurcation, and chaotic synchronization is again observed. This result suggests that randomness does not necessarily contaminate a system; it can even introduce rich dynamics, such as chaos.
Neural Computation (2010) 22 (5): 1383–1398.
Published: 01 May 2010
Figures: 6
Abstract
The roles of inhibitory neurons in synchronous firing are examined in a network of excitatory and inhibitory neurons with Watts and Strogatz's rewiring. By examining the persistence of the synchronous firing that exists in the random network, it was found that there is a probability of rewiring at which a transition between the synchronous state and the asynchronous state takes place, and the dynamics of the inhibitory neurons play an important role in determining this probability.
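Watts and Strogatz's rewiring, the topology manipulation used in this study, can be sketched as follows. The network size and degree are illustrative, and this sketch builds topology only, whereas the paper applies the rewiring to a spiking network:

```python
import numpy as np

def watts_strogatz(n, k, p, rng):
    """Ring of n nodes, each wired to its k nearest neighbours on one side;
    every edge is redirected to a random new target with probability p."""
    edges = set()
    for i in range(n):
        for j in range(1, k + 1):
            edges.add((i, (i + j) % n))      # regular ring lattice
    rewired = set()
    for (a, b) in edges:
        if rng.random() < p:
            # redirect the edge, avoiding self-loops and duplicate edges
            candidates = [c for c in range(n)
                          if c != a and (a, c) not in rewired]
            b = int(rng.choice(candidates))
        rewired.add((a, b))
    return rewired

rng = np.random.default_rng(1)
g_regular = watts_strogatz(20, 2, 0.0, rng)   # p = 0: untouched ring
g_random = watts_strogatz(20, 2, 1.0, rng)    # p = 1: fully rewired
```

Sweeping p between these two extremes interpolates between regular and random connectivity, which is where the abstract locates the synchronous-to-asynchronous transition.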
Neural Computation (2008) 20 (8): 1951–1972.
Published: 01 August 2008
Abstract
The synchronous firing of neurons in a pulse-coupled neural network composed of excitatory and inhibitory neurons is analyzed. The neurons are connected by both chemical synapses and electrical synapses among the inhibitory neurons. When electrical synapses are introduced, periodically synchronized firing as well as chaotically synchronized firing is widely observed. Moreover, we find stochastic synchrony where the ensemble-averaged dynamics shows synchronization in the network but each neuron has a low firing rate and the firing of the neurons seems to be stochastic. Stochastic synchrony of chaos corresponding to a chaotic attractor is also found.
Neural Computation (2007) 19 (12): 3335–3355.
Published: 01 December 2007
Abstract
We study a computational model of audiovisual integration by setting a Bayesian observer that localizes visual and auditory stimuli without presuming the binding of audiovisual information. The observer adopts the maximum a posteriori approach to estimate the physically delivered position or timing of presented stimuli, simultaneously judging whether they are from the same source or not. Several experimental results on the perception of spatial unity and the ventriloquism effect can be explained comprehensively if the subjects in the experiments are regarded as Bayesian observers who try to accurately locate the stimulus. Moreover, by adaptively changing the inner representation of the Bayesian observer in terms of experience, we show that our model reproduces perceived spatial frame shifts due to the audiovisual adaptation known as the ventriloquism aftereffect.
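A minimal observer of this kind can be sketched with the standard Gaussian causal-inference formulas. The specific likelihoods, the zero-mean Gaussian position prior, and all noise parameters below are assumptions of this sketch, not values from the article:

```python
import numpy as np

def posterior_common(x_v, x_a, s_v, s_a, s_p, p_common):
    """Posterior probability that visual and auditory cues share one source.
    Gaussian cue noise (s_v, s_a), Gaussian zero-mean position prior (s_p)."""
    vv, va, vp = s_v ** 2, s_a ** 2, s_p ** 2
    # Likelihood of the cue pair given a single source, source integrated out.
    d1 = vv * va + vp * (vv + va)
    l1 = (np.exp(-0.5 * ((x_v - x_a) ** 2 * vp + x_v ** 2 * va + x_a ** 2 * vv) / d1)
          / np.sqrt(d1))
    # Likelihood given two independent sources.
    l2 = (np.exp(-0.5 * (x_v ** 2 / (vv + vp) + x_a ** 2 / (va + vp)))
          / np.sqrt((vv + vp) * (va + vp)))
    return l1 * p_common / (l1 * p_common + l2 * (1.0 - p_common))

def map_localize(x_v, x_a, s_v, s_a, s_p, p_common):
    """MAP decision: fuse the cues if 'same source' wins, else keep them apart."""
    if posterior_common(x_v, x_a, s_v, s_a, s_p, p_common) > 0.5:
        w = np.array([1 / s_v ** 2, 1 / s_a ** 2, 1 / s_p ** 2])
        s_hat = (w[0] * x_v + w[1] * x_a) / w.sum()   # prior pulls toward 0
        return "common", s_hat, s_hat
    return "separate", x_v, x_a
```

With nearby cues the "same source" hypothesis wins and the estimates fuse toward the more reliable cue (the ventriloquism effect); with distant cues the stimuli are localized separately.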
Neural Computation (2007) 19 (9): 2468–2491.
Published: 01 September 2007
Abstract
Repetitions of precise spike patterns observed both in vivo and in vitro have been reported for more than a decade. Studies on the spike volley (a pulse packet) propagating through a homogeneous feedforward network have demonstrated its capability of generating spike patterns with millisecond fidelity. This model is called the synfire chain and suggests a possible mechanism for generating repeated spike patterns (RSPs). The propagation speed of the pulse packet determines the temporal property of RSPs. However, the relationship between propagation speed and network structure is not well understood. We studied a feedforward network with Mexican-hat connectivity by using the leaky integrate-and-fire neuron model and analyzed the network dynamics with the Fokker-Planck equation. We examined the effect of the spatial pattern of pulse packets on RSPs in the network with multistability. Pulse packets can take spatially uniform or localized shapes in a multistable regime, and they propagate with different speeds. These distinct pulse packets generate RSPs with different timescales, but the order of spikes and the ratios between interspike intervals are preserved. This result indicates that the RSPs can be transformed into the same template pattern through the expanding or contracting operation of the timescale.
Neural Computation (2007) 19 (7): 1854–1870.
Published: 01 July 2007
Abstract
With spatially organized neural networks, we examined how bias and noise inputs with spatial structure result in different network states such as bumps, localized oscillations, global oscillations, and localized synchronous firing that may be relevant to, for example, orientation selectivity. To this end, we used networks of McCulloch-Pitts neurons, which allow theoretical predictions, and verified the obtained results with numerical simulations. Spatial inputs, no matter whether they are bias inputs or shared noise inputs, affect only firing activities with resonant spatial frequency. The component of noise that is independent for different neurons increases the linearity of the neural system and gives rise to less spatial mode mixing and less bistability of population activities.
Neural Computation (2007) 19 (7): 1798–1853.
Published: 01 July 2007
Abstract
Inspired by recent studies regarding dendritic computation, we constructed a recurrent neural network model incorporating dendritic lateral inhibition. Our model consists of an input layer and a neuron layer that includes excitatory cells and an inhibitory cell; this inhibitory cell is activated by the pooled activities of all the excitatory cells, and it in turn inhibits each dendritic branch of the excitatory cells that receive excitations from the input layer. The dendritic nonlinear operation, consisting of branch-specifically rectified inhibition and saturation, is described by imposing nonlinear transfer functions before summation over the branches. In this model with sufficiently strong recurrent excitation, on transiently presenting a stimulus that has a high correlation with the feedforward connections of one of the excitatory cells, the corresponding cell becomes highly active, and the activity is sustained after the stimulus is turned off, whereas all the other excitatory cells continue to have low activities. But on transiently presenting a stimulus that does not have high correlations with the feedforward connections of any of the excitatory cells, all the excitatory cells continue to have low activities. Interestingly, such a stimulus-selective sustained response is preserved for a wide range of stimulus intensity. We derive an analytical formulation of the model in the limit where individual excitatory cells have an infinite number of dendritic branches and prove the existence of an equilibrium point corresponding to the balanced low-level activity state observed in the simulations, whose stability depends solely on the signal-to-noise ratio of the stimulus. We propose this model as a model of stimulus selectivity equipped simultaneously with self-sustainability and intensity invariance, which has been difficult to achieve in conventional competitive neural networks with a similar degree of complexity in their network architecture. We discuss the biological relevance of the model in a general framework of computational neuroscience.
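The branch-wise operation described above (inhibition subtracted on each branch, then rectification and saturation before somatic summation) can be sketched as follows; the tanh saturation and all sizes are illustrative assumptions:

```python
import numpy as np

def dendritic_output(x, w_branches, inh, gain=1.0):
    """Excitatory cell with dendritic lateral inhibition.
    x: input vector; w_branches: (n_branches, n_inputs) feedforward weights;
    inh: inhibition delivered to every dendritic branch."""
    drive = w_branches @ x - inh            # branch-specific excitation minus inhibition
    rectified = np.maximum(drive, 0.0)      # branch-wise rectification
    saturated = np.tanh(gain * rectified)   # branch-wise saturation
    return saturated.sum()                  # somatic summation over branches
```

Because the inhibition acts before the branch nonlinearity, stronger pooled inhibition clips each branch separately rather than subtracting a single term at the soma, which is the feature the model exploits.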
Neural Computation (2007) 19 (3): 639–671.
Published: 01 March 2007
Abstract
We studied the hypothesis that synaptic dynamics is controlled by three basic principles: (1) synapses adapt their weights so that neurons can effectively transmit information, (2) homeostatic processes stabilize the mean firing rate of the postsynaptic neuron, and (3) weak synapses adapt more slowly than strong ones, while maintenance of strong synapses is costly. Our results show that a synaptic update rule derived from these principles shares features with spike-timing-dependent plasticity, is sensitive to correlations in the input, and is useful for synaptic memory. Moreover, input selectivity (sharply tuned receptive fields) of postsynaptic neurons develops only if stimuli with strong features are presented. Sharply tuned neurons can coexist with unselective ones, and the distribution of synaptic weights can be unimodal or bimodal. The formulation of synaptic dynamics through an optimality criterion provides a simple graphical argument for the stability of synapses, necessary for synaptic memory.
Neural Computation (2005) 17 (10): 2139–2175.
Published: 01 October 2005
Abstract
Oscillatory and synchronized neural activities are commonly found in the brain, and evidence suggests that many of them are caused by global feedback. Their mechanisms and roles in information processing have been discussed often using purely feedforward networks or recurrent networks with constant inputs. On the other hand, real recurrent neural networks are abundant and continually receive information-rich inputs from the outside environment or other parts of the brain. We examine how feedforward networks of spiking neurons with delayed global feedback process information about temporally changing inputs. We show that the network behavior is more synchronous as well as more correlated with and phase-locked to the stimulus when the stimulus frequency is resonant with the inherent frequency of the neuron or that of the network oscillation generated by the feedback architecture. The two eigenmodes have distinct dynamical characteristics, which are supported by numerical simulations and by analytical arguments based on frequency response and bifurcation theory. This distinction is similar to the class I versus class II classification of single neurons according to the bifurcation from quiescence to periodic firing, and the two modes depend differently on system parameters. These two mechanisms may be associated with different types of information processing.
Neural Computation (2005) 17 (9): 2034–2059.
Published: 01 September 2005
Abstract
We report on deterministic and stochastic evolutions of firing states through a feedforward neural network with Mexican-hat-type connectivity. The prevalence of columnar structures in a cortex implies spatially localized connectivity between neural pools. Although feedforward neural network models with homogeneous connectivity have been intensively studied within the context of the synfire chain, the effect of local connectivity has not yet been studied so thoroughly. When a neuron fires independently, the dynamics of macroscopic state variables (a firing rate and spatial eccentricity of a firing pattern) is deterministic from the law of large numbers. Possible stable firing states, which are derived from deterministic evolution equations, are uniform, localized, and nonfiring. The multistability of these three states is obtained where the excitatory and inhibitory interactions among neurons are balanced. When the presynapse-dependent variance in connection efficacies is incorporated into the network, the variance generates common noise. Then the evolution of the macroscopic state variables becomes stochastic, and neurons begin to fire in a correlated manner due to the common noise. The correlation structure that is generated by common noise exhibits a nontrivial bimodal distribution. The development of a firing state through neural layers does not converge to a certain fixed point but keeps on fluctuating.
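Mexican-hat connectivity (short-range excitation, longer-range inhibition) is conventionally written as a difference of Gaussians. The ring geometry, widths, and amplitudes below are illustrative assumptions, not the paper's values:

```python
import numpy as np

def mexican_hat(n, a_exc=1.0, s_exc=3.0, a_inh=0.5, s_inh=9.0):
    """Difference-of-Gaussians weights on a ring of n positions:
    narrow excitation minus broader inhibition."""
    pos = np.arange(n)
    d = np.abs(pos[:, None] - pos[None, :])
    d = np.minimum(d, n - d)                 # wrap-around (ring) distance
    return (a_exc * np.exp(-d ** 2 / (2 * s_exc ** 2))
            - a_inh * np.exp(-d ** 2 / (2 * s_inh ** 2)))

W = mexican_hat(60)
rate_in = np.exp(-(np.arange(60) - 30.0) ** 2 / 50.0)  # localized input bump
rate_out = np.maximum(W @ rate_in, 0.0)                # rectified next-layer rates
```

Iterating this feedforward map propagates a localized bump from layer to layer, the spatially eccentric firing state the abstract contrasts with the uniform and nonfiring states.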
Neural Computation (2004) 16 (3): 627–663.
Published: 01 March 2004
Abstract
It has been a matter of debate how firing rates or spatiotemporal spike patterns carry information in the brain. Recent experimental and theoretical work in part showed that these codes, especially a population rate code and a synchronous code, can be dually used in a single architecture. However, we are not yet able to relate the role of firing rates and synchrony to the spatiotemporal structure of inputs and the architecture of neural networks. In this article, we examine how feedforward neural networks encode multiple input sources in the firing patterns. We apply spike-time-dependent plasticity as a fundamental mechanism to yield synaptic competition and the associated input filtering. We use the Fokker-Planck formalism to analyze the mechanism for synaptic competition in the case of multiple inputs, which underlies the formation of functional clusters in downstream layers in a self-organizing manner. Depending on the types of feedback coupling and shared connectivity, clusters are independently engaged in population rate coding or synchronous coding, or they interact to serve as input filters. Classes of dual codings and functional roles of spike-time-dependent plasticity are also discussed.
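Spike-time-dependent plasticity, the competition mechanism named above, is commonly modeled with a pairwise exponential window; the time constant and amplitudes below are standard illustrative values, not those of the article:

```python
import numpy as np

def stdp_dw(dt_spike, a_plus=0.01, a_minus=0.012, tau=20.0):
    """Weight change for a post-minus-pre spike time difference (ms):
    potentiation when pre leads post, depression otherwise. The slight
    dominance of depression (a_minus > a_plus) makes the rule competitive."""
    if dt_spike > 0:
        return a_plus * np.exp(-dt_spike / tau)    # pre before post: LTP
    return -a_minus * np.exp(dt_spike / tau)       # post before pre: LTD
```

Accumulating these pairwise updates over correlated input spike trains drives the synaptic competition and input filtering that the Fokker-Planck analysis in the abstract formalizes.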
Neural Computation (2003) 15 (6): 1341–1372.
Published: 01 June 2003
Abstract
Neuronal information processing is often studied on the basis of spiking patterns. The relevant statistics, such as firing rates calculated with the peristimulus time histogram, are obtained by averaging spiking patterns over many experimental runs. However, in real situations animals must respond to a single experimental stimulation, and what is available to the brain is not the trial statistics but the population statistics. Consequently, physiological ergodicity, namely, the consistency between trial averaging and population averaging, is implicitly assumed in the data analyses, although it does not trivially hold true. In this letter, we investigate how characteristics of noisy neural network models, such as single neuron properties, external stimuli, and synaptic inputs, affect the statistics of firing patterns. In particular, we show how high membrane potential sensitivity to input fluctuations, the inability of neurons to remember past inputs, external stimuli with large variability and temporally separated peaks, and relatively small contributions of synaptic inputs result in spike trains that are reproducible over many trials. The reproducibility of spike trains and synchronous firing are contrasted and related to the ergodicity issue. Several numerical calculations with neural network examples are carried out to support the theoretical results.
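The trial-versus-population averaging question can be illustrated with a toy ergodic case: independent Bernoulli (discretized Poisson) neurons sharing one rate profile, where the many-trial PSTH of one neuron and the single-trial population average estimate the same underlying rate. All numbers are illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)
n_bins, n_trials, n_pop = 200, 500, 500
rate = 0.05 * (1.0 + np.sin(np.linspace(0.0, 4.0 * np.pi, n_bins)))  # spikes/bin

# One neuron over many trials vs. many identical neurons on one trial,
# all spiking independently with the same time-varying Bernoulli rate.
trials = rng.random((n_trials, n_bins)) < rate
population = rng.random((n_pop, n_bins)) < rate

psth = trials.mean(axis=0)         # trial average (PSTH) of a single neuron
pop_avg = population.mean(axis=0)  # population average on a single trial
```

The abstract's point is that the agreement shown here is an assumption, not a given: correlations or long memory in the network can make the two averages disagree.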
Neural Computation (2003) 15 (1): 103–125.
Published: 01 January 2003
Abstract
A functional role for precise spike timing has been proposed as an alternative hypothesis to rate coding. We show in this article that both the synchronous firing code and the population rate code can be used dually in a common framework of a single neural network model. Furthermore, these two coding mechanisms are bridged continuously by several modulatable model parameters, including shared connectivity, feedback strength, membrane leak rate, and neuron heterogeneity. The rates of change of these parameters are closely related to the response time and the timescale of learning.
Neural Computation (2002) 14 (7): 1599–1628.
Published: 01 July 2002
Abstract
Interspike intervals of spikes emitted from an integrator neuron model of sensory neurons can encode input information represented as a continuous signal from a deterministic system. If a real brain uses spike timing as a means of information processing, other neurons receiving spatiotemporal spikes from such sensory neurons must also be capable of treating information included in deterministic interspike intervals. In this article, we examine functions of neurons modeling cortical neurons receiving spatiotemporal spikes from many sensory neurons. We show that such neuron models can encode stimulus information passed from the sensory model neurons in the form of interspike intervals. Each sensory neuron connected to the cortical neuron contributes equally to the information collection by the cortical neuron. Although the incident spike train to the cortical neuron is a superimposition of spike trains from many sensory neurons, it need not be decomposed into spike trains according to the input neurons. These results are also preserved for generalizations of sensory neurons such as a small amount of leak, noise, inhomogeneity in firing rates, or biases introduced in the phase distributions.
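The integrator encoding can be sketched with a perfect (non-leaky) integrate-and-fire unit: over each interspike interval the integral of the input equals the firing threshold, so the local mean of the signal is recoverable as threshold/ISI. The signal shape, threshold, and time step below are illustrative assumptions:

```python
import numpy as np

dt, theta = 0.001, 1.0
t = np.arange(0.0, 10.0, dt)
signal = 2.0 + np.sin(2.0 * np.pi * 0.3 * t)   # positive continuous input

# Perfect integrator: fire and reset whenever the accumulated input
# reaches the threshold theta.
v, spikes = 0.0, []
for ti, si in zip(t, signal):
    v += si * dt
    if v >= theta:
        v -= theta
        spikes.append(ti)

isis = np.diff(spikes)
decoded = theta / isis   # mean of the signal over each interspike interval
```

Each decoded value stays within the signal's range, which is the sense in which the interspike intervals carry the continuous stimulus; the abstract's result is that downstream neurons can recover such information even from superimposed spike trains.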
Neural Computation (2001) 13 (12): 2799–2822.
Published: 01 December 2001
Abstract
Although various means of information representation in the cortex have been considered, the fundamental mechanism for such representation is not well understood. The relation between neural network dynamics and the properties of information representation needs to be examined. We examined the spatial pattern properties of mean firing rates and spatiotemporal spikes in an interconnected spiking neural network model. We found that whereas the spatiotemporal spike patterns are chaotic and unstable, the spatial patterns of mean firing rates (SPMFR) are steady and stable, reflecting the internal structure of synaptic weights. Interestingly, the chaotic instability contributes to fast stabilization of the SPMFR. These findings suggest that there are two types of network dynamics behind neuronal spiking: internally driven dynamics and externally driven dynamics. When the internally driven dynamics dominate, spikes are relatively more chaotic and independent of external inputs, and the SPMFR are steady and stable. When the externally driven dynamics dominate, the spiking patterns are relatively more dependent on the spatiotemporal structure of external inputs. These emergent properties of information representation imply that the brain may adopt a dual coding system. Recent experimental data suggest that internally driven and externally driven dynamics coexist and work together in the cortex.