Kosuke Hamaguchi
Journal Articles
Neural Computation (2008) 20 (4): 994–1025.
Published: 01 April 2008
Abstract
The continuous attractor is a promising model for describing the encoding of continuous stimuli in neural systems. In a continuous attractor, the stationary states of the neural system form a continuous parameter space on which the system is neutrally stable. This property enables the neural system to track time-varying stimuli smoothly, but it also degrades the accuracy of information retrieval, since these stationary states are easily disturbed by external noise. In this work, based on a simple model, we systematically investigate the dynamics and the computational properties of continuous attractors. To analyze the dynamics of a large network, which is otherwise extremely complicated, we develop a strategy to reduce its dimensionality, exploiting the fact that a continuous attractor eliminates noise components perpendicular to the attractor space very quickly. We therefore project the network dynamics onto the tangent of the attractor space and reduce it to a one-dimensional Ornstein-Uhlenbeck process. Based on this simplified model, we investigate (1) the decoding error of a continuous attractor driven by external noisy inputs, (2) the tracking speed of a continuous attractor when the external stimulus changes abruptly, (3) the neural correlation structure associated with the specific dynamics of a continuous attractor, and (4) the consequences of asymmetric neural correlations for statistical population decoding. The potential implications of these results for our understanding of neural information processing are also discussed.
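The reduction to a one-dimensional Ornstein-Uhlenbeck process lends itself to a short numerical check. The sketch below, not the authors' code, simulates such a process with the Euler-Maruyama scheme and compares the empirical variance with the stationary value sigma^2 * tau / 2, which plays the role of the decoding error discussed in the abstract. All parameter values (tau, sigma, the step size) are illustrative assumptions.

```python
import numpy as np

# Minimal sketch, assuming dx = -(x / tau) dt + sigma dW as the reduced
# model of bump-position dynamics along the attractor.
rng = np.random.default_rng(0)

tau = 10.0        # relaxation time along the attractor (illustrative)
sigma = 0.1       # effective noise strength projected onto the attractor
dt = 0.01         # Euler-Maruyama step
n_steps = 200_000

x = 0.0                         # decoded position relative to the stimulus
trace = np.empty(n_steps)
for t in range(n_steps):
    # Drift pulls the bump back toward the stimulus; the projected input
    # noise diffuses it along the (neutrally stable) attractor direction.
    x += -(x / tau) * dt + sigma * np.sqrt(dt) * rng.standard_normal()
    trace[t] = x

# The stationary variance of this OU process is sigma**2 * tau / 2.
print("empirical variance  :", trace.var())
print("theoretical variance:", sigma**2 * tau / 2)
```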
Journal Articles
Neural Computation (2007) 19 (9): 2468–2491.
Published: 01 September 2007
Abstract
Repetitions of precise spike patterns have been reported both in vivo and in vitro for more than a decade. Studies of a spike volley (a pulse packet) propagating through a homogeneous feedforward network have demonstrated its capability of generating spike patterns with millisecond fidelity. This model, called the synfire chain, suggests a possible mechanism for generating repeated spike patterns (RSPs). The propagation speed of the pulse packet determines the temporal properties of RSPs, but the relationship between propagation speed and network structure is not well understood. We studied a feedforward network with Mexican-hat connectivity using leaky integrate-and-fire neurons and analyzed the network dynamics with the Fokker-Planck equation. We examined the effect of the spatial shape of pulse packets on RSPs in a network with multistability. In the multistable regime, pulse packets can take spatially uniform or localized shapes, and they propagate at different speeds. These distinct pulse packets generate RSPs with different timescales, but the order of spikes and the ratios between interspike intervals are preserved. This result indicates that RSPs can be transformed into the same template pattern by expanding or contracting the timescale.
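As a toy illustration of this setup, the following sketch (my construction, not the authors' code) propagates a localized pulse packet through feedforward layers of leaky integrate-and-fire neurons coupled by a Mexican-hat (difference-of-Gaussians) kernel on a ring. Every parameter value is an assumption chosen only to make the sketch run; the paper's analysis instead proceeds through the Fokker-Planck equation.

```python
import numpy as np

rng = np.random.default_rng(1)

N = 200                                  # neurons per layer, on a ring
tau_m, v_th, v_reset = 20.0, 1.0, 0.0    # LIF constants (ms, arbitrary units)
dt, T = 0.1, 30.0                        # step and window (ms)
steps = int(T / dt)

# Mexican-hat kernel: narrow excitation minus broader inhibition.
pos = np.arange(N)
dist = np.abs(pos[:, None] - pos[None, :])
dist = np.minimum(dist, N - dist)                      # ring distance
a_e, s_e, a_i, s_i = 0.2, 10.0, 0.08, 30.0             # assumed amplitudes/widths
W = a_e * np.exp(-dist**2 / (2 * s_e**2)) - a_i * np.exp(-dist**2 / (2 * s_i**2))

def propagate(spikes_prev):
    """Drive one LIF layer with the previous layer's spikes.

    spikes_prev: array of (neuron_index, spike_time) rows; returns the same
    representation for the receiving layer.
    """
    v = np.zeros(N)
    out = []
    for step in range(steps):
        t = step * dt
        # Delta-pulse synapses: spikes arriving in this bin jump the voltage.
        mask = (spikes_prev[:, 1] >= t) & (spikes_prev[:, 1] < t + dt)
        drive = W[:, spikes_prev[mask, 0].astype(int)].sum(axis=1)
        v += dt * (-v / tau_m) + drive \
             + 0.02 * np.sqrt(dt) * rng.standard_normal(N)   # background noise
        fired = v >= v_th
        v[fired] = v_reset
        out.extend((i, t) for i in np.flatnonzero(fired))
    return np.array(out, dtype=float).reshape(-1, 2)

# Localized pulse packet: neurons near the ring's midpoint fire once near t=2 ms.
packet = np.array([(i, rng.normal(2.0, 0.3)) for i in range(N//2 - 15, N//2 + 15)],
                  dtype=float)
spikes = packet
for layer in range(1, 6):
    spikes = propagate(spikes)
    if len(spikes):
        print(f"layer {layer}: {len(spikes)} spikes, "
              f"mean spike time {spikes[:, 1].mean():.2f} ms")
```

The mean spike time per layer gives a crude read on propagation speed; comparing it for a localized versus a uniform initial packet is the kind of contrast the abstract describes.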
Journal Articles
Neural Computation (2005) 17 (9): 2034–2059.
Published: 01 September 2005
Abstract
We report on the deterministic and stochastic evolution of firing states through a feedforward neural network with Mexican-hat-type connectivity. The prevalence of columnar structures in the cortex implies spatially localized connectivity between neural pools. Although feedforward neural network models with homogeneous connectivity have been intensively studied in the context of the synfire chain, the effect of local connectivity has not yet been studied as thoroughly. When neurons fire independently, the dynamics of the macroscopic state variables (the firing rate and the spatial eccentricity of the firing pattern) are deterministic by the law of large numbers. The possible stable firing states derived from the deterministic evolution equations are uniform, localized, and nonfiring; these three states coexist (multistability) when the excitatory and inhibitory interactions among neurons are balanced. When presynapse-dependent variance in the connection efficacies is incorporated into the network, this variance generates common noise. The evolution of the macroscopic state variables then becomes stochastic, and neurons begin to fire in a correlated manner because of the common noise. The correlation structure generated by the common noise exhibits a nontrivial bimodal distribution. The development of a firing state through the neural layers does not converge to a fixed point but continues to fluctuate.
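To make the role of common noise concrete, here is a toy numerical sketch (my construction with assumed dynamics, not the paper's evolution equations). With independent noise across a large pool, the population-averaged rate evolves almost deterministically by the law of large numbers; a shared fluctuation, by contrast, survives the population average and keeps the macroscopic state fluctuating from layer to layer.

```python
import numpy as np

# Toy contrast between independent and common noise in a layered rate model.
# The tanh update, gain, and noise level are assumptions for illustration.
rng = np.random.default_rng(2)

N = 10_000          # neurons per pool
layers = 50         # feedforward layers
sigma = 0.3         # noise strength

rate_indep = np.empty(layers)
rate_common = np.empty(layers)
r_i = r_c = 0.5     # initial macroscopic firing rate

for layer in range(layers):
    # Independent noise: each neuron draws its own fluctuation, so the
    # population-averaged drive is nearly deterministic for large N.
    drive_i = np.tanh(2 * r_i + sigma * rng.standard_normal(N))
    r_i = drive_i.mean()

    # Common noise: one shared fluctuation shifts the whole pool together,
    # so the population average inherits the noise.
    r_c = np.tanh(2 * r_c + sigma * rng.standard_normal())

    rate_indep[layer] = r_i
    rate_common[layer] = r_c

# After a transient, the independent-noise rate settles near a fixed point,
# while the common-noise rate keeps fluctuating across layers.
print("std across layers, independent noise:", rate_indep[20:].std())
print("std across layers, common noise     :", rate_common[20:].std())
```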