1-9 of 9
Shigeru Shinomoto
Journal Articles
Publisher: Journals Gateway
Neural Computation (2013) 25 (4): 854–876.
Published: 01 April 2013
Abstract
In many cortical areas, neural spike trains do not follow a Poisson process. In this study, we investigate a possible benefit of non-Poisson spiking for information transmission by studying the minimal rate fluctuation that can be detected by a Bayesian estimator. The idea is that an inhomogeneous Poisson process may make it difficult for downstream decoders to resolve subtle changes in rate fluctuation, but by using a more regular non-Poisson process, the nervous system can make rate fluctuations easier to detect. We evaluate the degree to which regular firing reduces the rate fluctuation detection threshold. We find that the threshold for detection is reduced in proportion to the coefficient of variation of interspike intervals.
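The coefficient of variation (CV) of interspike intervals referred to above is straightforward to compute. A minimal sketch (the mean interval, sample sizes, and gamma shape parameter are illustrative choices, not taken from the paper): a Poisson process has exponential ISIs with CV = 1, while a more regular gamma renewal process with shape kappa has CV = 1/sqrt(kappa).

```python
import numpy as np

rng = np.random.default_rng(0)

def isi_cv(intervals):
    """Coefficient of variation of interspike intervals (ISIs)."""
    return intervals.std() / intervals.mean()

# Poisson spiking has exponential ISIs, so CV = 1.
poisson_isis = rng.exponential(scale=0.1, size=50_000)

# A more regular, non-Poisson train: gamma ISIs with shape kappa > 1,
# for which CV = 1 / sqrt(kappa) < 1 (same 0.1 s mean interval).
kappa = 4.0
gamma_isis = rng.gamma(shape=kappa, scale=0.1 / kappa, size=50_000)

print(isi_cv(poisson_isis))  # close to 1.0
print(isi_cv(gamma_isis))    # close to 0.5
```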
Neural Computation (2011) 23 (12): 3125–3144.
Published: 01 December 2011
Abstract
The time histogram is a fundamental tool for representing the inhomogeneous density of event occurrences such as neuronal firings. The shape of a histogram critically depends on the size of the bins that partition the time axis. In most neurophysiological studies, however, researchers have selected the bin size arbitrarily when analyzing fluctuations in neuronal activity. A rigorous method for selecting the appropriate bin size was recently derived so that the mean integrated squared error between the time histogram and the unknown underlying rate is minimized (Shimazaki & Shinomoto, 2007). This derivation assumes that spikes are drawn independently from a given rate. In practice, however, biological neurons express non-Poissonian features in their firing patterns: spike occurrence depends on the preceding spikes, which inevitably degrades the optimization. In this letter, we revise the method for selecting the bin size to account for these possible non-Poissonian features. The improvement in the goodness of fit of the time histogram is assessed and confirmed with numerically simulated non-Poissonian spike trains derived from a given fluctuating rate. For some experimental data, the revised algorithm selects a time histogram whose shape differs from that chosen by the Poissonian optimization method.
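Non-Poissonian spike trains "derived from a given fluctuating rate", as used in the simulations above, can be generated by time rescaling: draw unit-mean gamma intervals in operational time and map the event times back through the inverse cumulative rate. A sketch with invented parameters (this is a generator for such test data, not the revised bin-size algorithm itself):

```python
import numpy as np

rng = np.random.default_rng(1)

def gamma_spikes_from_rate(rate, dt, kappa, rng):
    """Simulate a non-Poisson (gamma) spike train whose instantaneous rate
    follows `rate` (sampled on a grid of width dt), via time rescaling:
    draw unit-mean gamma intervals in operational time, then map the
    resulting event times back through the inverse cumulative rate."""
    t_grid = np.arange(len(rate)) * dt
    cum_rate = np.cumsum(rate) * dt              # Lambda(t), strictly increasing
    total = cum_rate[-1]
    draws = rng.gamma(kappa, 1.0 / kappa, size=int(2 * total))
    op_times = np.cumsum(draws)
    op_times = op_times[op_times < total]        # events within the recording
    return np.interp(op_times, cum_rate, t_grid) # invert Lambda by interpolation

dt = 0.001
t = np.arange(0, 10, dt)
rate = 30.0 + 20.0 * np.sin(2 * np.pi * 0.5 * t)  # fluctuating rate in Hz
spikes = gamma_spikes_from_rate(rate, dt, kappa=4.0, rng=rng)
print(len(spikes) / 10.0)                         # mean firing rate, near 30 Hz
```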
Neural Computation (2011) 23 (12): 3070–3093.
Published: 01 December 2011
Abstract
The set of firing rates of the presynaptic excitatory and inhibitory neurons constitutes the input signal to the postsynaptic neuron. Here we investigate the estimation of these time-varying input rates from an intracellularly recorded membrane potential. For that purpose, the membrane potential dynamics must be specified. We consider the Ornstein-Uhlenbeck stochastic process, one of the most common single-neuron models, with time-dependent mean and variance. Assuming that these two moments vary slowly, the estimation problem can be formulated as a state-space model. We develop an algorithm that estimates the paths of the mean and variance of the input current using the empirical Bayes approach; the input firing rates are then directly available from these moments. The proposed method is applied to three simulated data examples: a constant signal, a sinusoidally modulated signal, and a constant signal with a jump. For the constant signal, the estimation performance of the method is comparable to that of the traditionally applied maximum likelihood method. Further, the proposed method accurately estimates both continuous and discontinuous time-varying signals. The case of the signal with a jump, which does not satisfy the assumption of slow variability, verifies the robustness of the method. We conclude that the method provides reliable estimates of the total input firing rates, which are not experimentally measurable.
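An Ornstein-Uhlenbeck membrane potential with a slowly varying input mean, as assumed above, can be simulated by Euler-Maruyama integration. A sketch with invented parameter values (this is the forward model the paper assumes, not its estimation algorithm):

```python
import numpy as np

rng = np.random.default_rng(2)

def simulate_ou(mu, sigma, tau, dt, v0, rng):
    """Euler-Maruyama integration of an Ornstein-Uhlenbeck voltage,
    dV = -((V - mu(t)) / tau) dt + sigma(t) dW,
    with slowly varying input mean mu(t) and noise amplitude sigma(t)."""
    v = np.empty(len(mu))
    v[0] = v0
    noise = rng.standard_normal(len(mu) - 1)
    for i in range(1, len(mu)):
        drift = -(v[i - 1] - mu[i - 1]) / tau
        v[i] = v[i - 1] + drift * dt + sigma[i - 1] * np.sqrt(dt) * noise[i - 1]
    return v

dt, tau = 0.0005, 0.02                          # s; tau: membrane time constant
t = np.arange(0, 5, dt)
mu = -65.0 + 5.0 * np.sin(2 * np.pi * 0.2 * t)  # slowly modulated input mean (mV)
sigma = np.full_like(t, 2.0)                    # constant noise amplitude here
v = simulate_ou(mu, sigma, tau, dt, v0=-65.0, rng=rng)
print(v.mean())                                 # near -65: V tracks the mean of mu(t)
```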
Neural Computation (2009) 21 (7): 1931–1951.
Published: 01 July 2009
Abstract
Cortical neurons in vivo have long been regarded as Poisson spike generators that convey no information other than the rate of random firing. Recently, using a metric for analyzing the local variation of interspike intervals, researchers have found that individual neurons express specific spiking patterns (which may symbolically be termed regular, random, or bursty) rather invariantly in time. In order to study the dynamics of firing patterns in greater detail, we propose a Bayesian method for simultaneously estimating the firing irregularity and the firing rate of a given spike sequence, and we implement an algorithm that makes the empirical Bayesian estimation practicable for data comprising a large number of spikes. Application of this method to electrophysiological data revealed a subtle correlation between the degree of firing irregularity and the firing rate for individual neurons. The firing irregularity depended only weakly on the firing rate and remained practically unchanged over time for individual neurons in cortical areas V1 and MT, whereas it fluctuated greatly in the lateral geniculate nucleus of the thalamus. This indicates the presence of autocontrolling mechanisms that maintain firing patterns in the cortex, and their absence in the thalamus.
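The paper's empirical Bayes machinery is involved, but the underlying idea of tracking rate and irregularity together can be caricatured with a sliding window: estimate the firing rate from the spike count and the irregularity from the local-variation measure Lv in each window. A crude sketch with invented parameters (a windowed moment estimate, not the Bayesian estimator described above):

```python
import numpy as np

rng = np.random.default_rng(3)

def local_variation(isis):
    """Lv irregularity measure: ~1 for Poisson, < 1 regular, > 1 bursty."""
    a, b = isis[:-1], isis[1:]
    return 3.0 * np.mean(((a - b) / (a + b)) ** 2)

def sliding_rate_and_lv(spike_times, window, step):
    """Crude joint estimate of firing rate and Lv in successive windows."""
    out = []
    t0 = spike_times[0]
    while t0 + window <= spike_times[-1]:
        s = spike_times[(spike_times >= t0) & (spike_times < t0 + window)]
        if len(s) >= 3:
            out.append((t0, len(s) / window, local_variation(np.diff(s))))
        t0 += step
    return out

# A regular (gamma, shape 4) spike train at 20 Hz lasting about 30 s.
isis = rng.gamma(4.0, 1.0 / (4.0 * 20.0), size=600)
spikes = np.cumsum(isis)
for t0, rate, lv in sliding_rate_and_lv(spikes, window=2.0, step=5.0):
    print(f"t0={t0:5.2f} s  rate={rate:5.1f} Hz  Lv={lv:.2f}")
```

For this regular train, the per-window Lv should stay well below 1 while the rate hovers near 20 Hz.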
Neural Computation (2007) 19 (6): 1503–1527.
Published: 01 June 2007
Abstract
The time histogram method is the most basic tool for capturing the time-dependent rate of neuronal spikes. In the neurophysiological literature, the bin size, which critically determines the goodness of fit of the time histogram to the underlying spike rate, has generally been selected subjectively by individual researchers. Here, we propose a method for objectively selecting the bin size from the spike count statistics alone, so that the resulting bar- or line-graph time histogram best represents the unknown underlying spike rate. For a small number of spike sequences generated from a modestly fluctuating rate, the optimal bin size may diverge, indicating that any time histogram is likely to capture a spurious rate. Given such a paucity of data, the method presented here can nevertheless suggest how many experimental trials should be added in order to obtain a meaningful time-dependent histogram with the required accuracy.
Neural Computation (2003) 15 (12): 2823–2842.
Published: 01 December 2003
Abstract
Spike sequences recorded from four cortical areas of an awake behaving monkey were examined to explore characteristics that vary among neurons. We found that a measure of the local variation of interspike intervals, L_V, is nearly the same for every spike sequence for any given neuron, while it varies significantly among neurons. The distributions of L_V values for neuron ensembles in three of the four areas were found to be distinctly bimodal. Two groups of neurons classified according to the spiking irregularity exhibit different responses to the same stimulus. This suggests that neurons in each area can be classified into different groups possessing unique spiking statistics and corresponding functional properties.
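The local variation measure is computed directly from consecutive interspike intervals: L_V = 3/(n-1) * sum(((I_i - I_{i+1}) / (I_i + I_{i+1}))^2). A sketch on synthetic renewal processes (the gamma and mixture parameters are invented to mimic regular, random, and bursty firing; L_V is about 1 for Poisson firing, below 1 for regular, and above 1 for bursty):

```python
import numpy as np

rng = np.random.default_rng(5)

def lv(isis):
    """Local variation of interspike intervals (Shinomoto et al., 2003):
    Lv = 3/(n-1) * sum(((I_i - I_{i+1}) / (I_i + I_{i+1}))**2)."""
    a, b = isis[:-1], isis[1:]
    return 3.0 * np.mean(((a - b) / (a + b)) ** 2)

n = 2000
regular = rng.gamma(4.0, 0.25, size=n)       # gamma, shape 4: regular firing
poisson = rng.exponential(1.0, size=n)       # Poisson firing
# Bursty: a mixture of short within-burst and long between-burst intervals.
bursty = np.where(rng.random(n) < 0.7,
                  rng.exponential(0.1, size=n),
                  rng.exponential(3.0, size=n))

print(lv(regular))   # well below 1
print(lv(poisson))   # close to 1
print(lv(bursty))    # well above 1
```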
The Ornstein-Uhlenbeck Process Does Not Reproduce Spiking Statistics of Neurons in Prefrontal Cortex
Neural Computation (1999) 11 (4): 935–951.
Published: 15 May 1999
Abstract
Cortical neurons of behaving animals generate irregular spike sequences. Recently, there has been a heated discussion about the origin of this irregularity. Softky and Koch (1993) pointed out the inability of standard single-neuron models to reproduce the irregularity of the observed spike sequences when the model parameters are chosen within a certain range that they consider to be plausible. Shadlen and Newsome (1994), on the other hand, demonstrated that a standard leaky integrate-and-fire model can reproduce the irregularity if the inhibition is balanced with the excitation. Motivated by this discussion, we attempted to determine whether the Ornstein-Uhlenbeck process, which is naturally derived from the leaky integration assumption, can in fact reproduce higher-order statistics of biological data. For this purpose, we consider actual neuronal spike sequences recorded from the monkey prefrontal cortex to calculate the higher-order statistics of the interspike intervals. Consistency of the data with the model is examined on the basis of the coefficient of variation and the skewness coefficient, which are, respectively, a measure of the spiking irregularity and a measure of the asymmetry of the interval distribution. It is found that the biological data are not consistent with the model if the model time constant assumes a value within a certain range believed to cover all reasonable values. This fact suggests that the leaky integrate-and-fire model with the assumption of uncorrelated inputs is not adequate to account for the spiking in at least some cortical neurons.
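The two statistics used in the consistency check are simple sample moments of the interspike intervals. A sketch (gamma-distributed ISIs serve as a stand-in renewal model, not the OU first-passage distribution itself; for a gamma distribution the two statistics are tied by skew = 2 * CV, one example of the kind of constraint that can be tested against data):

```python
import numpy as np

rng = np.random.default_rng(6)

def cv_and_skew(isis):
    """Coefficient of variation and skewness coefficient of the
    interspike-interval distribution."""
    m, s = isis.mean(), isis.std()
    return s / m, np.mean(((isis - m) / s) ** 3)

# Gamma ISIs with shape kappa: CV = 1/sqrt(kappa), skew = 2/sqrt(kappa),
# so skew = 2 * CV along the whole family.
results = {}
for kappa in (1.0, 4.0):
    cv, sk = cv_and_skew(rng.gamma(kappa, 1.0, size=200_000))
    results[kappa] = (cv, sk)
    print(f"kappa={kappa}: CV={cv:.2f}, skew={sk:.2f}, 2*CV={2 * cv:.2f}")
```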
Neural Computation (1995) 7 (1): 158–172.
Published: 01 January 1995
Abstract
Even if it is not possible to reproduce a target input-output relation, a learning machine should be able to minimize the probability of making errors. A practical learning algorithm should also be simple enough to go without memorizing example data, if possible. Incremental algorithms such as error backpropagation satisfy this requirement. We propose incremental algorithms that provide fast convergence of the machine parameter θ to its optimal choice θ_o with respect to the number of examples t. We consider the binary choice model whose target relation has a blurred boundary and a machine whose parameter θ specifies a decision boundary for the output prediction. The question we wish to address is how fast θ can approach θ_o, depending on whether in the learning stage the machine can specify inputs as queries to the target relation or the inputs are drawn from a certain distribution. If queries are permitted, the machine can achieve the fastest convergence, (θ − θ_o)^2 ∼ O(t^−1). If not, O(t^−1) convergence is generally not attainable. For learning without queries, we showed in a previous paper that the error minimum algorithm exhibits a slow convergence, (θ − θ_o)^2 ∼ O(t^−2/3). We propose here a practical algorithm that provides a rather fast convergence, O(t^−4/5). It is possible to further accelerate the convergence by using more elaborate algorithms. The fastest convergence turned out to be O((ln t)^2 t^−1). This scaling is considered optimal among possible algorithms, and it is not due to the incremental nature of our algorithm.
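The binary choice model with a blurred boundary can be set up in a few lines, and a generic incremental learner illustrates a 1/t learning-rate schedule. This is a hedged sketch, not any of the paper's specific algorithms: the logistic blur, the parameter values, and the learning-rate constant are all invented for illustration, and the update shown is a plain stochastic-gradient step on the log loss.

```python
import math
import numpy as np

rng = np.random.default_rng(7)

theta_o, b = 0.3, 0.2        # true boundary and blur width (invented values)

def p_one(x, theta):
    """Blurred binary-choice target: P(y = 1 | x) rises smoothly across
    the decision boundary theta over a width set by b."""
    return 1.0 / (1.0 + math.exp(-(x - theta) / b))

theta, c = 0.0, 0.4          # initial parameter and learning-rate constant
for t in range(1, 100_001):
    x = rng.uniform(-1.0, 1.0)
    y = float(rng.random() < p_one(x, theta_o))    # noisy label from the target
    # Incremental update: one stochastic-gradient step on the log loss,
    # with the learning rate annealed as c / t.
    theta -= (c / t) * (y - p_one(x, theta)) / b
print(abs(theta - theta_o))  # small: theta converges toward theta_o
```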
Neural Computation (1992) 4 (4): 605–618.
Published: 01 July 1992
Abstract
If a machine learns to make decisions from a number of examples, the generalization error ε(t) is defined as the average probability that the machine makes an incorrect decision on a new example after training on t examples. The generalization error decreases as t increases, and the curve ε(t) is called a learning curve. The present paper uses the Bayesian approach to show that, under the annealed approximation, learning curves can be classified into four asymptotic types. If the machine is deterministic with noiseless teacher signals, then (1) ε ∼ at^−1 when the correct machine parameter is unique, and (2) ε ∼ at^−2 when the set of correct parameters has a finite measure. If the teacher signals are noisy, then (3) ε ∼ at^−1/2 for a deterministic machine, and (4) ε ∼ c + at^−1 for a stochastic machine.
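Type (1), ε ∼ at^−1 for a deterministic machine with a unique correct parameter and noiseless teacher signals, can be checked numerically for a one-dimensional threshold machine: a learner that outputs the midpoint of the version space errs by O(1/t). A sketch with invented parameters (uniform inputs on [0, 1], threshold at 0.4):

```python
import numpy as np

rng = np.random.default_rng(8)

def gen_error(n_examples, theta_o=0.4, trials=2000):
    """Average generalization error of a deterministic threshold machine
    learned from n noiseless examples (x uniform on [0, 1], y = x > theta_o).
    The learner outputs the midpoint of the version space; its error
    probability equals |theta_hat - theta_o|."""
    errs = []
    for _ in range(trials):
        x = rng.random(n_examples)
        y = x > theta_o
        lo = x[~y].max() if (~y).any() else 0.0    # largest negative example
        hi = x[y].min() if y.any() else 1.0        # smallest positive example
        errs.append(abs((lo + hi) / 2.0 - theta_o))
    return float(np.mean(errs))

# Type (1): unique correct parameter, noiseless teacher -> eps ~ a / t,
# so t * eps(t) should be roughly constant (about 0.5 in this setup).
for t in (25, 50, 100, 200):
    print(t, t * gen_error(t))
```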