Search results for Yoshiyuki Kabashima (1–2 of 2)
Journal Articles
Publisher: Journals Gateway
Neural Computation (2020) 32 (11): 2187–2211.
Published: 01 November 2020
Inferring Neuronal Couplings From Spiking Data Using a Systematic Procedure With a Statistical Criterion
Abstract
Recent remarkable advances in experimental techniques have provided a background for inferring neuronal couplings from point-process data that include a large number of neurons. Here, we propose a systematic procedure for pre- and postprocessing generic point-process data in an objective manner, so that the data can be handled within the framework of a simple binary statistical model, the Ising or generalized McCulloch–Pitts model. The procedure has two steps: (1) determining a time bin size for transforming the point-process data into discrete-time binary data and (2) screening relevant couplings from the estimated couplings. For the first step, we choose the optimal time bin size by introducing the null hypothesis that all neurons fire independently, then selecting a bin size for which the null hypothesis is rejected under a strict criterion. The likelihood associated with the null hypothesis is analytically evaluated and used in the rejection process. For the second, postprocessing step, after an estimate of the couplings is obtained from the preprocessed data set (any estimator can be used with the proposed procedure), the estimate is compared with many other estimates derived from data sets obtained by randomizing the original data set in the time direction. We accept the original estimate as relevant only if its absolute value is sufficiently larger than those from the randomized data sets. These manipulations suppress false-positive couplings induced by statistical noise. We apply this inference procedure to spiking data from synthetic and in vitro neuronal networks. The results show that the proposed procedure identifies the presence or absence of synaptic couplings fairly well, including their signs, for both the synthetic and the experimental data. In particular, the results support that we can infer the physical connections of the underlying systems in favorable situations, even when using a simple statistical model.
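The two steps described in the abstract (binning point-process data into binary form, then screening couplings against time-randomized surrogates) can be sketched roughly as follows. This is an illustrative sketch, not the paper's exact procedure: it omits the likelihood-based bin-size test, uses independent time permutations per neuron as the randomization, and plugs in a plain covariance estimator as a stand-in for whatever coupling estimator one prefers.

```python
import numpy as np

def bin_spikes(spike_times, n_neurons, t_end, bin_size):
    """Convert point-process data (a list of spike-time arrays, one per
    neuron) into a binary (time-bins x neurons) matrix: entry is 1 if
    the neuron fired at least once in that bin, else 0."""
    n_bins = int(np.ceil(t_end / bin_size))
    S = np.zeros((n_bins, n_neurons), dtype=int)
    for i, times in enumerate(spike_times):
        idx = np.minimum((np.asarray(times) / bin_size).astype(int), n_bins - 1)
        S[idx, i] = 1
    return S

def screen_couplings(S, estimator, n_shuffles=100, alpha=0.01, seed=0):
    """Keep an estimated coupling only if its magnitude exceeds the
    (1 - alpha) quantile of magnitudes obtained from surrogate data
    sets in which each neuron's binary time series is independently
    permuted in time, which destroys couplings while preserving
    single-neuron firing statistics."""
    rng = np.random.default_rng(seed)
    J = estimator(S)
    null = np.empty((n_shuffles,) + J.shape)
    for k in range(n_shuffles):
        S_sh = np.column_stack(
            [rng.permutation(S[:, i]) for i in range(S.shape[1])]
        )
        null[k] = estimator(S_sh)
    thresh = np.quantile(np.abs(null), 1 - alpha, axis=0)
    return np.where(np.abs(J) > thresh, J, 0.0)
```

For example, `screen_couplings(S, lambda S: np.cov(S.T))` screens an equal-time covariance estimate; any other estimator with the same signature (for instance, a pseudolikelihood fit of the McCulloch–Pitts model) could be substituted.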
Neural Computation (1995) 7 (1): 158–172.
Published: 01 January 1995
Learning a Decision Boundary from Stochastic Examples: Incremental Algorithms with and without Queries
Abstract
Even if it is not possible to reproduce a target input-output relation, a learning machine should be able to minimize the probability of making errors. A practical learning algorithm should also be simple enough to go without memorizing example data, if possible. Incremental algorithms such as error backpropagation satisfy this requirement. We propose incremental algorithms that provide fast convergence of the machine parameter θ to its optimal choice θ_o with respect to the number of examples t. We consider the binary choice model whose target relation has a blurred boundary, and a machine whose parameter θ specifies a decision boundary used to predict the output. The question we address here is how fast θ can approach θ_o, depending on whether, in the learning stage, the machine can specify inputs as queries to the target relation or the inputs are drawn from a certain distribution. If queries are permitted, the machine can achieve the fastest convergence, (θ - θ_o)^2 ~ O(t^-1). If not, O(t^-1) convergence is generally not attainable. For learning without queries, we showed in a previous paper that the error-minimum algorithm exhibits slow convergence, (θ - θ_o)^2 ~ O(t^-2/3). We propose here a practical algorithm that provides rather fast convergence, O(t^-4/5). It is possible to accelerate the convergence further by using more elaborate algorithms; the fastest convergence turned out to be O[(ln t)^2 t^-1]. This scaling is considered optimal among possible algorithms, and is not due to the incremental nature of our algorithm.
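The query setting can be illustrated with a minimal stochastic-approximation sketch (an assumption for illustration, not the paper's actual algorithms): the learner queries the target at its current boundary estimate and nudges the estimate against the observed noisy label with a 1/t step size. The sigmoid target, the boundary location theta_o = 0.3, and the blur width are hypothetical choices made here.

```python
import numpy as np

rng = np.random.default_rng(1)

def noisy_label(x, theta_o=0.3, width=0.5):
    """Stochastic target with a blurred decision boundary:
    P(y = +1 | x) is a sigmoid centered at theta_o with scale `width`."""
    p = 1.0 / (1.0 + np.exp(-(x - theta_o) / width))
    return 1 if rng.random() < p else -1

def learn_with_queries(n_steps, theta0=0.0, gain=1.0):
    """Robbins-Monro style incremental learner: query the target at the
    current boundary estimate theta and move theta against the observed
    label y with step size gain/t, so theta settles where P(y=+1)=1/2,
    that is, at theta_o."""
    theta = theta0
    for t in range(1, n_steps + 1):
        y = noisy_label(theta)
        theta -= (gain / t) * y
    return theta
```

With a 1/t step size and a sufficiently large gain relative to the slope of the regression function at the root, this kind of query-based scheme attains the (θ - θ_o)^2 ~ O(t^-1) rate that the abstract identifies as the fastest achievable; without queries the learner only sees inputs drawn from a fixed distribution, which is what slows the attainable rates.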