Takuya Isomura
Journal Articles
Publisher: Journals Gateway
Neural Computation (2021) 33 (6): 1433–1468.
Published: 13 May 2021
Abstract
For many years, a combination of principal component analysis (PCA) and independent component analysis (ICA) has been used for blind source separation (BSS). However, it remains unclear why these linear methods work well with real-world data that involve nonlinear source mixtures. This work theoretically validates that a cascade of linear PCA and ICA can solve a nonlinear BSS problem accurately—when the sensory inputs are generated from hidden sources via nonlinear mappings with sufficient dimensionality. Our proposed theorem, termed the asymptotic linearization theorem, theoretically guarantees that applying linear PCA to the inputs can reliably extract a subspace spanned by the linear projections from every hidden source as the major components—and thus projecting the inputs onto their major eigenspace can effectively recover a linear transformation of the hidden sources. Then subsequent application of linear ICA can separate all the true independent hidden sources accurately. Zero-element-wise-error nonlinear BSS is asymptotically attained when the source dimensionality is large and the input dimensionality is sufficiently larger than the source dimensionality. Our proposed theorem is validated analytically and numerically. Moreover, the same computation can be performed by using Hebbian-like plasticity rules, implying the biological plausibility of this nonlinear BSS strategy. Our results highlight the utility of linear PCA and ICA for accurately and reliably recovering nonlinearly mixed sources and suggest the importance of employing sensors with sufficient dimensionality to identify true hidden sources of real-world data.
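The cascade described in this abstract (linear PCA to project the inputs onto their major eigenspace, followed by linear ICA to separate the sources) can be sketched in plain numpy. This is an illustrative toy, not the paper's setup: the tanh mixture, the dimensions, and the hand-rolled symmetric FastICA loop are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
Ns, Nx, T = 2, 100, 5000              # source dim, input dim, samples

# Independent hidden sources and a nonlinear mixture of them
s = rng.uniform(-1, 1, (Ns, T))
A = rng.standard_normal((Nx, Ns))
x = np.tanh(A @ s) + 0.01 * rng.standard_normal((Nx, T))

# Step 1: linear PCA -- project inputs onto their major eigenspace
xc = x - x.mean(axis=1, keepdims=True)
eigval, eigvec = np.linalg.eigh(xc @ xc.T / T)
u = eigvec[:, -Ns:].T @ xc            # ~ a linear transform of the sources

# Step 2: whiten, then linear ICA (symmetric FastICA with a tanh contrast)
d, E = np.linalg.eigh(u @ u.T / T)
z = np.diag(d ** -0.5) @ E.T @ u
W = rng.standard_normal((Ns, Ns))
for _ in range(200):
    g = np.tanh(W @ z)
    W = (g @ z.T) / T - np.diag((1 - g ** 2).mean(axis=1)) @ W
    U_, _, Vt = np.linalg.svd(W)
    W = U_ @ Vt                       # symmetric decorrelation
y = W @ z

# Each recovered component should match one true source up to sign/scale
C = np.abs(np.corrcoef(np.vstack([y, s]))[:Ns, Ns:])
```

In this run each recovered component correlates strongly with exactly one true source, consistent with the abstract's claim that the linear cascade separates nonlinearly mixed sources when the input dimensionality greatly exceeds the source dimensionality.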
Neural Computation (2020) 32 (11): 2187–2211.
Published: 01 November 2020
Abstract
Recent remarkable advances in experimental techniques have provided a background for inferring neuronal couplings from point process data that include a large number of neurons. Here, we propose a systematic procedure for pre- and postprocessing generic point process data in an objective manner, so that the data can be handled within the framework of a simple binary statistical model: the Ising or generalized McCulloch–Pitts model. The procedure has two steps: (1) determining the time bin size for transforming the point process data into discrete-time binary data and (2) screening relevant couplings from the estimated couplings. For the first step, we determine the optimal time bin size by introducing the null hypothesis that all neurons fire independently and then choosing a time bin size such that the null hypothesis is rejected under strict criteria. The likelihood associated with the null hypothesis is evaluated analytically and used in the rejection process. For the second, postprocessing step, after a coupling estimate is obtained from the preprocessed data set (any estimator can be used with the proposed procedure), the estimate is compared with many other estimates derived from data sets obtained by randomizing the original data set in the time direction. We accept the original estimate as relevant only if its absolute value is sufficiently larger than those of the randomized data sets. These manipulations suppress false-positive couplings induced by statistical noise. We apply this inference procedure to spiking data from synthetic and in vitro neuronal networks. The results show that the proposed procedure identifies the presence or absence of synaptic couplings fairly well, including their signs, for both the synthetic and the experimental data. In particular, the results indicate that we can infer the physical connections of the underlying systems in favorable situations, even when using a simple statistical model.
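The two-step procedure (binning point processes into binary data, then screening couplings against time-randomized surrogates) can be illustrated with a toy stand-in. The Poisson spike data and the covariance estimator below are illustrative assumptions, not the paper's data or estimator; any coupling estimator could take the covariance's place.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy point-process data: three neurons with Poisson background firing;
# neuron 1 also fires 2 ms after neuron 0, giving one true coupling
N, T = 3, 200.0
spikes = [np.sort(rng.uniform(0, T, 400)) for _ in range(N)]
spikes[1] = np.sort(np.concatenate([spikes[1], spikes[0] + 0.002]))

def binarize(spikes, dt, T):
    """Step 1: transform point processes into discrete-time binary data."""
    nbins = int(T / dt)
    S = np.zeros((len(spikes), nbins))
    for i, t in enumerate(spikes):
        S[i, np.minimum((t / dt).astype(int), nbins - 1)] = 1
    return S

def screen(S, n_shuffle=200, q=0.99):
    """Step 2: keep only couplings exceeding the shuffled-data quantile."""
    est = np.cov(S)                       # stand-in coupling estimator
    null = np.empty((n_shuffle,) + est.shape)
    for k in range(n_shuffle):
        Sr = np.array([rng.permutation(row) for row in S])  # randomize time
        null[k] = np.cov(Sr)
    thr = np.quantile(np.abs(null), q, axis=0)
    return np.where(np.abs(est) > thr, est, 0.0)

J = screen(binarize(spikes, dt=0.005, T=T))
```

The true 0→1 coupling survives the screening, while entries whose estimates are indistinguishable from the time-randomized surrogates are zeroed out, suppressing false positives from statistical noise.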
Neural Computation (2020) 32 (11): 2085–2121.
Published: 01 November 2020
Abstract
This letter considers a class of biologically plausible cost functions for neural networks, where the same cost function is minimized by both neural activity and plasticity. We show that such cost functions can be cast as a variational bound on model evidence under an implicit generative model. Using generative models based on partially observed Markov decision processes (POMDP), we show that neural activity and plasticity perform Bayesian inference and learning, respectively, by maximizing model evidence. Using mathematical and numerical analyses, we establish the formal equivalence between neural network cost functions and variational free energy under some prior beliefs about latent states that generate inputs. These prior beliefs are determined by particular constants (e.g., thresholds) that define the cost function. This means that the Bayes optimal encoding of latent or hidden states is achieved when the network's implicit priors match the process that generates its inputs. This equivalence is potentially important because it suggests that any hyperparameter of a neural network can itself be optimized—by minimization with respect to variational free energy. Furthermore, it enables one to characterize a neural network formally, in terms of its prior beliefs.
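The central quantity here, variational free energy as a bound on model evidence, can be made concrete with a minimal discrete example. The two-state, two-outcome generative model below is an illustrative assumption, not one of the letter's POMDP models.

```python
import numpy as np

# Two hidden states, two outcomes: prior p(s) and likelihood p(o|s)
prior = np.array([0.5, 0.5])
A = np.array([[0.9, 0.2],
              [0.1, 0.8]])
o = 0                                   # observed outcome

def free_energy(q):
    """Variational free energy F = E_q[ln q(s) - ln p(o, s)]."""
    joint = A[o] * prior                # p(o, s) for the observed o
    return np.sum(q * (np.log(q) - np.log(joint)))

# The exact posterior minimizes F, attaining -ln p(o): the bound on
# model evidence becomes tight, so minimizing F maximizes evidence
post = A[o] * prior / (A[o] * prior).sum()
F_post = free_energy(post)
F_flat = free_energy(np.array([0.5, 0.5]))
```

Here F evaluated at the exact posterior equals the negative log evidence, and any other belief (such as the flat one) yields a strictly larger F, which is the sense in which activity and plasticity that descend on such a cost function perform Bayesian inference and learning.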
Neural Computation (2019) 31 (12): 2390–2431.
Published: 01 December 2019
Abstract
To exhibit social intelligence, animals have to recognize whom they are communicating with. One way to make this inference is to select among internal generative models of each conspecific who may be encountered. However, these models also have to be learned via some form of Bayesian belief updating. This induces an interesting problem: When receiving sensory input generated by a particular conspecific, how does an animal know which internal model to update? We consider a theoretical and neurobiologically plausible solution that enables inference and learning of the processes that generate sensory inputs (e.g., listening and understanding) and reproduction of those inputs (e.g., talking or singing), under multiple generative models. This is based on recent advances in theoretical neurobiology—namely, active inference and post hoc (online) Bayesian model selection. In brief, this scheme fits sensory inputs under each generative model. Model parameters are then updated in proportion to the probability that each model could have generated the input (i.e., model evidence). The proposed scheme is demonstrated using a series of (real zebra finch) birdsongs, where each song is generated by several different birds. The scheme is implemented using physiologically plausible models of birdsong production. We show that generalized Bayesian filtering, combined with model selection, leads to successful learning across generative models, each possessing different parameters. These results highlight the utility of having multiple internal models when making inferences in social environments with multiple sources of sensory information.
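The core of the scheme (fit the input under each internal model, then scale each model's parameter update by the probability that it generated the input) can be sketched with toy Gaussian "song" models. The one-dimensional features, unit variances, and learning rate are illustrative assumptions standing in for the physiologically plausible birdsong models used in the paper.

```python
import numpy as np

rng = np.random.default_rng(2)
mu = np.array([0.0, 5.0])    # each model's parameter estimate (song "pitch")
lr = 0.1

def update(mu, x):
    """Scale each model's learning by the evidence that it generated x."""
    loglik = -0.5 * (x - mu) ** 2           # unit-variance Gaussian models
    w = np.exp(loglik - loglik.max())
    w /= w.sum()                            # posterior probability per model
    return mu + lr * w * (x - mu)           # evidence-weighted updates

# Interleaved songs from two "birds" with true means 1 and 4
for x in rng.normal([1.0, 4.0], 0.3, size=(300, 2)).ravel():
    mu = update(mu, x)
```

Because each update is weighted by model evidence, each internal model learns only from the inputs it best explains, so the two models converge to the two different "birds" rather than averaging over both.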
Includes: Multimedia, Supplementary data
Neural Computation (2016) 28 (9): 1859–1888.
Published: 01 September 2016
Abstract
The free-energy principle is a candidate unified theory for learning and memory in the brain that predicts that neurons, synapses, and neuromodulators work in a manner that minimizes free energy. However, electrophysiological data elucidating the neural and synaptic bases for this theory are lacking. Here, we propose a novel theory bridging the information-theoretical principle with the biological phenomenon of spike-timing dependent plasticity (STDP) regulated by neuromodulators, which we term mSTDP. We propose that by integrating an mSTDP equation, we can obtain a form of Friston’s free energy (an information-theoretical function). Then we analytically and numerically show that dopamine (DA) and noradrenaline (NA) influence the accuracy of a principal component analysis (PCA) performed using the mSTDP algorithm. From the perspective of free-energy minimization, these neuromodulatory changes alter the relative weighting or precision of accuracy and prior terms, which induces a switch from pattern completion to separation. These results are consistent with electrophysiological findings and validate the free-energy principle and mSTDP. Moreover, our scheme can potentially be applied in computational psychiatry to build models of the faulty neural networks that underlie the positive symptoms of schizophrenia, which involve abnormal DA levels, as well as models of the NA contribution to memory triage and posttraumatic stress disorder.
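The idea that a neuromodulator-gated Hebbian rule can perform PCA can be illustrated with Oja's rule, used here as a classical stand-in for the mSTDP algorithm (the scalar gain `mod` is a placeholder for the DA/NA modulation of plasticity; the data and learning rate are likewise assumptions for demonstration).

```python
import numpy as np

rng = np.random.default_rng(3)
T, N = 5000, 10

# Inputs with one dominant direction (the principal component to recover)
pc = rng.standard_normal(N)
pc /= np.linalg.norm(pc)
X = np.outer(3.0 * rng.standard_normal(T), pc) + rng.standard_normal((T, N))

w = 0.1 * rng.standard_normal(N)
mod = 1.0                    # placeholder neuromodulatory gain on plasticity
for x in X:
    y = w @ x                # postsynaptic activity
    w += mod * 0.001 * y * (x - y * w)   # Oja's rule: Hebbian term + decay
```

The weight vector converges to a unit vector aligned with the principal component; scaling `mod` changes how quickly and how precisely this happens, which loosely mirrors the abstract's point that neuromodulators alter the precision weighting of the learned representation.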
Accurate Connection Strength Estimation Based on Variational Bayes for Detecting Synaptic Plasticity
Neural Computation (2015) 27 (4): 819–844.
Published: 01 April 2015
Abstract
Connection strength estimation is widely used in detecting the topology of neuronal networks and assessing their synaptic plasticity. A recently proposed model-based method using the leaky integrate-and-fire model neuron estimates membrane potential from spike trains by calculating the maximum a posteriori (MAP) path. We further enhance the MAP path method using variational Bayes and dynamic causal modeling. Several simulations demonstrate that the proposed method can accurately estimate connection strengths with an error ratio of less than 20%. The results suggest that the proposed method can be an effective tool for detecting network structure and synaptic plasticity.
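As a toy stand-in for the variational Bayes step (the paper's method operates on the MAP path of a leaky integrate-and-fire model, which is far richer), conjugate Bayesian linear regression shows how a closed-form posterior over a single connection strength arises from presynaptic input and noisy postsynaptic responses. The Gaussian observation model and the N(0, 1) prior are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)

# Postsynaptic response y = w_true * presynaptic input + noise
w_true, sigma, n = 0.8, 0.5, 500
x = rng.standard_normal(n)
y = w_true * x + sigma * rng.standard_normal(n)

# Gaussian prior w ~ N(0, 1) gives a closed-form Gaussian posterior
post_prec = 1.0 + (x @ x) / sigma ** 2
post_mean = ((x @ y) / sigma ** 2) / post_prec
```

With this much data the posterior mean lands well within the 20% error ratio quoted in the abstract, and the posterior precision quantifies the confidence in the estimate, which is what makes a Bayesian treatment useful for detecting changes in connection strength over time.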