Hiroyuki Nakahara
1-11 of 11
Journal Articles
Publisher: Journals Gateway
Neural Computation (2010) 22 (12): 3062–3106.
Published: 01 December 2010
Abstract
The temporal difference (TD) learning framework is a major paradigm for understanding value-based decision making and related neural activities (e.g., dopamine activity). The representation of time in neural processes modeled by a TD framework, however, is poorly understood. To address this issue, we propose a TD formulation that separates the time of the operator (neural valuation processes), which we refer to as internal time, from the time of the observer (experiment), which we refer to as conventional time. We provide the formulation and theoretical characteristics of this TD model based on internal time, called internal-time TD, and explore the possible consequences of using this model in neural value-based decision making. Because the two times are separated, internal-time TD computations, such as the TD error, are expressed differently depending on both the time frame and the time unit. We examine this operator-observer problem in relation to the time representation used in previous TD models. An internal-time TD value function exhibits the co-appearance of exponential and hyperbolic discounting at different delays in intertemporal choice tasks. We further examine the effects of internal-time noise on the TD error, the dynamic construction of internal time, and the modulation of internal time in relation to the internal-time hypothesis of serotonin function. We also relate the internal-time TD formulation to research on interval timing and subjective time.
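A minimal numerical sketch of the discounting relationship described above, assuming a logarithmic internal-time mapping (an illustrative choice, not the paper's general formulation): discounting that is exponential in internal time becomes hyperbolic-like when read in conventional time.

```python
import numpy as np

# Conventional (experimenter) time of a delayed reward, in arbitrary units.
t = np.linspace(0.0, 30.0, 301)

# Hypothetical internal-time mapping: logarithmic compression of conventional time.
def internal_time(t, alpha=1.0):
    return np.log(1.0 + alpha * t) / alpha

# Discounting that is exponential in internal time ...
k = 0.8
v_internal = np.exp(-k * internal_time(t))

# ... equals (1 + t)^(-k), i.e., hyperbolic-like in conventional time.
v_hyperbolic_form = (1.0 + t) ** (-k)

# Pure exponential discounting in conventional time, for comparison.
v_exponential = np.exp(-k * t)

assert np.allclose(v_internal, v_hyperbolic_form)
print(v_internal[:5], v_exponential[:5])
```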
Journal Articles
Publisher: Journals Gateway
Neural Computation (2006) 18 (6): 1259–1267.
Published: 01 June 2006
Abstract
The decoding scheme for a stimulus can differ from the stochastic encoding scheme in neural population coding. The stochastic fluctuations are in general not independent, but an independent version may be used for ease of decoding. How much information is lost by using this unfaithful model for decoding? There have been discussions concerning this loss of information (Nirenberg & Latham, 2003; Schneidman, Bialek, & Berry, 2003). We elucidate the Nirenberg-Latham loss from the point of view of information geometry.
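A toy numerical sketch of the kind of quantity at issue, assuming the loss is measured as the response-averaged Kullback-Leibler divergence between the true posterior over stimuli and the posterior obtained from an independence-assuming ("unfaithful") encoding model. The two-neuron example and its probabilities are made up for illustration.

```python
import itertools
import numpy as np

# Toy setup: 2 equiprobable stimuli, 2 binary neurons with correlated noise.
p_s = np.array([0.5, 0.5])
responses = list(itertools.product([0, 1], repeat=2))  # (r1, r2)

# True joint response distributions p(r | s), chosen so the neurons are correlated.
p_r_given_s = {
    0: {(0, 0): 0.5, (0, 1): 0.1, (1, 0): 0.1, (1, 1): 0.3},
    1: {(0, 0): 0.1, (0, 1): 0.3, (1, 0): 0.3, (1, 1): 0.3},
}

def marginal(p_joint, neuron, value):
    # p(r_i = value | s) from the joint table.
    return sum(p for r, p in p_joint.items() if r[neuron] == value)

def posterior(r, model):
    # Posterior p(s | r) under the true model or the independence-assuming model.
    if model == "true":
        lik = np.array([p_r_given_s[s][r] for s in (0, 1)])
    else:  # "unfaithful" model: product of single-neuron marginals
        lik = np.array([marginal(p_r_given_s[s], 0, r[0]) *
                        marginal(p_r_given_s[s], 1, r[1]) for s in (0, 1)])
    post = lik * p_s
    return post / post.sum()

# Response-averaged KL divergence between the two posteriors.
p_r = {r: sum(p_s[s] * p_r_given_s[s][r] for s in (0, 1)) for r in responses}
delta_I = sum(
    p_r[r] * np.sum(posterior(r, "true") *
                    np.log2(posterior(r, "true") / posterior(r, "ind")))
    for r in responses
)
print(f"Nirenberg-Latham-style loss: {delta_I:.4f} bits")
```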
Journal Articles
Publisher: Journals Gateway
Neural Computation (2006) 18 (3): 545–568.
Published: 01 March 2006
Abstract
In examining spike trains, different models are used to describe their structure. The different models often seem quite similar, but because they are cast in different formalisms, it is often difficult to compare their predictions. Here we use the information-geometric measure, an orthogonal coordinate representation of point processes, to express different models of stochastic point processes in a common coordinate system. Within such a framework, it becomes straightforward to visualize higher-order correlations of different models and thereby assess the differences between models. We apply the information-geometric measure to compare two similar but not identical models of neuronal spike trains: the inhomogeneous Markov and the mixture of Poisson models. It is shown that they differ in the second- and higher-order interaction terms. In the mixture of Poisson model, the second- and higher-order interactions are of comparable magnitude within each order, whereas in the inhomogeneous Markov model, they have alternating signs over different orders. This provides guidance about what measurements would effectively separate the two models. As newer models are proposed, they can also be compared to these models using information geometry.
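A rough sketch of the kind of comparison described, expressing two toy spike-pattern distributions in log-linear (theta) coordinates obtained by Moebius inversion. The three-bin stand-ins (a mixture of two independent Bernoulli processes for the mixture-of-Poisson-like model, and a chain with time-dependent transition probabilities for the inhomogeneous Markov model) and all parameter values are illustrative, not the paper's.

```python
import itertools
from math import log, prod

def theta_coords(p, n):
    # Log-linear (theta) coordinates of a strictly positive distribution over binary
    # patterns, via Moebius inversion: theta_S = sum_{T subset S} (-1)^{|S|-|T|} log p(1_T).
    def pattern(subset):
        return tuple(1 if i in subset else 0 for i in range(n))
    thetas = {}
    for order in range(1, n + 1):
        for S in itertools.combinations(range(n), order):
            thetas[S] = sum(
                (-1) ** (order - k) * log(p[pattern(T)])
                for k in range(order + 1)
                for T in itertools.combinations(S, k)
            )
    return thetas

n = 3
patterns = list(itertools.product([0, 1], repeat=n))

# Toy stand-in for a mixture of Poisson processes: two independent Bernoulli processes.
weights, q = [0.5, 0.5], [0.2, 0.7]
p_mix = {x: sum(w * prod(r if xi else 1 - r for xi in x)
                for w, r in zip(weights, q)) for x in patterns}

# Toy inhomogeneous Markov chain with time-dependent transition probabilities.
p_init = {0: 0.7, 1: 0.3}
trans = [{0: 0.2, 1: 0.6}, {0: 0.4, 1: 0.8}]   # P(x_t = 1 | x_{t-1}) for t = 2, 3
p_markov = {}
for x in patterns:
    p = p_init[x[0]]
    for t in range(1, n):
        p1 = trans[t - 1][x[t - 1]]
        p *= p1 if x[t] == 1 else 1.0 - p1
    p_markov[x] = p

for name, dist in [("mixture of Bernoulli", p_mix), ("inhomogeneous Markov", p_markov)]:
    print(name)
    th = theta_coords(dist, n)
    for S in sorted(th, key=lambda s: (len(s), s)):
        print(f"  theta_{''.join(map(str, S))} = {th[S]:+.3f}")
```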
Journal Articles
Publisher: Journals Gateway
Neural Computation (2005) 17 (4): 839–858.
Published: 01 April 2005
Abstract
Fisher information has been used to analyze the accuracy of neural population coding. This works well when the Fisher information does not degenerate, but when two stimuli are presented to a population of neurons, a singular structure emerges from their mutual interaction. In this case, the Fisher information matrix degenerates, and the regularity condition ensuring the Cramér-Rao paradigm of statistics is violated. An animal shows pathological behavior in such a situation. We present a novel method of statistical analysis to understand information in population coding in which algebraic singularity plays a major role. The method elucidates the nature of the pathological case by calculating the Fisher information. We then suggest that synchronous firing can resolve the singularity and show a method of analyzing the binding problem in terms of the Fisher information. Our method integrates a variety of disciplines in population coding, such as nonregular statistics, Bayesian statistics, singularity in algebraic geometry, and synchronous firing, under the theme of Fisher information.
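A numerical sketch of the degeneracy being described, assuming independent Poisson-spiking neurons whose mean rates sum the Gaussian-tuned responses to the two stimuli (an illustrative encoding model with arbitrary parameters). For such neurons J_jk(s) = sum_i (df_i/ds_j)(df_i/ds_k)/f_i, and when the two stimuli coincide, the two columns of the response Jacobian become identical and the matrix is singular.

```python
import numpy as np

centers = np.linspace(-10, 10, 41)      # preferred stimuli of the population
sigma, gain, base = 1.5, 10.0, 0.5      # tuning width, height, baseline (arbitrary)

def rates(s1, s2):
    # Mean firing rates to a pair of stimuli (additive superposition; an assumption).
    g = lambda s: gain * np.exp(-(s - centers) ** 2 / (2 * sigma ** 2))
    return base + g(s1) + g(s2)

def fisher_matrix(s1, s2, eps=1e-4):
    # Fisher information matrix for independent Poisson neurons,
    # J_jk = sum_i (df_i/ds_j)(df_i/ds_k) / f_i, with numerical derivatives.
    f = rates(s1, s2)
    d1 = (rates(s1 + eps, s2) - rates(s1 - eps, s2)) / (2 * eps)
    d2 = (rates(s1, s2 + eps) - rates(s1, s2 - eps)) / (2 * eps)
    D = np.stack([d1, d2], axis=1)
    return D.T @ (D / f[:, None])

for gap in (3.0, 1.0, 0.0):
    J = fisher_matrix(-gap / 2, gap / 2)
    print(f"gap={gap:3.1f}  det(J)={np.linalg.det(J):9.3f}  eigs={np.linalg.eigvalsh(J)}")
```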
Journal Articles
Publisher: Journals Gateway
Neural Computation (2003) 15 (1): 127–142.
Published: 01 January 2003
Abstract
The stochastic mechanism of synchronous firing in a population of neurons is studied from the point of view of information geometry. Higher-order interactions of neurons, which cannot be reduced to pairwise correlations, are proved to exist in synchronous firing. In a neuron pool where each neuron fires stochastically, we study the probability distribution q(r) of the activity r, the fraction of firing neurons in the pool. When q(r) has a widespread distribution, and in particular when q(r) has two peaks, the neurons fire synchronously at some times and are quiescent at others. The mechanism generating such a probability distribution is of interest because, when each neuron fires independently, the activity r concentrates on its mean value by the law of large numbers. Even when pairwise or third-order interactions exist, this concentration is not resolved. This shows that higher-order interactions are necessary to generate widespread activity distributions. We analyze a simple model in which neurons receive common overlapping inputs and prove that such a model can have a widespread distribution of activity, generating higher-order stochastic interactions.
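A rough simulation sketch of the common-input mechanism described (simplified to a single shared input rather than overlapping inputs, with made-up parameters): each neuron fires when shared plus private Gaussian input crosses a threshold, and the spread of the population activity r is compared with that of independently firing neurons matched in mean rate.

```python
import numpy as np

rng = np.random.default_rng(0)
n_neurons, n_trials = 200, 5000
threshold = 1.0

# Common-input model: one shared input drives all neurons on each trial (illustrative).
common = rng.normal(0.0, 1.0, size=(n_trials, 1))
private = rng.normal(0.0, 1.0, size=(n_trials, n_neurons))
spikes_common = (0.8 * common + 0.6 * private > threshold)
r_common = spikes_common.mean(axis=1)          # fraction of firing neurons per trial

# Independent model matched in mean firing probability, for comparison.
p_fire = spikes_common.mean()
spikes_indep = rng.random((n_trials, n_neurons)) < p_fire
r_indep = spikes_indep.mean(axis=1)

# With common input, r stays widespread; with independence, it concentrates on its mean.
print("common input : mean %.3f  std %.3f" % (r_common.mean(), r_common.std()))
print("independent  : mean %.3f  std %.3f" % (r_indep.mean(), r_indep.std()))
```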
Journal Articles
Publisher: Journals Gateway
Neural Computation (2002) 14 (10): 2269–2316.
Published: 01 October 2002
Abstract
This study introduces information-geometric measures to analyze neural firing patterns by taking not only the second-order but also higher-order interactions among neurons into account. Information geometry provides useful tools and concepts for this purpose, including the orthogonality of coordinate parameters and the Pythagoras relation in the Kullback-Leibler divergence. Based on this orthogonality, we present a novel method for analyzing spike firing patterns by decomposing neuronal interactions into their various orders. As a result, purely pairwise, triple-wise, and higher-order interactions are singled out. We also demonstrate the benefits of our proposal with several examples.
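A small sketch of why orders must be separated: for three binary neurons, a "softened parity" distribution (an illustrative example, not from the paper) has pairwise statistics indistinguishable from independence, yet its third-order log-linear coefficient, computed here by the same Moebius inversion as in the earlier sketch, is clearly nonzero.

```python
import itertools
from math import log

# Softened parity distribution over three binary neurons (illustrative):
# p(x) = (1 + a * (-1)**(x1 + x2 + x3)) / 8.
a = 0.6
patterns = list(itertools.product([0, 1], repeat=3))
p = {x: (1 + a * (-1) ** sum(x)) / 8 for x in patterns}

# Pairwise covariances vanish: second-order statistics cannot see the structure.
def mean(i):
    return sum(pr for x, pr in p.items() if x[i] == 1)
def cov(i, j):
    return sum(pr for x, pr in p.items() if x[i] == 1 and x[j] == 1) - mean(i) * mean(j)
print("pairwise covariances:", [round(cov(i, j), 6) for i, j in [(0, 1), (0, 2), (1, 2)]])

# Third-order log-linear (theta) coordinate via Moebius inversion over subsets of {0,1,2}.
theta_123 = 0.0
for k in range(4):
    for T in itertools.combinations(range(3), k):
        x = tuple(1 if i in T else 0 for i in range(3))
        theta_123 += (-1) ** (3 - k) * log(p[x])
print("triple-wise theta_123:", round(theta_123, 4))
```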
Journal Articles
Publisher: Journals Gateway
Neural Computation (2002) 14 (5): 999–1026.
Published: 01 May 2002
Abstract
This study uses a neural field model to investigate computational aspects of population coding and decoding when the stimulus is a single variable. A general prototype model for the encoding process is proposed, in which neural responses are correlated, with strength specified by a gaussian function of their difference in preferred stimuli. Based on the model, we study the effect of correlation on the Fisher information, compare the performances of three decoding methods that differ in the amount of encoding information being used, and investigate the implementation of the three methods by a recurrent network. This study not only rediscovers the main results in the existing literature in a unified way but also reveals important new features, especially when the neural correlation is strong. As the neural correlation of firing becomes larger, the Fisher information decreases drastically. We confirm that as the width of correlation increases, the Fisher information saturates and no longer increases in proportion to the number of neurons. However, we prove that as the width increases further (wider than √2 times the effective width of the tuning function), the Fisher information increases again, and it increases without limit in proportion to the number of neurons. Furthermore, we clarify the asymptotic efficiency of the maximum likelihood inference (MLI) type of decoding methods for correlated neural signals. We show that when the correlation covers a nonlocal range of the population (except for uniform correlation or extremely small noise), the MLI type of method, whose decoding error follows a Cauchy-type distribution, is not asymptotically efficient. This implies that the variance is no longer adequate to measure decoding accuracy.
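A sketch of the saturation effect described, under the simplifying assumption of additive Gaussian noise with stimulus-independent covariance, for which J(s) = f'(s)^T C^{-1} f'(s). The Gaussian tuning curves and the exponentially decaying ("limited-range") correlation are illustrative parameter choices, not the paper's neural field model.

```python
import numpy as np

def fisher_info(n_neurons, corr_width, stimulus=0.0, tuning_width=1.0):
    # Preferred stimuli tile the range; Gaussian tuning curves of unit height.
    centers = np.linspace(-5.0, 5.0, n_neurons)
    d = stimulus - centers
    fprime = -d / tuning_width ** 2 * np.exp(-d ** 2 / (2 * tuning_width ** 2))
    # Limited-range correlation: c_ij = exp(-|c_i - c_j| / corr_width), unit variances.
    C = np.exp(-np.abs(centers[:, None] - centers[None, :]) / corr_width)
    # Gaussian noise with stimulus-independent covariance: J(s) = f'(s)^T C^{-1} f'(s).
    return fprime @ np.linalg.solve(C, fprime)

# Fisher information as the population grows, with a fixed limited-range correlation.
for n in (50, 100, 200, 400):
    print(f"N = {n:4d}   J = {fisher_info(n, corr_width=2.0):.2f}")
```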
Journal Articles
Publisher: Journals Gateway
Neural Computation (2002) 14 (4): 819–844.
Published: 01 April 2002
Abstract
Self-organization is one of the fundamental brain computations for forming efficient representations of information. Experimental support for this idea has been largely limited to the developmental and reorganizational formation of neural circuits in the sensory cortices. We now propose that self-organization may also play an important role in short-term synaptic changes in reward-driven voluntary behaviors. It has recently been shown that many neurons in the basal ganglia change their sensory responses flexibly in relation to rewards. Our computational model proposes that the rapid changes in striatal projection neurons depend on the subtle balance between Hebb-type mechanisms of excitation and inhibition, which are modulated by reinforcement signals. Simulations based on the model are shown to produce various types of neural activity similar to those found in experiments.
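The abstract does not spell out the model, so the following is only a generic, heavily simplified sketch of a reinforcement-modulated Hebb-type update for a striatal-projection-neuron-like unit with separate excitatory and inhibitory weights, to make the kind of mechanism concrete. All variable names, constants, and update rules here are hypothetical and are not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(1)
n_inputs, eta = 20, 0.05
w_exc = rng.uniform(0.0, 0.1, n_inputs)   # excitatory weights (hypothetical)
w_inh = rng.uniform(0.0, 0.1, n_inputs)   # inhibitory weights (hypothetical)

rewarded_stim = (rng.random(n_inputs) < 0.3).astype(float)  # stimulus paired with reward

for trial in range(200):
    paired = trial % 2 == 0
    x = rewarded_stim if paired else (rng.random(n_inputs) < 0.3).astype(float)
    reinforcement = 1.0 if paired else 0.0
    y = max(0.0, w_exc @ x - w_inh @ x)       # rectified net drive of the unit
    # Hebb-type updates whose excitation/inhibition balance is set by reinforcement.
    w_exc += eta * reinforcement * y * x
    w_inh += eta * (1.0 - reinforcement) * y * x
    np.clip(w_exc, 0.0, 1.0, out=w_exc)
    np.clip(w_inh, 0.0, 1.0, out=w_inh)

print("response to the reward-paired stimulus:",
      round(max(0.0, (w_exc - w_inh) @ rewarded_stim), 3))
```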
Journal Articles
Publisher: Journals Gateway
Neural Computation (2001) 13 (9): 2031–2047.
Published: 01 September 2001
Abstract
This study investigates the influence of attention modulation on neural tuning functions. Experiments have shown that attention modulation alters neural tuning curves. While the exact functions of attention are still under debate, attention is thought at least to help resolve limited processing capacity and to increase sensitivity to the attended stimulus. Inspired by recent experimental results on attention modulation, we investigate the influence of changes in the height and base rate of the tuning curve on encoding accuracy, using the Fisher information. Under the assumption of stimulus-conditional independence of neural responses, we derive explicit conditions that determine when the height and base rate should be increased or decreased to improve encoding accuracy. Notably, a decrease in the tuning height and base rate can improve the encoding accuracy in some cases. Our theoretical results can predict the effective size of attention modulation on the neural population with respect to encoding accuracy. We discuss how our method can be used quantitatively to evaluate different aspects of attention function.
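A sketch of the kind of calculation described, assuming stimulus-conditionally independent Poisson neurons with Gaussian tuning f_i(s) = b + h exp(-(s - c_i)^2 / 2w^2), for which J(s) = sum_i f_i'(s)^2 / f_i(s). The parameter values are arbitrary; the explicit conditions on when height and base rate should change are what the paper derives.

```python
import numpy as np

centers = np.linspace(-10.0, 10.0, 81)   # preferred stimuli (arbitrary)

def fisher_info(stimulus, height, base, width=2.0):
    # Gaussian tuning: f_i(s) = base + height * exp(-(s - c_i)^2 / (2 width^2)).
    d = stimulus - centers
    bump = np.exp(-d ** 2 / (2 * width ** 2))
    f = base + height * bump
    fprime = -height * d / width ** 2 * bump
    # Stimulus-conditionally independent Poisson neurons: J(s) = sum_i f_i'(s)^2 / f_i(s).
    return np.sum(fprime ** 2 / f)

s = 0.0
print("reference tuning   :", fisher_info(s, height=10.0, base=2.0))
print("increased height   :", fisher_info(s, height=15.0, base=2.0))
print("decreased base rate:", fisher_info(s, height=10.0, base=0.5))
```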
Journal Articles
Publisher: Journals Gateway
Neural Computation (2001) 13 (4): 775–797.
Published: 01 April 2001
Abstract
This study investigates a population decoding paradigm in which maximum likelihood inference is based on an unfaithful decoding model (UMLI). This is usually the case for neural population decoding because the encoding process of the brain is not exactly known or because a simplified decoding model is preferred to save computational cost. We consider an unfaithful decoding model that neglects the pairwise correlation between neuronal activities and prove that UMLI is asymptotically efficient when the neuronal correlation is uniform or of limited range. The performance of UMLI is compared with that of the maximum likelihood inference based on the faithful model and with that of the center-of-mass decoding method. It turns out that UMLI has the advantages of markedly reducing computational complexity while maintaining high decoding accuracy. Moreover, it can be implemented by a biologically feasible recurrent network (Pouget, Zhang, Deneve, & Latham, 1998). The effect of correlation on decoding accuracy is also discussed.
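A rough sketch contrasting the three decoders discussed, under a simplifying Gaussian-noise assumption so that the faithful maximum likelihood decoder becomes generalized least squares over a stimulus grid and the correlation-ignoring decoder becomes ordinary least squares. The tuning curves, correlation structure, and grid are illustrative and are not the paper's encoding model.

```python
import numpy as np

rng = np.random.default_rng(2)
centers = np.linspace(-5.0, 5.0, 40)      # preferred stimuli (illustrative)
width = 1.0

def tuning(s):
    return 1.0 + 5.0 * np.exp(-(s - centers) ** 2 / (2 * width ** 2))

# Limited-range noise correlation with unit variances (illustrative).
C = np.exp(-np.abs(centers[:, None] - centers[None, :]) / 1.5)
C_inv = np.linalg.inv(C)
L = np.linalg.cholesky(C)                 # to draw correlated Gaussian noise

grid = np.linspace(-2.0, 2.0, 401)        # candidate stimuli for grid-search decoding
F = np.array([tuning(s) for s in grid])   # mean responses on the grid

def decode(r):
    resid = r - F
    faithful = grid[np.argmin(np.einsum('ij,jk,ik->i', resid, C_inv, resid))]
    unfaithful = grid[np.argmin(np.sum(resid ** 2, axis=1))]   # correlations ignored
    com = np.sum(r * centers) / np.sum(r)                      # center-of-mass decoder
    return faithful, unfaithful, com

s_true = 0.3
estimates = np.array([decode(tuning(s_true) + L @ rng.normal(size=centers.size))
                      for _ in range(500)])
for name, e in zip(["faithful ML", "correlation-blind ML (UMLI-like)", "center of mass"],
                   (estimates - s_true).T):
    print(f"{name:32s} RMSE = {np.sqrt(np.mean(e ** 2)):.3f}")
```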
Journal Articles
Publisher: Journals Gateway
Neural Computation (1998) 10 (1): 113–132.
Published: 01 January 1998
Abstract
In considering working memory as a means for goal-directed behavior in nonstationary environments, we argue that the dynamics of working memory should satisfy two opposing demands: long-term maintenance and quick transition. These two characteristics are contradictory within the linear domain. We propose the near-saddle-node bifurcation behavior of a sigmoidal unit with a self-connection as a candidate dynamical mechanism that satisfies both demands. Evolutionary programming experiments show that near-saddle-node bifurcation behavior can be found in recurrent networks optimized for a task that requires efficient use of working memory. The result suggests that near-saddle-node bifurcation behavior may be a functional necessity for survival in nonstationary environments.
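A minimal simulation sketch of the mechanism described: a discrete-time sigmoidal unit with a self-connection tuned into the bistable regime created by the saddle-node bifurcations holds its state under zero input (maintenance) and switches quickly under a brief pulse (transition). The weight, bias, and pulse values are illustrative, not the evolved parameters from the paper.

```python
import numpy as np

def sigmoid(u):
    return 1.0 / (1.0 + np.exp(-u))

w_self, bias = 8.0, -4.0   # self-connection and bias in the bistable regime (illustrative)
x = 0.05                   # start near the "low" state

trace = []
for t in range(120):
    inp = 0.0
    if 40 <= t < 44:
        inp = 2.0          # brief positive pulse: quick transition to the "high" state
    if 80 <= t < 84:
        inp = -2.0         # brief negative pulse: switch back down
    x = sigmoid(w_self * x + bias + inp)
    trace.append(x)

# Long-term maintenance between pulses, quick transitions at the pulses.
print([round(trace[t], 2) for t in (10, 39, 50, 79, 90, 119)])
```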