Shin Ishii
1–6 of 6
Journal Articles
Neural Computation (2009) 21 (7): 2049–2081.
Published: 01 July 2009
Abstract
In this letter, we present new methods of multiclass classification that combine multiple binary classifiers. Misclassification of each binary classifier is formulated as a bit-inversion error with probabilistic models, by analogy with information transmission theory. Dependence between binary classifiers is incorporated into our model, which makes the decoder a type of Boltzmann machine. We performed experimental studies using a synthetic data set, data sets from the UCI repository, and bioinformatics data sets, and the results show that the proposed methods are superior to the existing multiclass classification methods.
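The decoding step can be illustrated with a simplified sketch that treats each binary classifier's output as a bit sent through an independent binary symmetric channel and picks the class whose codeword is most likely. The paper's actual decoder additionally models dependence between classifiers via a Boltzmann machine, which this toy version omits; the function name, codebook, and flip probability below are illustrative assumptions.

```python
import numpy as np

def ml_decode(codebook, observed_bits, flip_prob):
    """Maximum-likelihood decoding of binary-classifier outputs.

    codebook: (n_classes, n_classifiers) array of +/-1 target bits per class.
    observed_bits: (n_classifiers,) array of +/-1 outputs from the binary classifiers.
    flip_prob: assumed probability that any single classifier inverts its bit.
    Returns the class whose codeword is most likely to have produced the
    observed bits under independent bit-inversion noise.
    """
    agree = codebook == observed_bits          # which bits match each codeword
    log_like = (agree * np.log(1 - flip_prob)
                + (~agree) * np.log(flip_prob)).sum(axis=1)
    return int(np.argmax(log_like))

# Toy usage: 3 classes encoded by 4 binary classifiers, one observed bit flipped.
codebook = np.array([[+1, +1, -1, -1],
                     [+1, -1, +1, -1],
                     [-1, +1, +1, +1]])
observed = np.array([+1, -1, +1, +1])          # noisy version of class 1's codeword
print(ml_decode(codebook, observed, flip_prob=0.1))   # -> 1
```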
Journal Articles
Neural Computation (2007) 19 (11): 3051–3087.
Published: 01 November 2007
Abstract
Games constitute a challenging domain of reinforcement learning (RL) for acquiring strategies because many of them include multiple players and many unobservable variables in a large state space. The difficulty of solving such realistic multiagent problems with partial observability arises mainly from the fact that the computational cost for the estimation and prediction in the whole state space, including unobservable variables, is too heavy. To overcome this intractability and enable an agent to learn in an unknown environment, an effective approximation method is required with explicit learning of the environmental model. We present a model-based RL scheme for large-scale multiagent problems with partial observability and apply it to a card game, hearts. This game is a well-defined example of an imperfect information game and can be approximately formulated as a partially observable Markov decision process (POMDP) for a single learning agent. To reduce the computational cost, we use a sampling technique in which the heavy integration required for the estimation and prediction can be approximated by a plausible number of samples. Computer simulation results show that our method is effective in solving such a difficult, partially observable multiagent problem.
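A minimal sketch of the general sampling idea described above (not the paper's actual estimator for hearts): replace the heavy integration over unobservable variables by an average over a plausible number of sampled hidden states consistent with the current observation. The sampler and simulator passed in below are hypothetical stand-ins.

```python
import random

def estimate_action_value(action, observation, sample_hidden_state,
                          simulate_return, n_samples=100):
    """Monte Carlo estimate of an action's value under partial observability.

    Instead of integrating over every possible hidden state (e.g. all opponents'
    unseen cards), draw hidden-state samples consistent with the observation
    and average the simulated returns.
    sample_hidden_state(observation) -> one hypothetical full state.
    simulate_return(state, action)   -> return obtained from that state.
    """
    total = 0.0
    for _ in range(n_samples):
        state = sample_hidden_state(observation)   # a plausible completion of what is hidden
        total += simulate_return(state, action)
    return total / n_samples

# Toy usage with stand-in sampler and simulator (hypothetical, just to show the call).
random.seed(0)
value = estimate_action_value(
    action="play_low_card",
    observation={"my_hand": [2, 5, 9]},
    sample_hidden_state=lambda obs: {"opponent_hand": random.sample(range(1, 14), 3), **obs},
    simulate_return=lambda s, a: -max(s["opponent_hand"]) * 0.01,
    n_samples=200,
)
print(round(value, 3))
```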
Journal Articles
Neural Computation (2005) 17 (1): 115–144.
Published: 01 January 2005
Abstract
In this letter, we propose a noisy nonlinear version of independent component analysis (ICA). Assuming that the probability density function (p.d.f.) of the sources is known, a learning rule is derived based on maximum likelihood estimation (MLE). Our model includes some algorithms for noisy linear ICA (e.g., Bermond & Cardoso, 1999) or noise-free nonlinear ICA (e.g., Lee, Koehler, & Orglmeister, 1997) as special cases. In particular, when the nonlinear function is linear, the learning rule, derived as a generalized expectation-maximization algorithm, has a form similar to the noisy ICA algorithm previously presented by Douglas, Cichocki, and Amari (1998). Moreover, our learning rule becomes identical to the standard noise-free linear ICA algorithm in the noiseless limit, whereas existing MLE-based noisy ICA algorithms do not rigorously include noise-free ICA. We trained our noisy nonlinear ICA on acoustic signals such as speech and music. The model after learning successfully simulates virtual pitch phenomena, and the existence region of virtual pitch is qualitatively similar to that observed in a psychoacoustic experiment. Although a linear transformation hypothesized in the central auditory system can account for the pitch sensation, our model suggests that this linear transformation can be acquired through learning from actual acoustic signals. Since our model includes cepstrum analysis as a special case, it is expected to provide a useful feature extraction method of the kind often supplied by cepstrum analysis.
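For context, a sketch of the standard noise-free linear ICA learning rule that the abstract says is recovered in the noiseless limit: a maximum-likelihood natural-gradient update with a tanh nonlinearity, assuming super-gaussian sources. This is the well-known baseline, not the paper's noisy nonlinear algorithm, and the learning rate and iteration count below are illustrative.

```python
import numpy as np

def ica_natural_gradient(X, n_iter=500, lr=0.02, seed=0):
    """Standard noise-free linear ICA by maximum likelihood (natural gradient).

    X: (n_sources, n_samples) mixed signals, assumed zero-mean.
    Uses phi(y) = tanh(y), appropriate for super-gaussian (e.g. speech-like) sources.
    Returns the unmixing matrix W such that y = W @ X are the estimated sources.
    """
    rng = np.random.default_rng(seed)
    n, T = X.shape
    W = rng.normal(scale=0.1, size=(n, n)) + np.eye(n)
    for _ in range(n_iter):
        Y = W @ X
        phi = np.tanh(Y)
        # Natural-gradient MLE update: W += lr * (I - E[phi(y) y^T]) W
        W += lr * (np.eye(n) - phi @ Y.T / T) @ W
    return W

# Toy usage: unmix two super-gaussian (Laplacian) sources.
rng = np.random.default_rng(1)
S = rng.laplace(size=(2, 2000))
A = np.array([[1.0, 0.6], [0.4, 1.0]])
W = ica_natural_gradient(A @ S)
print(np.round(W @ A, 2))   # ideally close to a scaled permutation matrix
```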
Journal Articles
Neural Computation (2001) 13 (12): 2763–2797.
Published: 01 December 2001
Abstract
This article presents a new theoretical framework for the dynamics of a stochastic spiking neuron model with a general membrane response to input spikes. We assume that the input spikes obey an inhomogeneous Poisson process. The stochastic process of the membrane potential then becomes a gaussian process. When a general form of the membrane response is assumed, the stochastic process becomes a Markov-gaussian process. We present a calculation method for the membrane potential density and the firing probability density. Our new formulation is an extension of the existing formulation based on the diffusion approximation. Although the single-Markov assumption of the diffusion approximation simplifies the stochastic process analysis, the calculation is inaccurate when the stochastic process involves a multiple Markov property. We find that the variation of the shape of the membrane response, which has often been ignored in existing stochastic process studies, significantly affects the firing probability. Our approach can also handle the reset effect, which has been difficult to treat in analyses based on the first-passage-time density.
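A rough Monte Carlo counterpart of the quantity treated analytically above: simulate the membrane potential as a sum of general response kernels triggered by inhomogeneous Poisson input spikes, and estimate the firing probability as the fraction of trials that cross a threshold. The kernel, rate profile, and threshold below are illustrative assumptions, and the reset effect is ignored.

```python
import numpy as np

def firing_probability(rate_fn, kernel, threshold, t_max, dt=0.001,
                       weight=1.0, n_trials=1000, seed=0):
    """Monte Carlo estimate of the probability of firing within [0, t_max].

    rate_fn(t): instantaneous rate of the inhomogeneous Poisson input (Hz).
    kernel(s):  membrane response s seconds after an input spike (general shape).
    A trial 'fires' if the summed response ever exceeds `threshold`.
    """
    rng = np.random.default_rng(seed)
    times = np.arange(0.0, t_max, dt)
    fired = 0
    for _ in range(n_trials):
        # Draw inhomogeneous Poisson spike times on the discretized grid.
        spikes = times[rng.random(times.size) < rate_fn(times) * dt]
        v = sum(weight * kernel(times - s) for s in spikes)
        if np.any(np.asarray(v) >= threshold):
            fired += 1
    return fired / n_trials

# Toy usage: alpha-function response, ramping input rate (hypothetical numbers).
alpha = lambda s: np.where(s > 0, (s / 0.005) * np.exp(1 - s / 0.005), 0.0)
rate = lambda t: 200.0 + 800.0 * t        # Hz, increasing over the window
print(firing_probability(rate, alpha, threshold=4.0, t_max=0.2))
```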
Journal Articles
Neural Computation (2000) 12 (9): 2209–2225.
Published: 01 September 2000
Abstract
In this article, we propose new analog neural approaches to combinatorial optimization problems, in particular, quadratic assignment problems (QAPs). Our proposed methods are based on an analog version of the λ-opt heuristics, which simultaneously changes the assignments of λ elements in a permutation. Since we can take a relatively large λ value, our new methods can achieve a middle-range search over possible solutions, which helps the system ignore shallow local minima and escape from local minima. In experiments, we have applied our methods to relatively large-scale (N = 80–150) QAPs. The results show that our new methods are comparable to the present champion algorithms; for two benchmark problems, they obtain better solutions than the previous champion algorithms.
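For reference, a sketch of the discrete λ-opt idea in its simplest form (λ = 2): repeatedly swap the assignments of two elements whenever the swap lowers the QAP cost. The analog neural version proposed in the paper, which handles larger λ, is not reproduced here; the instance data below are random placeholders.

```python
import numpy as np

def qap_cost(perm, flow, dist):
    """QAP objective: sum over i, j of flow[i, j] * dist[perm[i], perm[j]]."""
    return float(np.sum(flow * dist[np.ix_(perm, perm)]))

def two_opt_qap(flow, dist, n_sweeps=50, seed=0):
    """Plain 2-opt local search for the QAP (the lambda = 2 case of lambda-opt):
    swap the assignments of two elements whenever the swap lowers the cost."""
    rng = np.random.default_rng(seed)
    n = flow.shape[0]
    perm = rng.permutation(n)
    for _ in range(n_sweeps):
        improved = False
        for i in range(n):
            for j in range(i + 1, n):
                cand = perm.copy()
                cand[i], cand[j] = cand[j], cand[i]
                if qap_cost(cand, flow, dist) < qap_cost(perm, flow, dist):
                    perm, improved = cand, True
        if not improved:
            break
    return perm, qap_cost(perm, flow, dist)

# Toy usage on a random symmetric instance (hypothetical data).
rng = np.random.default_rng(1)
F = rng.integers(0, 10, size=(8, 8)); F = (F + F.T) // 2
D = rng.integers(0, 10, size=(8, 8)); D = (D + D.T) // 2
print(two_opt_qap(F, D))
```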
Journal Articles
Neural Computation (2000) 12 (2): 407–432.
Published: 01 February 2000
Abstract
A normalized gaussian network (NGnet) (Moody & Darken, 1989) is a network of local linear regression units. The model softly partitions the input space by normalized gaussian functions, and each local unit linearly approximates the output within its partition. In this article, we propose a new on-line EM algorithm for the NGnet, which is derived from the batch EM algorithm (Xu, Jordan, & Hinton, 1995) by introducing a discount factor. We show that the on-line EM algorithm is equivalent to the batch EM algorithm if a specific schedule of the discount factor is employed. In addition, we show that the on-line EM algorithm can be considered a stochastic approximation method for finding the maximum likelihood estimator. A new regularization method is proposed in order to deal with a singular input distribution. In order to manage dynamic environments, in which the input-output distribution of the data changes over time, unit manipulation mechanisms such as unit production, unit deletion, and unit division are also introduced based on a probabilistic interpretation. Experimental results show that our approach is suitable for function approximation problems in dynamic environments. We also apply our on-line EM algorithm to robot dynamics problems and compare it with the mixtures-of-experts family.
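The discount-factor idea can be sketched in isolation: sufficient statistics are kept as discounted running averages, so each new sample partly replaces the accumulated statistic and old data are gradually forgotten. The example below tracks only a mean and covariance under a drifting distribution; the NGnet's per-unit responsibilities and local linear regression are omitted, and the step size is an illustrative assumption.

```python
import numpy as np

class DiscountedStats:
    """Discounted running averages of sufficient statistics, as used in on-line EM:
    <<f>>_t = (1 - eta_t) * <<f>>_{t-1} + eta_t * f(x_t),
    so the estimator can track a distribution that changes over time."""

    def __init__(self, dim):
        self.sum_x = np.zeros(dim)          # discounted average of x
        self.sum_xx = np.zeros((dim, dim))  # discounted average of x x^T

    def update(self, x, eta):
        self.sum_x = (1 - eta) * self.sum_x + eta * x
        self.sum_xx = (1 - eta) * self.sum_xx + eta * np.outer(x, x)

    def mean_cov(self):
        mu = self.sum_x
        return mu, self.sum_xx - np.outer(mu, mu)

# Toy usage: track the mean of data whose distribution shifts halfway through.
rng = np.random.default_rng(0)
stats = DiscountedStats(dim=2)
for t in range(2000):
    center = np.array([0.0, 0.0]) if t < 1000 else np.array([3.0, -1.0])
    stats.update(center + rng.normal(size=2), eta=0.01)
print(np.round(stats.mean_cov()[0], 2))   # close to the post-shift mean [3, -1]
```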