Kazushi Ikeda
Journal Articles
Publisher: Journals Gateway
Neural Computation (2005) 17 (12): 2719–2735.
Published: 01 December 2005
Abstract
An information geometrical method is developed for characterizing or classifying neurons in cortical areas whose spike rates fluctuate in time. Under the assumption that the interspike intervals of a spike sequence of a neuron obey a gamma process with a time-variant spike rate and a fixed shape parameter, we formulate the problem of characterization as a semiparametric statistical estimation, where the spike rate is a nuisance parameter. We derive optimal criteria from the information geometrical viewpoint when certain assumptions are added to the formulation, and we show that some existing measures, such as the coefficient of variation and the local variation, are expressed as estimators of certain functions under the same assumptions.
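The two existing measures named in the abstract, the coefficient of variation (CV) and the local variation (LV), have standard definitions on a sequence of interspike intervals. A minimal sketch of those standard definitions (not the paper's estimator derivation) is given below, assuming the intervals are supplied as a 1-D array.

```python
import numpy as np

def cv(isi):
    """Coefficient of variation of interspike intervals:
    standard deviation divided by the mean."""
    isi = np.asarray(isi, dtype=float)
    return isi.std(ddof=1) / isi.mean()

def lv(isi):
    """Local variation: compares each interval with its successor,
    which makes it less sensitive to slow changes in the spike rate.
    LV = 3/(n-1) * sum_i ((T_i - T_{i+1}) / (T_i + T_{i+1}))^2."""
    isi = np.asarray(isi, dtype=float)
    t1, t2 = isi[:-1], isi[1:]
    return 3.0 * np.mean(((t1 - t2) / (t1 + t2)) ** 2)
```

For a stationary Poisson spike train both measures are close to 1, while regular firing drives them toward 0; a rate that drifts in time inflates CV much more than LV, which is why such measures are treated as estimators under the gamma-process assumption.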
Journal Articles
Publisher: Journals Gateway
Neural Computation (2005) 17 (11): 2508–2529.
Published: 01 November 2005
Abstract
By employing the L1 or L∞ norm in maximizing margins, support vector machines (SVMs) result in a linear programming problem that requires a lower computational load than SVMs with the L2 norm. However, how the change of norm affects the generalization ability of SVMs has not been clarified so far except through numerical experiments. In this letter, the geometrical meaning of SVMs with the Lp norm is investigated, and the SVM solutions are shown to have rather little dependence on p.
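To make the linear programming formulation concrete, the following sketch solves a soft-margin SVM with an L1 penalty on the weight vector as an LP, using the standard split w = w⁺ − w⁻ with nonnegative parts and slack variables. This is a generic L1-norm SVM in the assumed form, not necessarily the exact formulation analyzed in the letter; the regularization constant C and the use of scipy's `linprog` are illustrative choices.

```python
import numpy as np
from scipy.optimize import linprog

def l1_svm(X, y, C=1.0):
    """L1-norm soft-margin SVM solved as a linear program.

    Decision variables z = [w_plus, w_minus, b, xi], with w = w_plus - w_minus.
    Objective:   sum(w_plus + w_minus) + C * sum(xi)
    Constraints: y_i * (w . x_i + b) >= 1 - xi_i,  w_plus, w_minus, xi >= 0.
    """
    n, d = X.shape
    c = np.concatenate([np.ones(2 * d), [0.0], C * np.ones(n)])
    # Rewrite the margin constraints as A_ub @ z <= b_ub.
    Yx = y[:, None] * X
    A_ub = np.hstack([-Yx, Yx, -y[:, None], -np.eye(n)])
    b_ub = -np.ones(n)
    bounds = [(0, None)] * (2 * d) + [(None, None)] + [(0, None)] * n
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    z = res.x
    w, b = z[:d] - z[d:2 * d], z[2 * d]
    return w, b
```

Replacing the L2 regularizer with an L1 (or L∞) one is what turns the usual quadratic program into a linear program, which is the computational advantage the abstract refers to.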
Journal Articles
Publisher: Journals Gateway
Neural Computation (2004) 16 (8): 1705–1719.
Published: 01 August 2004
Abstract
The generalization properties of learning classifiers with a polynomial kernel function are examined. In kernel methods, input vectors are mapped into a high-dimensional feature space where the mapped vectors are linearly separated. It is well known that a linear dichotomy has an average generalization error, or learning curve, proportional to the dimension of the input space and inversely proportional to the number of given examples in the asymptotic limit. However, this does not hold for kernel methods, since the feature vectors lie on a submanifold in the feature space, called the input surface. In this letter, we discuss how the asymptotic average generalization error depends on the relationship between the input surface and the true separating hyperplane in the feature space, where the essential dimension of the true separating polynomial, named the class, is important. We derive upper bounds on this error in several cases and confirm them using computer simulations.
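The "input surface" idea can be illustrated with an explicit feature map for a low-degree polynomial kernel. The sketch below, which is only an illustration and not taken from the letter, writes out the degree-2 map for 2-dimensional inputs: the mapped points live in a 6-dimensional feature space but form a 2-dimensional surface parameterized by the original inputs.

```python
import numpy as np

def poly2_features(x):
    """Explicit feature map for the degree-2 polynomial kernel
    k(x, z) = (1 + x . z)^2 with 2-dimensional input x = (x1, x2).
    The scaling by sqrt(2) makes phi(x) . phi(z) equal the kernel value."""
    x1, x2 = x
    return np.array([
        1.0,
        np.sqrt(2) * x1,
        np.sqrt(2) * x2,
        x1 ** 2,
        x2 ** 2,
        np.sqrt(2) * x1 * x2,
    ])

# Check that the explicit map reproduces the kernel value.
rng = np.random.default_rng(0)
x, z = rng.normal(size=2), rng.normal(size=2)
assert np.isclose((1.0 + x @ z) ** 2, poly2_features(x) @ poly2_features(z))
# The feature space is 6-dimensional, yet every mapped point lies on a
# 2-dimensional submanifold (the "input surface") traced out by (x1, x2).
```

Because the examples are confined to this surface rather than filling the feature space, the naive learning-curve scaling with the feature-space dimension no longer applies, which is the situation the abstract analyzes.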