1-5 of 5 results for Simon Haykin
Journal Articles
Neural Computation (2014) 26 (2): 377–420.
Published: 01 February 2014
Abstract
Sparse coding has established itself as a useful tool for the representation of natural data in both the neuroscience and signal-processing literatures. The aim of this letter, inspired by the human brain, is to improve on the performance of the sparse coding algorithm by bridging the gap between neuroscience and engineering. To this end, we build on the localized perception-action cycle in cognitive neuroscience by placing it under the umbrella of perceptual attention, which gradually increases the contrast between relevant and irrelevant information. Stated another way, irrelevant information is filtered away, while relevant information about the environment is enhanced from one cycle to the next. We may thus think in terms of the information filter, which, in a Bayesian context, was introduced into the literature by Fraser (1967). The information filter provides a method for the algorithmic implementation of perceptual attention and may therefore be viewed as the basis for improving the algorithmic performance of sparse coding. To support this performance improvement, the letter presents two computer experiments. The first uses simulated (real-valued) data generated purposely to make the problem challenging. The second uses real-life radar data that are complex valued, hence the proposal to introduce Wirtinger calculus into the derivation of the new algorithm.
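The Bayesian information filter cited here is the inverse-covariance form of the Kalman filter: evidence from each new measurement adds directly to the information (inverse-covariance) matrix, which is why it suits the cycle-by-cycle sharpening of relevant information described above. What follows is only a minimal sketch of that generic measurement update under a linear Gaussian model; the function name, variables, and toy numbers are illustrative and are not taken from the letter, which combines the filter with sparse coding and, for complex radar data, with Wirtinger calculus.

```python
import numpy as np

def information_update(Y, y, H, R, z):
    """One measurement update of a linear information filter.

    Y : (n, n) information matrix (inverse state covariance)
    y : (n,)   information vector, y = Y @ x_hat
    H : (m, n) measurement matrix
    R : (m, m) measurement-noise covariance
    z : (m,)   measurement
    """
    R_inv = np.linalg.inv(R)
    Y_new = Y + H.T @ R_inv @ H   # information is additive across measurements
    y_new = y + H.T @ R_inv @ z
    return Y_new, y_new

# Example: two noisy scalar observations of a 1-D state sharpen the posterior.
Y, y = np.array([[0.1]]), np.array([0.0])        # weak prior
H, R = np.array([[1.0]]), np.array([[0.5]])
for z in (1.9, 2.1):
    Y, y = information_update(Y, y, H, R, np.array([z]))
x_hat = np.linalg.solve(Y, y)                    # posterior mean, close to 2.0
print(x_hat, Y)
```

Each pass through the loop plays the role of one perception cycle: the information matrix grows, so the estimate's uncertainty shrinks.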
Journal Articles
Neural Computation (2005) 17 (12): 2648–2671.
Published: 01 December 2005
Abstract
We propose a novel model-based hearing compensation strategy and a gradient-free optimization procedure for a learning-based hearing aid design. Motivated by physiological data and by normal and impaired auditory nerve models, the hearing compensation strategy is cast as a neural coding problem, and a Neurocompensator is designed to compensate for the hearing loss and enhance the speech. To learn the unknown parameters of the Neurocompensator, we use a gradient-free optimization procedure, an improved version of ALOPEX that we have developed (Haykin, Chen, & Becker, 2004). We present our methodology, learning procedure, and experimental results in detail, and we also discuss the unsupervised learning and optimization methods.
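ALOPEX is a correlation-based, gradient-free optimizer: each parameter is perturbed by a fixed step whose sign is drawn stochastically from the correlation between that parameter's previous change and the previous change in the cost. The sketch below shows the classic textbook rule on a toy quadratic loss; it is not the improved version developed in the paper, and the names, step size, and fixed temperature (real ALOPEX anneals it) are illustrative assumptions.

```python
import numpy as np

def alopex_minimize(loss, w, n_iters=2000, step=0.05, temp=0.01, seed=0):
    """Gradient-free ALOPEX-style minimization (illustrative textbook variant).

    Each weight moves by +/- step; the direction is drawn from the correlation
    between that weight's previous change and the previous change in the loss.
    Real ALOPEX anneals the temperature; here it is held fixed for brevity.
    """
    rng = np.random.default_rng(seed)
    prev_loss = loss(w)
    dw = rng.choice([-step, step], size=w.shape)   # initial random perturbation
    for _ in range(n_iters):
        w = w + dw
        cur_loss = loss(w)
        corr = dw * (cur_loss - prev_loss)          # per-weight correlation <dw, dE>
        p_up = 1.0 / (1.0 + np.exp(corr / temp))    # favour the direction that lowered the loss
        dw = np.where(rng.random(w.shape) < p_up, step, -step)
        prev_loss = cur_loss
    return w

# Toy usage: minimize a quadratic bowl without any gradient information.
w_est = alopex_minimize(lambda w: float(np.sum((w - 3.0) ** 2)), w=np.zeros(4))
print(w_est)  # entries drift toward 3.0; residual jitter is inherent to the stochastic search
```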
Journal Articles
Neural Computation (2005) 17 (9): 1875–1902.
Published: 01 September 2005
Abstract
This review presents an overview of a challenging problem in auditory perception, the cocktail party phenomenon, the delineation of which goes back to a classic paper by Cherry in 1953. In this review, we address the following issues: (1) human auditory scene analysis, which is a general process carried out by the auditory system of a human listener; (2) insight into auditory perception, which is derived from Marr's vision theory; (3) computational auditory scene analysis, which focuses on specific approaches aimed at solving the machine cocktail party problem; (4) active audition, the proposal for which is motivated by analogy with active vision; and (5) discussion of brain theory and independent component analysis, on the one hand, and correlative neural firing, on the other.
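One family of machine approaches touched on in the review is independent component analysis (ICA), which recovers statistically independent sources from their linear mixtures. Below is a minimal, hedged illustration of that idea on two synthetic "voices" using scikit-learn's FastICA; the library, the toy signals, and the mixing matrix are demonstration assumptions and do not come from the review itself.

```python
import numpy as np
from sklearn.decomposition import FastICA  # assumes scikit-learn is installed

# Two "speakers": a sinusoid and a square-ish wave, mixed by an unknown matrix.
t = np.linspace(0, 8, 2000)
S = np.c_[np.sin(2 * t), np.sign(np.sin(3 * t))]   # true sources (n_samples, 2)
A = np.array([[1.0, 0.5], [0.4, 1.0]])             # unknown mixing matrix
X = S @ A.T                                        # observed "microphone" mixtures

ica = FastICA(n_components=2, random_state=0)
S_est = ica.fit_transform(X)                       # recovered sources (up to order/scale/sign)

C = np.abs(np.corrcoef(S_est.T, S.T))[:2, 2:]      # |correlation| between estimates and truth
print(C.round(2))                                  # each recovered component matches one source
```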
Journal Articles
Neural Computation (2002) 14 (12): 2791–2846.
Published: 01 December 2002
Abstract
This review provides a comprehensive understanding of regularization theory from different perspectives, emphasizing smoothness and simplicity principles. Using the tools of operator theory and Fourier analysis, it is shown that the solution of the classical Tikhonov regularization problem can be derived from the regularized functional defined by a linear differential (integral) operator in the spatial (Fourier) domain. State-of-the-art research relevant to regularization theory is reviewed, covering Occam's razor, minimum description length, Bayesian theory, pruning algorithms, informational (entropy) theory, statistical learning theory, and equivalent regularization. The universal principle of regularization in terms of Kolmogorov complexity is discussed. Finally, some prospective studies on regularization theory and beyond are suggested.
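The classical Tikhonov problem referred to here minimizes a data-fit term plus a penalized smoothness functional, min_w ||Phi w - y||^2 + lambda ||L w||^2, whose minimizer has the closed form w = (Phi^T Phi + lambda L^T L)^{-1} Phi^T y. The sketch below implements that closed form on a toy polynomial fit; the basis, noise level, and the simple choice L = I (plain ridge) are illustrative assumptions, not details taken from the review.

```python
import numpy as np

def tikhonov_solve(Phi, y, lam, L=None):
    """Minimize ||Phi w - y||^2 + lam * ||L w||^2 (classical Tikhonov form).

    With L = I this is ordinary ridge regression; a difference operator for L
    instead penalizes roughness, i.e. enforces a smoothness prior.
    """
    n = Phi.shape[1]
    if L is None:
        L = np.eye(n)
    return np.linalg.solve(Phi.T @ Phi + lam * (L.T @ L), Phi.T @ y)

# Toy usage: noisy samples of a smooth function, fitted with an ill-conditioned basis.
rng = np.random.default_rng(0)
x = np.linspace(-1, 1, 50)
y = np.sin(np.pi * x) + 0.1 * rng.standard_normal(x.size)
Phi = np.vander(x, N=12, increasing=True)        # degree-11 polynomial design matrix
w = tikhonov_solve(Phi, y, lam=1e-3)             # regularization stabilizes the fit
print(np.abs(Phi @ w - np.sin(np.pi * x)).max())  # maximum deviation from the noise-free target
```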
Journal Articles
Neural Computation (1993) 5 (6): 928–938.
Published: 01 November 1993
Abstract
In this paper, we observe that a particular class of rational function (RF) approximations may be viewed as feedforward networks. Like the radial basis function (RBF) network, the training of the RF network may be performed using a linear adaptive filtering algorithm. We illustrate the application of the RF network by considering two nonlinear signal-processing problems. The first problem concerns the one-step prediction of a time series consisting of a pair of complex sinusoids in the presence of colored non-Gaussian noise; simulated data were used for this problem. In the second problem, we use the RF network to build a nonlinear dynamic model of sea clutter (radar backscattering from a sea surface); here, real-life data were used for the study.
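The reason a rational function network can be trained with linear machinery is that a ratio of polynomials, y ~ P(u)/Q(u), becomes linear in its coefficients once the equation is cross-multiplied by Q with its constant term fixed to 1 (the equation-error formulation). The sketch below fits such a rational approximant to a toy scalar function with batch least squares standing in for a sample-by-sample linear adaptive filter (LMS/RLS); the equation-error trick, the Runge-function target, and the quadratic orders are illustrative assumptions, not the paper's complex-sinusoid or sea-clutter experiments.

```python
import numpy as np

# y ~ P(u)/Q(u) is nonlinear in u, but cross-multiplying by Q (constant term 1)
# gives the "equation error"  y = P(u) - y * (Q(u) - 1),
# which is linear in the coefficients of P and Q and therefore solvable by
# ordinary least squares -- or, sample by sample, by a linear adaptive filter.
rng = np.random.default_rng(0)
u = np.linspace(-1.0, 1.0, 400)
y = 1.0 / (1.0 + 25.0 * u**2) + 0.01 * rng.standard_normal(u.size)  # Runge function + noise

# Quadratic numerator and denominator: P(u) = p0 + p1*u + p2*u^2, Q(u) = 1 + q1*u + q2*u^2.
A = np.column_stack([np.ones_like(u), u, u**2, -y * u, -y * u**2])
p0, p1, p2, q1, q2 = np.linalg.lstsq(A, y, rcond=None)[0]

y_hat = (p0 + p1 * u + p2 * u**2) / (1.0 + q1 * u + q2 * u**2)
print("max |error|:", np.abs(y_hat - 1.0 / (1.0 + 25.0 * u**2)).max())
```

The Runge function is itself rational, so the fitted coefficients land close to the true ones and the error stays near the injected noise level, which is what makes this a convenient toy check of the linear-in-parameters training idea.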