Richard H. R. Hahnloser
Journal Articles
Neural Computation (2015) 27 (10): 2231–2259.
Published: 01 October 2015
Abstract
This letter addresses the problem of separating two speakers from a single microphone recording. Three linear methods are tested for source separation, all of which operate directly on sound spectrograms: (1) eigenmode analysis of covariance difference to identify spectro-temporal features associated with large variance for one source and small variance for the other source; (2) maximum likelihood demixing in which the mixture is modeled as the sum of two gaussian signals and maximum likelihood is used to identify the most likely sources; and (3) suppression-regression, in which autoregressive models are trained to reproduce one source and suppress the other. These linear approaches are tested on the problem of separating a known male from a known female speaker. The performance of these algorithms is assessed in terms of the residual error of estimated source spectrograms, waveform signal-to-noise ratio, and perceptual evaluation of speech quality scores. This work shows that the algorithms compare favorably to nonlinear approaches such as nonnegative sparse coding in terms of simplicity, performance, and suitability for real-time implementations, and they provide benchmark solutions for monaural source separation tasks.
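A minimal sketch may help make the second method (maximum-likelihood demixing) concrete. Assuming zero-mean Gaussian models of each speaker's spectrogram frames with covariances C1 and C2, the most likely sources given a mixture frame x = s1 + s2 are the linear, Wiener-style estimates s1 = C1 (C1 + C2)^(-1) x and s2 = x - s1. The function names and random stand-in data below are illustrative assumptions, not the paper's code.

# Sketch of maximum-likelihood demixing of two zero-mean Gaussian sources
# observed through a single mixture of spectrogram frames.
import numpy as np

def estimate_covariance(frames):
    """frames: (n_frames, n_freq) spectrogram columns of one speaker."""
    centered = frames - frames.mean(axis=0)
    return centered.T @ centered / len(frames)

def demix(mixture_frames, C1, C2, reg=1e-6):
    """Return estimated source-1 and source-2 spectrogram frames."""
    n = C1.shape[0]
    W = C1 @ np.linalg.inv(C1 + C2 + reg * np.eye(n))  # linear demixing filter
    s1 = mixture_frames @ W.T
    s2 = mixture_frames - s1
    return s1, s2

# Hypothetical usage with random stand-in data:
rng = np.random.default_rng(0)
train_a, train_b = rng.standard_normal((500, 64)), rng.standard_normal((500, 64))
C1, C2 = estimate_covariance(train_a), estimate_covariance(train_b)
s1_hat, s2_hat = demix(rng.standard_normal((100, 64)), C1, C2)

Because the estimator is a single matrix multiplication per frame, this style of demixing is well suited to the real-time setting the abstract mentions.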
Journal Articles
Neural Computation (2003) 15 (3): 621–638.
Published: 01 March 2003
Abstract
The richness and complexity of recurrent cortical circuits is an inexhaustible source of inspiration for thinking about high-level biological computation. In past theoretical studies, constraints on the synaptic connection patterns of threshold-linear networks were found that guaranteed bounded network dynamics, convergence to attractive fixed points, and multistability, all fundamental aspects of cortical information processing. However, these conditions were only sufficient, and it remained unclear which were the minimal (necessary) conditions for convergence and multistability. We show that symmetric threshold-linear networks converge to a set of attractive fixed points if and only if the network matrix is copositive. Furthermore, the set of attractive fixed points is nonconnected (the network is multiattractive) if and only if the network matrix is not positive semidefinite. There are permitted sets of neurons that can be coactive at a stable steady state and forbidden sets that cannot. Permitted sets are clustered in the sense that subsets of permitted sets are permitted and supersets of forbidden sets are forbidden. By viewing permitted sets as memories stored in the synaptic connections, we provide a formulation of long-term memory that is more general than the traditional perspective of fixed-point attractor networks. There is a close correspondence between threshold-linear networks and networks defined by the generalized Lotka-Volterra equations.
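The two matrix conditions can be probed numerically. The sketch below assumes symmetric threshold-linear dynamics dx/dt = -x + [Wx + b]_+ and takes the "network matrix" to be I - W; this convention, the Monte-Carlo copositivity check, and all numbers are assumptions for illustration, not taken from the paper.

# Numerical illustration of the copositivity / positive-semidefiniteness conditions.
import numpy as np

def is_positive_semidefinite(M, tol=1e-9):
    return np.all(np.linalg.eigvalsh((M + M.T) / 2) >= -tol)

def looks_copositive(M, n_trials=20000, tol=1e-9, seed=0):
    """Monte-Carlo check of x^T M x >= 0 over nonnegative x
    (evidence only, not a proof of copositivity)."""
    rng = np.random.default_rng(seed)
    X = rng.random((n_trials, M.shape[0]))
    return np.all(np.einsum('ij,jk,ik->i', X, M, X) >= -tol)

def simulate(W, b, x0, dt=0.01, steps=5000):
    """Euler integration of the threshold-linear dynamics dx/dt = -x + [Wx + b]_+."""
    x = x0.copy()
    for _ in range(steps):
        x += dt * (-x + np.maximum(W @ x + b, 0.0))
    return x

W = np.array([[0.0, -0.6], [-0.6, 0.0]])   # symmetric mutual inhibition
M = np.eye(2) - W                          # assumed "network matrix"
print(looks_copositive(M), is_positive_semidefinite(M))     # converges, single attractor
print(simulate(W, b=np.array([1.0, 0.8]), x0=np.zeros(2)))  # attractive fixed point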
Journal Articles
Neural Computation (2002) 14 (11): 2627–2646.
Published: 01 November 2002
Abstract
Winner-take-all networks have been proposed to underlie many of the brain's fundamental computational abilities. However, not much is known about how to extend the grouping of potential winners in these networks beyond single neurons or uniformly arranged groups of neurons. We show that competition between arbitrary groups of neurons can be realized by organizing lateral inhibition in linear threshold networks. Given a collection of potentially overlapping groups (with the exception of some degenerate cases), the lateral inhibition results in network dynamics such that any permitted set of neurons that can be coactivated by some input at a stable steady state is contained in one of the groups. The information about the input is preserved in this operation. The activity level of a neuron in a permitted set corresponds to its stimulus strength, amplified by some constant. Sets of neurons that are not part of a group cannot be coactivated by any input at a stable steady state. We analyze the storage capacity of such a network for random groups—the number of random groups the network can store as permitted sets without creating too many spurious ones. In this framework, we calculate the optimal sparsity of the groups (maximizing group entropy). We find that for dense inputs, the optimal sparsity is unphysiologically small. However, when the inputs and the groups are equally sparse, we derive a more plausible optimal sparsity. We believe our results are the first steps toward attractor theories in hybrid analog-digital networks.
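One simple way to picture such group competition: a neuron inhibits every neuron with which it shares no group and does not inhibit fellow group members. The sketch below uses that assumption for illustration only (it is not necessarily the wiring analyzed in the paper); with overlapping groups {0, 1, 2} and {2, 3, 4}, the input decides which group's neurons remain coactive at the stable steady state.

# Illustrative group competition via lateral inhibition in a linear threshold network.
import numpy as np

def build_weights(n_neurons, groups, alpha=0.0, beta=1.5):
    """alpha: within-group interaction; beta: strength of cross-group inhibition."""
    W = -beta * np.ones((n_neurons, n_neurons))
    for g in groups:
        for i in g:
            for j in g:
                W[i, j] = alpha   # no inhibition between members of the same group
    np.fill_diagonal(W, 0.0)
    return W

def run(W, b, dt=0.01, steps=20000):
    """Euler integration of dx/dt = -x + [Wx + b]_+."""
    x = np.zeros(len(b))
    for _ in range(steps):
        x += dt * (-x + np.maximum(W @ x + b, 0.0))
    return x

groups = [{0, 1, 2}, {2, 3, 4}]          # overlapping groups sharing neuron 2
W = build_weights(5, groups)
print(run(W, b=np.array([1.0, 0.9, 0.8, 0.3, 0.2])).round(3))
# Only the neurons of the more strongly driven group {0, 1, 2} stay active.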
Journal Articles
Neural Computation (2002) 14 (7): 1669–1689.
Published: 01 July 2002
Abstract
There is strong anatomical and physiological evidence that neurons with large receptive fields located in higher visual areas are recurrently connected to neurons with smaller receptive fields in lower areas. We have previously described a minimal neuronal network architecture in which top-down attentional signals to large receptive field neurons can bias and selectively read out the bottom-up sensory information to small receptive field neurons (Hahnloser, Douglas, Mahowald, & Hepp, 1999). Here we study an enhanced model, where the role of attention is to recruit specific inter-areal feedback loops (e.g., drive neurons above firing threshold). We first illustrate the operation of recruitment on a simple example of visual stimulus selection. In the subsequent analysis, we find that attentional recruitment operates by dynamical modulation of signal amplification and response multistability. In particular, we find that attentional stimulus selection necessitates increased recruitment when the stimulus to be selected has low contrast and lies close to distractor stimuli. The selectability of a low-contrast stimulus is dependent on the gain of attentional effects; for example, low-contrast stimuli can be selected only when attention enhances neural responses. However, the dependence of attentional selection on stimulus-distractor distance is not contingent on whether attention enhances or suppresses responses. The computational implications of attentional recruitment are that cortical circuits can behave as winner-take-all mechanisms of variable strength and can achieve close to optimal signal discrimination in the presence of external noise.
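A toy two-loop sketch can illustrate the recruitment idea, though it is not the published model: two small-receptive-field map neurons share lateral inhibition, each is reciprocally connected to its own large-receptive-field pointer neuron, and a top-down attentional bias to one pointer recruits that feedback loop. With the illustrative numbers below, the low-contrast stimulus at location 0 wins the competition only because of the attentional bias; setting attention to zero lets the higher-contrast stimulus at location 1 win.

# Toy sketch of attentional recruitment of an inter-areal feedback loop
# in a threshold-linear network (all parameters are illustrative assumptions).
import numpy as np

def simulate(map_input, attention, w_ff=1.0, w_fb=1.0, w_inh=1.2, dt=0.01, steps=30000):
    """map_input: bottom-up drive to small-RF map neurons (two stimulus locations);
    attention: top-down bias to the two large-RF pointer neurons."""
    x = np.zeros(2)   # small-RF map neurons, one per stimulus location
    p = np.zeros(2)   # large-RF pointer neurons, one feedback loop per location
    for _ in range(steps):
        inhib = w_inh * x.sum()                        # shared lateral inhibition
        x_in = map_input + w_fb * p - inhib            # bottom-up plus recruited feedback
        p_in = w_ff * x + attention                    # feedforward drive plus attention
        x += dt * (-x + np.maximum(x_in, 0.0))
        p += dt * (-p + np.maximum(p_in, 0.0))
    return x, p

# Weak (low-contrast) stimulus at location 0 is selected only with enough attention:
print(simulate(map_input=np.array([0.6, 1.0]), attention=np.array([0.5, 0.0])))
print(simulate(map_input=np.array([0.6, 1.0]), attention=np.array([0.0, 0.0])))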