Neural Computation (2005) 17 (4): 741–778.
Published: 01 April 2005
Abstract
Performance in sensory discrimination tasks is commonly quantified using either information theory or ideal observer analysis. These two quantitative frameworks are often assumed to be equivalent. For example, higher mutual information is said to correspond to improved performance of an ideal observer in a stimulus estimation task. To the contrary, drawing on and extending previous results, we show that five information-theoretic quantities (entropy, response-conditional entropy, specific information, equivocation, and mutual information) violate this assumption. More positively, we show how these information measures can be used to calculate upper and lower bounds on ideal observer performance, and vice versa. The results show that the mathematical resources of ideal observer analysis are preferable to information theory for evaluating performance in a stimulus discrimination task. We also discuss the applicability of information theory to questions that ideal observer analysis cannot address.
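The bound relationship the abstract mentions can be made concrete on a small example. Below is a minimal sketch, assuming an invented two-stimulus, two-response joint distribution, that computes mutual information and equivocation alongside the accuracy of a maximum a posteriori ideal observer; the closing comment cites Fano's inequality, the standard route from equivocation to a lower bound on any observer's error.

import numpy as np

# Invented joint distribution p(s, r): rows are stimuli, columns are responses.
p_sr = np.array([[0.30, 0.10],
                 [0.05, 0.55]])

def H(p):
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

p_s = p_sr.sum(axis=1)                      # stimulus marginal
p_r = p_sr.sum(axis=0)                      # response marginal
mi = H(p_s) + H(p_r) - H(p_sr.ravel())      # mutual information I(S;R)
equivocation = H(p_s) - mi                  # H(S|R)

# Ideal observer: for each response, report the a posteriori most likely stimulus.
p_correct = p_sr.max(axis=0).sum()

# Fano's inequality bounds any observer's error Pe from below via the
# equivocation: H(S|R) <= h2(Pe) + Pe * log2(|S| - 1); the second term
# vanishes here since there are only two stimuli.
print(f"I(S;R) = {mi:.3f} bits, H(S|R) = {equivocation:.3f} bits")
print(f"ideal observer accuracy = {p_correct:.3f}")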
Neural Computation (1990) 2 (3): 261–269.
Published: 01 September 1990
Abstract
We present and summarize the results from 50-, 100-, and 200-city traveling salesman problem (TSP) benchmarks presented at the 1989 Neural Information Processing Systems (NIPS) postconference workshop using neural network, elastic net, genetic algorithm, and simulated annealing approaches. These results are also compared with a state-of-the-art hybrid approach consisting of greedy solutions, exhaustive search, and simulated annealing.
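The abstract does not specify the schedules or move sets the workshop entries used, but a generic simulated-annealing treatment of the TSP is short enough to sketch; the random city instance, 2-opt moves, and geometric cooling below are illustrative assumptions.

import math, random

random.seed(0)
n = 50
cities = [(random.random(), random.random()) for _ in range(n)]  # invented instance

def tour_length(tour):
    return sum(math.dist(cities[tour[i]], cities[tour[(i + 1) % n]])
               for i in range(n))

tour = list(range(n))
T = 1.0
while T > 1e-3:
    # Propose a 2-opt move: reverse a random segment of the tour.
    i, j = sorted(random.sample(range(n), 2))
    candidate = tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]
    delta = tour_length(candidate) - tour_length(tour)
    # Accept improvements always, uphill moves with Boltzmann probability.
    if delta < 0 or random.random() < math.exp(-delta / T):
        tour = candidate
    T *= 0.999  # geometric cooling
print(f"tour length: {tour_length(tour):.3f}")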
Neural Computation (1990) 2 (1): 1–24.
Published: 01 March 1990
Abstract
We describe how to formulate matching and combinatorial problems of vision and neural network theory by generalizing elastic and deformable template models to include binary matching elements. Techniques from statistical physics, which can be interpreted as computing marginal probability distributions, are then used to analyze these models and are shown to (1) relate them to existing theories and (2) give insight into the relations between, and relative effectiveness of, existing theories. In particular we exploit the power of statistical techniques to put global constraints on the set of allowable states of the binary matching elements. The binary elements can then be removed analytically before minimization. This is demonstrated to be preferable to existing methods of imposing such constraints by adding bias terms in the energy functions. We give applications to winner-take-all networks, correspondence for stereo and long-range motion, the traveling salesman problem, deformable template matching, learning, content addressable memories, and models of brain development. The biological plausibility of these networks is briefly discussed.
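The analytic removal of binary elements is easiest to see in the winner-take-all case: with the global constraint that exactly one element is on, summing the Boltzmann distribution over just the allowed states leaves softmax marginals and an effective free energy in which no binary variables remain. A minimal sketch, with invented energies and temperature:

import numpy as np

# Invented energies E_i for the candidate "winners" and a temperature T.
E = np.array([1.2, 0.3, 0.9, 0.5])
T = 0.1

# Enforcing the global constraint sum_i V_i = 1 exactly and summing the
# Boltzmann distribution over the allowed binary states leaves softmax
# marginals p_i and an effective free energy F with no binary variables.
logits = -E / T
m = logits.max()                               # stabilize the exponentials
p = np.exp(logits - m) / np.exp(logits - m).sum()
F = -T * (m + np.log(np.exp(logits - m).sum()))

print("marginals:", p.round(3))                # concentrates on argmin E as T -> 0
print(f"free energy: {F:.3f}")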
Neural Computation (1989) 1 (4): 425–464.
Published: 01 December 1989
Abstract
The premise of this article is that learning procedures used to train artificial neural networks are inherently statistical techniques. It follows that statistical theory can provide considerable insight into the properties, advantages, and disadvantages of different network learning methods. We review concepts and analytical results from the literatures of mathematical statistics, econometrics, systems identification, and optimization theory relevant to the analysis of learning in artificial neural networks. Because of the considerable variety of available learning procedures and necessary limitations of space, we cannot provide a comprehensive treatment. Our focus is primarily on learning procedures for feedforward networks. However, many of the concepts and issues arising in this framework are also quite broadly relevant to other network learning paradigms. In addition to providing useful insights, the material reviewed here suggests some potentially useful new training methods for artificial neural networks.
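As a minimal illustration of the statistical viewpoint, training a feedforward network by gradient descent on squared error is nonlinear least-squares estimation, equivalently maximum likelihood under Gaussian noise; the toy regression problem and network size below are invented for the sketch.

import numpy as np

rng = np.random.default_rng(0)

# Invented regression data: targets are a noisy function of scalar inputs.
x = rng.uniform(-1, 1, size=(200, 1))
y = np.sin(3 * x) + 0.1 * rng.standard_normal((200, 1))

# One-hidden-layer network; minimizing squared error makes the trained
# weights a nonlinear least-squares (Gaussian maximum-likelihood) estimator.
W1 = rng.standard_normal((1, 16)); b1 = np.zeros(16)
W2 = rng.standard_normal((16, 1)); b2 = np.zeros(1)
lr = 0.05
for step in range(2000):
    h = np.tanh(x @ W1 + b1)          # hidden layer
    pred = h @ W2 + b2                # network output
    err = pred - y
    loss = (err ** 2).mean()
    # Backpropagation is just the chain rule on the least-squares criterion.
    g_pred = 2 * err / len(x)
    g_W2 = h.T @ g_pred; g_b2 = g_pred.sum(0)
    g_h = g_pred @ W2.T
    g_z = g_h * (1 - h ** 2)
    g_W1 = x.T @ g_z; g_b1 = g_z.sum(0)
    W1 -= lr * g_W1; b1 -= lr * g_b1
    W2 -= lr * g_W2; b2 -= lr * g_b2
print(f"final MSE: {loss:.4f}")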
Neural Computation (1989) 1 (3): 295–311.
Published: 01 September 1989
Abstract
What use can the brain make of the massive flow of sensory information that occurs without any associated rewards or punishments? This question is reviewed in the light of connectionist models of unsupervised learning and some older ideas, namely the cognitive maps and working models of Tolman and Craik, and the idea that redundancy is important for understanding perception (Attneave 1954), the physiology of sensory pathways (Barlow 1959), and pattern recognition (Watanabe 1960). It is argued that (1) The redundancy of sensory messages provides the knowledge incorporated in the maps or models. (2) Some of this knowledge can be obtained by observations of mean, variance, and covariance of sensory messages, and perhaps also by a method called “minimum entropy coding.” (3) Such knowledge may be incorporated in a model of “what usually happens” with which incoming messages are automatically compared, enabling unexpected discrepancies to be immediately identified. (4) Knowledge of the sort incorporated into such a filter is a necessary prerequisite of ordinary learning, and a representation whose elements are independent makes it possible to form associations with logical functions of the elements, not just with the elements themselves.
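Point (2)'s use of means, variances, and covariances corresponds to second-order redundancy reduction, that is, decorrelation. A minimal whitening sketch on invented correlated "sensory messages" follows; minimum entropy coding would go further and address higher-order structure as well.

import numpy as np

rng = np.random.default_rng(0)

# Invented "sensory messages": two correlated Gaussian channels.
mixing = np.array([[1.0, 0.8],
                   [0.8, 1.0]])
X = rng.standard_normal((5000, 2)) @ mixing

# Estimate the mean and covariance of the messages.
mu = X.mean(axis=0)
C = np.cov(X, rowvar=False)

# Whitening removes second-order redundancy: the transformed channels
# have zero mean, unit variance, and zero covariance.
eigvals, eigvecs = np.linalg.eigh(C)
W = eigvecs @ np.diag(eigvals ** -0.5) @ eigvecs.T   # symmetric whitening
Z = (X - mu) @ W
print(np.cov(Z, rowvar=False).round(3))              # approximately the identity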
Neural Computation (1989) 1 (1): 1–38.
Published: 01 March 1989
Abstract
The performance of current speech recognition systems is far below that of humans. Neural nets offer the potential of providing massive parallelism, adaptation, and new algorithmic approaches to problems in speech recognition. Initial studies have demonstrated that multilayer networks with time delays can provide excellent discrimination between small sets of pre-segmented difficult-to-discriminate words, consonants, and vowels. Performance for these small vocabularies has often exceeded that of more conventional approaches. Physiological front ends have provided improved recognition accuracy in noise and a cochlea filter-bank that could be used in these front ends has been implemented using micro-power analog VLSI techniques. Techniques have been developed to scale networks up in size to handle larger vocabularies, to reduce training time, and to train nets with recurrent connections. Multilayer perceptron classifiers are being integrated into conventional continuous-speech recognizers. Neural net architectures have been developed to perform the computations required by vector quantizers, static pattern classifiers, and the Viterbi decoding algorithm. Further work is necessary for large-vocabulary continuous-speech problems, to develop training algorithms that progressively build internal word models, and to develop compact VLSI neural net hardware.
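Among the computations listed, Viterbi decoding is compact enough to sketch directly. The toy transition and per-frame emission scores below are invented, standing in for the frame scores a neural-network front end would supply.

import numpy as np

# Invented small HMM: log transition, initial, and per-frame emission scores.
log_trans = np.log(np.array([[0.7, 0.3],
                             [0.4, 0.6]]))
log_init = np.log(np.array([0.5, 0.5]))
log_emit = np.log(np.array([[0.9, 0.2],
                            [0.1, 0.8],
                            [0.3, 0.7]]))

n_frames, n_states = log_emit.shape
delta = log_init + log_emit[0]               # best log score ending in each state
backptr = np.zeros((n_frames, n_states), dtype=int)
for t in range(1, n_frames):
    scores = delta[:, None] + log_trans      # scores[i, j]: transition i -> j
    backptr[t] = scores.argmax(axis=0)       # best predecessor of each state
    delta = scores.max(axis=0) + log_emit[t]
# Trace back the highest-scoring state sequence.
path = [int(delta.argmax())]
for t in range(n_frames - 1, 0, -1):
    path.append(int(backptr[t][path[-1]]))
path.reverse()
print(path)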