Neural Computation (2017) 29 (9): 2293–2351.
Published: 01 September 2017
Abstract
Today digital sources supply a historically unprecedented component of human sensorimotor data, the consumption of which is correlated with poorly understood maladies such as Internet addiction disorder and Internet gaming disorder. Because both natural and digital sensorimotor data share common mathematical descriptions, one can quantify our informational sensorimotor needs using the signal processing metrics of entropy, noise, dimensionality, continuity, latency, and bandwidth. Such metrics describe in neutral terms the informational diet human brains require to self-calibrate, allowing individuals to maintain trusting relationships. With these metrics, we define the trust humans experience using the mathematical language of computational models, that is, as a primitive statistical algorithm processing finely grained sensorimotor data from neuromechanical interaction. This definition of neuromechanical trust implies that artificial sensorimotor inputs and interactions that attract low-level attention through frequent discontinuities and enhanced coherence will decalibrate a brain's representation of its world over the long term by violating the implicit statistical contract for which self-calibration evolved. Our hypersimplified mathematical understanding of human sensorimotor processing as multiscale, continuous-time vibratory interaction allows equally broad-brush descriptions of failure modes and solutions. For example, we model addiction in general as the result of homeostatic regulation gone awry in novel environments (sign reversal) and digital dependency as a sub-case in which the decalibration caused by digital sensorimotor data spurs yet more consumption of them. We predict that institutions can use these sensorimotor metrics to quantify media richness to improve employee well-being; that dyads and family-size groups will bond and heal best through low-latency, high-resolution multisensory interaction such as shared meals and reciprocated touch; and that individuals can improve sensory and sociosensory resolution through deliberate sensory reintegration practices. We conclude that we humans are the victims of our own success, our hands so skilled they fill the world with captivating things, our eyes so innocent they follow eagerly.
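A minimal sketch of the sign-reversal account of addiction described above; all names and constants here are invented for illustration and are not the authors' model:

```python
# Hypothetical sketch of "homeostatic regulation gone awry" (sign reversal).
# Names and constants are illustrative assumptions, not the authors' model.
# A regulator adjusts consumption c to pull an internal calibration state x
# back toward a set point. In the evolved environment consumption restores x
# (effect = +1); in a novel digital environment the same action decalibrates
# x (effect = -1), so the corrective loop drives consumption up, not down.

def simulate(effect, steps=200, gain=0.1, decay=0.05, setpoint=1.0):
    x, c = setpoint, 0.0
    for _ in range(steps):
        error = setpoint - x                # homeostatic error signal
        c = max(0.0, c + gain * error)      # consume more when depleted
        x += effect * 0.1 * c - decay * x   # consumption's true effect on x
    return c

print("final consumption, natural env:", round(simulate(effect=+1.0), 2))
print("final consumption, novel env:  ", round(simulate(effect=-1.0), 2))
```

With effect = +1 the loop settles at a modest steady consumption; with effect = -1 each corrective step worsens the calibration error, so consumption grows without bound.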
Neural Computation (2013) 25 (9): 2235–2264.
Published: 01 September 2013
Abstract
Current population coding methods, including weighted averaging and Bayesian estimation, are based on extrinsic representations. These require that neurons be labeled with response parameters, such as tuning curve peaks or noise distributions, which are tied to some external, world-based metric scale. Firing rates alone, without this external labeling, are insufficient to represent a variable. However, the extrinsic approach does not explain how such neural labeling is implemented. A radically different and perhaps more physiological approach is based on intrinsic representations, which have access only to firing rates. Because neurons are unlabeled, intrinsic coding represents relative, rather than absolute, values of a variable. We show that intrinsic coding has representational advantages, including invariance, categorization, and discrimination, and in certain situations it may also recover absolute stimulus values.
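A hedged sketch of the contrast, assuming toy Gaussian tuning curves; the labels in `prefs` exist only for the extrinsic decoder, and the intrinsic readout never sees them:

```python
import numpy as np

# Extrinsic vs. intrinsic readout of a small population.
# Tuning curves, noise level, and the rank-order statistic are assumptions.

rng = np.random.default_rng(0)
prefs = np.linspace(-60, 60, 9)              # extrinsic labels: tuning peaks (deg)

def rates(stim, width=25.0):
    return np.exp(-0.5 * ((stim - prefs) / width) ** 2) \
        + 0.02 * rng.standard_normal(prefs.size)

r = rates(stim=10.0)

# Extrinsic decoding (weighted averaging): requires each neuron to be
# labeled with its tuning-curve peak on a world-based metric scale.
stim_hat = np.sum(prefs * r) / np.sum(r)

# Intrinsic decoding: only the firing rates themselves are available, so the
# code expresses relative structure, e.g. the rank pattern across neurons,
# which is invariant to any monotonic rescaling of the rates.
rank_pattern = np.argsort(r)

print("extrinsic estimate (deg):", round(float(stim_hat), 1))
print("intrinsic rank pattern:  ", rank_pattern)
```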
Neural Computation (2011) 23 (12): 3001–3015.
Published: 01 December 2011
Abstract
Neuroscience folklore has it that somatotopy in human primary somatosensory cortex (SI) has two significant discontinuities: the hands and face map onto adjacent regions in SI, as do the feet and genitalia. It has been proposed that these conjunctions in SI result from coincident sources of stimulation in the fetal position, where the hands frequently touch the face, and the feet the genitalia. Computer modeling using a Hebbian variant of the self-organizing Kohonen net is consistent with this proposal. However, recent work reveals that the genital representation in SI for cutaneous sensations (as opposed to tumescence) is continuous with that of the lower trunk and thigh. This result, in conjunction with reports of separate face innervation and its earlier onset of sensory function, compared to that of the rest of the body, allows a reappraisal of homuncular organization. It is proposed that the somatosensory homunculus comprises two distinct somatotopic regions: the face representation and that of the rest of the body. Principles of self-organization do not account satisfactorily for the overall homuncular map. These results may serve to alert computational modelers that intrinsic developmental factors can override simple rules of plasticity.
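A toy Kohonen-style simulation of the fetal-position proposal; the skin sites, coactivation probabilities, and all constants are illustrative assumptions:

```python
import numpy as np

# Minimal Kohonen-net sketch of the co-stimulation idea: coincident input to
# hands and face (and feet and genitals) tends to make their map
# representations adjacent. Sites and statistics are toy assumptions.

rng = np.random.default_rng(1)
sites = ["foot", "genitals", "trunk", "hand", "face"]
n_sites, n_map = len(sites), 30
w = 0.1 * rng.random((n_map, n_sites))      # map units' weights over skin sites

def stimulus():
    x = np.zeros(n_sites)
    i = rng.integers(n_sites)
    x[i] = 1.0
    # Fetal posture: hands often touch the face, feet the genitals.
    if sites[i] == "hand" and rng.random() < 0.6:
        x[sites.index("face")] = 1.0
    if sites[i] == "foot" and rng.random() < 0.6:
        x[sites.index("genitals")] = 1.0
    return x

for _ in range(4000):
    x = stimulus()
    winner = np.argmin(np.linalg.norm(w - x, axis=1))
    h = np.exp(-(np.arange(n_map) - winner) ** 2 / (2 * 3.0 ** 2))
    w += 0.05 * h[:, None] * (x - w)        # neighborhood-weighted Hebbian pull

# After training, units preferring "hand" tend to sit beside "face" units.
print([sites[int(np.argmax(row))] for row in w])
```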
Neural Computation (2011) 23 (11): 2731–2745.
Published: 01 November 2011
Abstract
Characterizing neural spiking activity as a function of intrinsic and extrinsic factors is important in neuroscience. Point process models are valuable for capturing such information; however, the process of fully applying these models is not always obvious. A complete model application has four broad steps: specification of the model, estimation of model parameters given observed data, verification of the model using goodness of fit, and characterization of the model using confidence bounds. Of these steps, only the first three have been applied widely in the literature, so we dedicate this discussion to how the time-rescaling theorem, in combination with parametric bootstrap sampling, can be used in general to compute confidence bounds for point process models. In our first example, we use a generalized linear model of spiking propensity to demonstrate that confidence bounds derived from bootstrap simulations are consistent with those computed from closed-form analytic solutions. In our second example, we consider an adaptive point process model of hippocampal place field plasticity for which no analytical confidence bounds can be derived. We demonstrate how to simulate bootstrap samples from adaptive point process models, how to use these samples to generate confidence bounds, and how to statistically test the hypothesis that neural representations at two time points are significantly different. These examples have been designed as useful guides for performing scientific inference based on point process models.
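A minimal illustration of the last two steps, using a homogeneous Poisson model rather than the paper's GLM; the estimator and all constants are assumptions made for this sketch:

```python
import numpy as np

# Time-rescaling check plus parametric bootstrap confidence bounds,
# on the simplest possible point process model.

rng = np.random.default_rng(2)
T = 100.0                                    # observation interval (s)

def simulate(rate):
    n = rng.poisson(rate * T)
    return np.sort(rng.uniform(0.0, T, n))   # homogeneous Poisson spike times

spikes = simulate(rate=5.0)
rate_hat = spikes.size / T                   # maximum likelihood for this model

# Time-rescaling: under the fitted intensity, rescaled ISIs are Exponential(1),
# so u = 1 - exp(-z) should be Uniform(0, 1); assess with, e.g., a KS test.
z = rate_hat * np.diff(spikes)
u = 1.0 - np.exp(-z)
print("mean of rescaled u (≈ 0.5 if the model fits):", round(float(u.mean()), 3))

# Parametric bootstrap: simulate from the fitted model, refit, and read
# confidence bounds off the distribution of the re-estimates.
boot = np.array([simulate(rate_hat).size / T for _ in range(2000)])
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"rate_hat = {rate_hat:.2f} Hz, 95% bootstrap CI = ({lo:.2f}, {hi:.2f})")
```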
Neural Computation (2007) 19 (8): 1985–2003.
Published: 01 August 2007
Abstract
In a recent article, Prinz, Bucher, and Marder (2004) used a database modeling approach to address the fundamental question of whether neural systems are built with a fixed blueprint of tightly controlled parameters or in a way that lets properties vary widely from one individual to another. Here, we examine their main conclusion, that neural circuits are indeed built with widely varying parameters, in the light of our own experimental and modeling observations. We critically discuss the experimental and theoretical evidence, including the general adequacy of database approaches for questions of this kind, and conclude that the last word on this fundamental question has not yet been spoken.
Neural Computation (2003) 15 (9): 2013–2027.
Published: 01 September 2003
Abstract
Ongoing “spontaneous” electrical field potentials of assemblies of neurons in the brains of diverse animal groups differ widely in character and amplitude, without obvious explanation. There may be correlates with other measures of brain complexity, such as histological differentiation, but so far no differences are known between the EEGs of humans and other mammals, or between mammals and reptiles, amphibians, or fish, apart from amplitude. We defend the proposition that further search for descriptors or statistical, probably nonlinear, features of the time series will reveal consistent differences. If so, we have thus far missed major features of the natural history of EEGs, just as we have relatively neglected identifying the features of brain physiology relevant to the evolution of complexity through the major grades of phyla, classes, and orders.
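One example of the kind of nonlinear time-series descriptor such a search might use is sample entropy; the implementation and parameter choices below are conventional assumptions, not taken from the article:

```python
import numpy as np

# Sample entropy of an EEG-like signal: a standard nonlinear descriptor.
# Parameters (m = 2, r = 0.2 * SD) are conventional choices.

def sample_entropy(x, m=2, r_frac=0.2):
    x = np.asarray(x, dtype=float)
    r = r_frac * x.std()
    def count(mm):
        templates = np.array([x[i:i + mm] for i in range(len(x) - mm)])
        d = np.max(np.abs(templates[:, None] - templates[None, :]), axis=2)
        n = len(templates)
        return (np.sum(d < r) - n) / 2        # matched pairs, self-matches removed
    B, A = count(m), count(m + 1)
    return -np.log(A / B) if A > 0 and B > 0 else np.inf

rng = np.random.default_rng(3)
t = np.arange(1000) / 250.0                   # 4 s at a 250 Hz sampling rate
alpha_like = np.sin(2 * np.pi * 10 * t) + 0.3 * rng.standard_normal(t.size)
print("SampEn:", round(sample_entropy(alpha_like), 3))
```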
Neural Computation (2002) 14 (3): 473–491.
Published: 01 March 2002
Abstract
Many observers see geometric visual hallucinations after taking hallucinogens such as LSD, cannabis, mescaline, or psilocybin; on viewing bright flickering lights; on waking up or falling asleep; in “near-death” experiences; and in many other syndromes. Klüver organized the images into four groups called form constants: (I) tunnels and funnels, (II) spirals, (III) lattices, including honeycombs and triangles, and (IV) cobwebs. In most cases, the images are seen in both eyes and move with them. We interpret this to mean that they are generated in the brain. Here, we summarize a theory of their origin in visual cortex (area V1), based on the assumption that the form of the retino-cortical map and the architecture of V1 determine their geometry. (A much longer and more detailed mathematical version has been published in Philosophical Transactions of the Royal Society B, 356 [2001].) We model V1 as the continuum limit of a lattice of interconnected hypercolumns, each comprising a number of interconnected iso-orientation columns. Based on anatomical evidence, we assume that the lateral connectivity between hypercolumns exhibits symmetries, rendering it invariant under the action of the Euclidean group E(2), composed of reflections and translations in the plane, and a (novel) shift-twist action. Using this symmetry, we show that the various patterns of activity that spontaneously emerge when V1's spatially uniform resting state becomes unstable correspond to the form constants when transformed to the visual field using the retino-cortical map. The results are sensitive to the detailed specification of the lateral connectivity and suggest that the cortical mechanisms that generate geometric visual hallucinations are closely related to those used to process edges, contours, surfaces, and textures.
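For orientation, the continuum dynamics underlying such an account can be written in the standard Wilson–Cowan form on which the theory builds (notation conventional; the precise lateral-connectivity kernel w and parameters are specified in the Philosophical Transactions version):

```latex
\frac{\partial a(\mathbf{r},\phi,t)}{\partial t}
  = -\alpha\, a(\mathbf{r},\phi,t)
  + \mu \int_{\mathbb{R}^{2}} \int_{0}^{\pi}
      w(\mathbf{r},\phi \mid \mathbf{r}',\phi')\,
      \sigma\!\bigl[a(\mathbf{r}',\phi',t)\bigr]\,
      \frac{d\phi'}{\pi}\, d\mathbf{r}'
```

Here a(r, φ, t) is the activity of the iso-orientation column with preference φ at cortical position r, σ is a smooth sigmoidal firing-rate function, and patterned states bifurcate from the spatially uniform resting state as the coupling strength μ passes a critical value.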
Neural Computation (2002) 14 (1): 1–19.
Published: 01 January 2002
Abstract
How different factors combine to determine the time course of the basic element of fast glutamate-mediated excitatory postsynaptic currents (mEPSCs) in the central nervous system has been a focus of interest for neurobiologists for some years. In spite of intensive investigation, these mechanisms are not well understood. In this review, the basic hypotheses are summarized, and a new hypothesis is proposed, which holds that desensitization of AMPA receptors plays a major role in shaping the time course of fast mEPSCs. According to the new hypothesis, desensitization shortens the time course of mEPSCs largely by reducing the buffering of glutamate molecules by AMPA receptors. The hypothesis accounts for numerous findings on fast mEPSCs and is expected to be equally fruitful as a framework for further experimental and theoretical investigation.
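A toy three-state kinetic scheme (closed, open, desensitized) makes the direct shortening effect concrete; note that it does not capture the glutamate-buffering mechanism the review proposes, and all rates and the transmitter transient are illustrative assumptions:

```python
import numpy as np

# Toy AMPA receptor scheme: C (closed/unbound), O (open), D (desensitized),
# driven by a brief free-glutamate transient, integrated with Euler steps.

def mepsc(k_desens, dt=1e-5, t_end=0.010):
    k_open, k_close, glu_tau = 1e4, 500.0, 0.3e-3
    C, O, D = 1.0, 0.0, 0.0
    trace = np.empty(int(t_end / dt))
    for i in range(trace.size):
        glu = np.exp(-i * dt / glu_tau)        # free-glutamate transient
        dC = k_close * O - k_open * glu * C
        dO = k_open * glu * C - (k_close + k_desens) * O
        dD = k_desens * O
        C, O, D = C + dt * dC, O + dt * dO, D + dt * dD
        trace[i] = O                           # open probability ~ mEPSC current
    return trace

def half_decay_index(x):
    peak = int(np.argmax(x))
    return peak + int(np.argmax(x[peak:] < x[peak] / 2))

with_desens = mepsc(k_desens=2000.0)
without     = mepsc(k_desens=0.0)              # desensitization blocked
print("desensitization shortens the mEPSC:",
      half_decay_index(with_desens) < half_decay_index(without))
```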
Neural Computation (2001) 13 (6): 1199–1241.
Published: 01 June 2001
Abstract
Computational models in cognitive neuroscience should ideally use biological properties and powerful computational principles to produce behavior consistent with psychological findings. Error-driven backpropagation is computationally powerful and has proven useful for modeling a range of psychological data but is not biologically plausible. Several approaches to implementing backpropagation in a biologically plausible fashion converge on the idea of using bidirectional activation propagation in interactive networks to convey error signals. This article demonstrates two main points about these error-driven interactive networks: (1) they generalize poorly due to attractor dynamics that interfere with the network's ability to produce novel combinatorial representations systematically in response to novel inputs, and (2) this generalization problem can be remedied by adding two widely used mechanistic principles, inhibitory competition and Hebbian learning, that can be independently motivated for a variety of biological, psychological, and computational reasons. Simulations using the Leabra algorithm, which combines the biologically plausible, error-driven generalized recirculation (GeneRec) learning algorithm with inhibitory competition and Hebbian learning, show that these mechanisms can result in good generalization in interactive networks. These results support the general conclusion that cognitive neuroscience models that incorporate the core mechanistic principles of interactivity, inhibitory competition, and error-driven and Hebbian learning satisfy a wider range of biological, psychological, and computational constraints than models employing a subset of these principles.
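The two-phase, error-driven update at the heart of this family can be sketched in its contrastive Hebbian form (GeneRec reduces to this under symmetry and midpoint approximations); the layer sizes and activities below are placeholders, not a full Leabra model:

```python
import numpy as np

# Contrastive Hebbian sketch of GeneRec-style error-driven learning.
# x is a sending layer, y a receiving layer; activations come from letting
# the network settle in a minus phase (input only) and a plus phase
# (input plus clamped target).

def chl_update(W, x_minus, y_minus, x_plus, y_plus, lrate=0.01):
    # The difference of phase-wise coproducts carries the error signal,
    # conveyed by bidirectional activation flow instead of backpropagation.
    return W + lrate * (np.outer(x_plus, y_plus) - np.outer(x_minus, y_minus))

rng = np.random.default_rng(4)
W = 0.1 * rng.standard_normal((5, 3))
x_m, y_m = rng.random(5), rng.random(3)      # minus phase: network's expectation
x_p, y_p = x_m, np.array([1.0, 0.0, 0.0])    # plus phase: clamped outcome
W = chl_update(W, x_m, y_m, x_p, y_p)
```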
Neural Computation (2000) 12 (12): 2777–2795.
Published: 01 December 2000
Abstract
The use of sets of spatiotemporal cortical potential distributions (CPDs) as the basis for cognitive information processing results in a very large space of cognitive elements with natural metrics. Results obtained from current source density (CSD) analysis suggest that in the CPD picture, action potentials may make only a relatively minor contribution to the brain's code. To establish whether two CPDs are close, we consider standard metrics in spaces of continuous functions, which may be employed to ascertain whether two stimuli will be identified as the same. We consider the correspondence between CPDs and the electrical activity within brain regions, including not only action potentials but all postsynaptic potentials (PSPs). We examine the possibility of using the CSD approach to find potential distributions within a descriptive framework in which precise sets of times are ascribed to the occurrence of action potentials and PSPs. Using metrics in the multidimensional space of paths of collections of point processes, we show that closeness of CPDs is implied by closeness of sets of spike times and PSP times if a certain metric is used, but not others. We also set forth a dynamical model consisting of a system of reaction-diffusion equations for ionic concentrations coupled with nerve membrane potential equations and active transport systems. Making the approximation of a descriptive approach, the correspondence between sets of spike times and PSP times and CPDs is obtained as with the CSD method. However, since it is not possible to ascribe precise times to the occurrence of PSPs and action potentials, the descriptive approach cannot describe the configuration of electrical activity in cortical regions accurately. We also discuss how the CPD framework relates to the binding problem and submillisecond timing.
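The "standard metrics in spaces of continuous functions" can be illustrated by the uniform (sup-norm) distance between two CPDs, with the domain symbols assumed rather than taken from the paper:

```latex
d_{\infty}(\Phi_1,\Phi_2)
  = \sup_{(\mathbf{x},\,t)\in\Omega\times[0,T]}
      \bigl|\Phi_1(\mathbf{x},t)-\Phi_2(\mathbf{x},t)\bigr|
```

Two stimuli would then be identified as the same whenever this distance falls below some criterion ε.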
Neural Computation (1999) 11 (6): 1261–1280.
Published: 15 August 1999
Abstract
When the nervous system is presented with multiple simultaneous inputs of some variable, such as wavelength or disparity, these inputs can be combined to give rise to qualitatively new percepts that cannot be produced by any single input value. For example, there is no single wavelength that appears white. Many models of decoding neural population codes have problems handling multiple inputs, either attempting to extract a single value of the input parameter or, in some cases, registering the presence of multiple inputs without synthesizing them into something new. These examples raise a more general issue regarding the interpretation of population codes. We propose that population decoding involves not the extraction of specific values of the physical inputs, but rather a transformation from the input space to some abstract representational space that is not simply related to physical parameters. As a specific example, a four-layer network is presented that implements a transformation from wavelength to a high-level hue-saturation color space.
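A small demonstration of the failure mode the abstract opens with, assuming illustrative Gaussian wavelength tuning (not the paper's four-layer network):

```python
import numpy as np

# A weighted-average readout collapses two simultaneous wavelengths into one
# intermediate value rather than a qualitatively new percept.

prefs = np.linspace(400, 700, 31)                 # preferred wavelengths (nm)

def population(wavelength, width=40.0):
    return np.exp(-0.5 * ((wavelength - prefs) / width) ** 2)

r = population(450.0) + population(650.0)         # two simultaneous inputs
print("weighted average:",
      round(float(np.sum(prefs * r) / np.sum(r)), 1), "nm")
# ~550 nm: the decoder reports a single intermediate wavelength, although no
# single wavelength reproduces the actual two-wavelength stimulus (compare:
# no single wavelength appears white).
```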
Neural Computation (1998) 10 (7): 1601–1638.
Published: 01 October 1998
Abstract
We review the pros and cons of analog and digital computation. We propose that computation that is most efficient in its use of resources is neither analog nor digital but a mixture of the two forms. For maximum efficiency, the information and information-processing resources of the hybrid form must be distributed over many wires, with an optimal signal-to-noise ratio per wire. Our results suggest that the brain likely computes in a hybrid fashion and that an underappreciated and important reason for the efficiency of the human brain, which consumes only 12 W, is the hybrid and distributed nature of its architecture.
Neural Computation (1995) 7 (1): 1–12.
Published: 01 January 1995
Neural Computation (1992) 4 (1): 1–58.
Published: 01 January 1992
Abstract
Feedforward neural networks trained by error backpropagation are examples of nonparametric regression estimators. We present a tutorial on nonparametric inference and its relation to neural networks, and we use the statistical viewpoint to highlight strengths and weaknesses of neural models. We illustrate the main points with some recognition experiments involving artificial data as well as handwritten numerals. By way of conclusion, we suggest that current-generation feedforward neural networks are largely inadequate for difficult problems in machine perception and machine learning, regardless of parallel-versus-serial hardware or other implementation issues. Furthermore, we suggest that the fundamental challenges in neural modeling are about representation rather than learning per se. This last point is supported by additional experiments with handwritten numerals.
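The opening claim can be stated compactly; in conventional notation (not the article's), a one-hidden-layer network defines a family of regression functions over which backpropagation performs least-squares estimation:

```latex
\hat f(\mathbf{x};\theta)=\sum_{k=1}^{K} c_k\,\sigma\!\bigl(\mathbf{a}_k^{\top}\mathbf{x}+b_k\bigr),
\qquad
\hat\theta=\arg\min_{\theta}\,\frac{1}{n}\sum_{i=1}^{n}\bigl(y_i-\hat f(\mathbf{x}_i;\theta)\bigr)^{2}
```

Letting the number of hidden units K grow with the sample size n is what makes the estimator nonparametric.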
Neural Computation (1991) 3 (4): 461–483.
Published: 01 December 1991
Abstract
Many neural network classifiers provide outputs which estimate Bayesian a posteriori probabilities. When the estimation is accurate, network outputs can be treated as probabilities and sum to one. Simple proofs show that Bayesian probabilities are estimated when desired network outputs are 1 of M (one output unity, all others zero) and a squared-error or cross-entropy cost function is used. Results of Monte Carlo simulations performed using multilayer perceptron (MLP) networks trained with backpropagation, radial basis function (RBF) networks, and high-order polynomial networks graphically demonstrate that network outputs provide good estimates of Bayesian probabilities. Estimation accuracy depends on network complexity, the amount of training data, and the degree to which training data reflect true likelihood distributions and a priori class probabilities. Interpretation of network outputs as Bayesian probabilities allows outputs from multiple networks to be combined for higher level decision making, simplifies creation of rejection thresholds, makes it possible to compensate for differences between pattern class probabilities in training and test data, allows outputs to be used to minimize alternative risk functions, and suggests alternative measures of network performance.
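The "simple proof" for the squared-error case fits in one line: with 1-of-M targets, the function minimizing the expected squared error is the conditional expectation of the target, which for 0/1 targets is the posterior class probability (conventional notation, not the article's):

```latex
y_c^{*}(\mathbf{x})
  = \arg\min_{y_c}\,\mathbb{E}\bigl[(y_c(\mathbf{x})-d_c)^{2}\bigr]
  = \mathbb{E}[\,d_c \mid \mathbf{x}\,]
  = \Pr(c \mid \mathbf{x}),
\qquad d_c\in\{0,1\}
```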
Neural Computation (1991) 3 (3): 293–309.
Published: 01 September 1991
Abstract
Representing and manipulating context information is one of the hardest problems in natural language processing. This paper proposes a method for representing some context information so that the correct meaning for a word in a sentence can be selected. The approach is primarily based on work by Waltz and Pollack (1985, 1984), who emphasized neurally plausible systems. By contrast, this paper focuses on computationally feasible methods applicable to full-scale natural language processing systems. There are two key elements: a collection of context vectors defined for every word used by a natural language processing system, and a context algorithm that computes a dynamic context vector at any position in a body of text. Once the dynamic context vector has been computed, it is easy to choose among competing meanings for a word. This choice of definitions is essentially a neural network computation, and neural network learning algorithms should be able to improve such choices. Although context vectors do not represent all context information, their use should improve those full-scale systems that have avoided context as being too difficult to deal with. Good candidates for full-scale context vector implementations are machine translation systems and Japanese word processors. A main goal of this paper is to encourage such large-scale implementations and tests of context vector approaches. A variety of interesting directions for research in natural language processing and machine learning will be possible once a full set of context vectors has been created. In particular, the development of more powerful context algorithms will be an important topic for future research.
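A hedged sketch of the two key elements, with hand-built feature vectors, an invented decay constant, and toy word senses standing in for a full-scale system:

```python
import numpy as np

# Static context vectors per word, a dynamic context vector accumulated over
# the text, and sense selection by similarity. All vectors are toy assumptions.

def unit(v):
    v = np.asarray(v, float)
    return v / np.linalg.norm(v)

# Toy context vectors on hand-built features (finance, river, water, money).
context = {
    "money": unit([1, 0, 0, 1]), "deposit": unit([1, 0, 0, 1]),
    "river": unit([0, 1, 1, 0]), "water":  unit([0, 1, 1, 0]),
}
senses = {"bank/financial": unit([1, 0, 0, 1]),
          "bank/riverside": unit([0, 1, 1, 0])}

def dynamic_context(words, decay=0.5):
    ctx = np.zeros(4)
    for w in words:                        # running, decayed combination
        ctx = decay * ctx + context.get(w, np.zeros(4))
    return unit(ctx)

ctx = dynamic_context(["water", "river"])
best = max(senses, key=lambda s: float(senses[s] @ ctx))
print("chosen sense for 'bank':", best)    # -> bank/riverside
```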