Eytan Ruppin
1-13 of 13
Neural Computation (2007) 19 (7): 1939–1961.
Published: 01 July 2007
Abstract
We present and study the contribution-selection algorithm (CSA), a novel algorithm for feature selection. The algorithm is based on the multi-perturbation Shapley analysis (MSA), a framework that relies on game theory to estimate usefulness. The algorithm iteratively estimates the usefulness of features and selects them accordingly, using either forward selection or backward elimination. It can optimize various performance measures over unseen data, such as accuracy, balanced error rate, and area under the receiver operating characteristic (ROC) curve. Empirical comparison with several other existing feature selection methods shows that the backward elimination variant of CSA leads to the most accurate classification results on an array of data sets.
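A minimal sketch of the backward-elimination CSA variant, assuming a scikit-learn-style classifier and approximating each feature's Shapley-style contribution to validation accuracy with randomly sampled feature orderings; the function names, permutation count, and elimination schedule below are illustrative, not the authors' implementation.

```python
import numpy as np
from sklearn.base import clone
from sklearn.linear_model import LogisticRegression

def shapley_contributions(est, X_tr, y_tr, X_va, y_va, features, n_perm=20, seed=0):
    """Approximate each feature's contribution to validation accuracy as its
    average marginal gain over randomly sampled feature orderings."""
    rng = np.random.default_rng(seed)
    contrib = {f: 0.0 for f in features}
    for _ in range(n_perm):
        used, prev_score = [], 0.0
        for f in rng.permutation(features):
            used.append(int(f))
            score = clone(est).fit(X_tr[:, used], y_tr).score(X_va[:, used], y_va)
            contrib[int(f)] += score - prev_score
            prev_score = score
    return {f: c / n_perm for f, c in contrib.items()}

def csa_backward(est, X_tr, y_tr, X_va, y_va, n_keep, drop_per_iter=1):
    """Backward-elimination variant: repeatedly drop the features with the
    smallest estimated contribution until n_keep features remain."""
    features = list(range(X_tr.shape[1]))
    while len(features) > n_keep:
        contrib = shapley_contributions(est, X_tr, y_tr, X_va, y_va, features)
        for f in sorted(features, key=contrib.get)[:drop_per_iter]:
            features.remove(f)
    return features

# Tiny synthetic demonstration: only features 0 and 1 carry label information.
rng = np.random.default_rng(1)
X = rng.standard_normal((300, 8))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
keep = csa_backward(LogisticRegression(max_iter=200), X[:200], y[:200], X[200:], y[200:], n_keep=2)
print(keep)  # likely recovers the two informative features
```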
Neural Computation (2006) 18 (1): 119–142.
Published: 01 January 2006
Abstract
This work presents a novel study of the notion of facial attractiveness in a machine learning context. To this end, we collected human beauty ratings for data sets of facial images and used various techniques for learning the attractiveness of a face. The trained predictor achieves a significant correlation of 0.65 with the average human ratings. The results clearly show that facial beauty is a universal concept that a machine can learn. Analysis of the accuracy of the beauty prediction machine as a function of the size of the training data indicates that a machine producing human-like attractiveness ratings could be obtained given a moderately larger data set.
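A hedged illustration of the evaluation setup the abstract describes: train a regressor on per-face feature vectors and report the Pearson correlation between cross-validated predictions and the averaged human ratings. The SVR model, feature representation, and synthetic stand-in data below are placeholders, not the paper's pipeline.

```python
import numpy as np
from scipy.stats import pearsonr
from sklearn.model_selection import cross_val_predict
from sklearn.svm import SVR

def attractiveness_correlation(X, ratings):
    """Correlation between cross-validated machine predictions and the mean
    human rating per face. X: (n_faces, n_features); ratings: (n_raters, n_faces)."""
    y = ratings.mean(axis=0)                     # average human rating per face
    pred = cross_val_predict(SVR(kernel="rbf"), X, y, cv=5)
    r, _ = pearsonr(pred, y)
    return r

# Synthetic stand-in data just to show the expected shapes (real inputs would be
# facial feature vectors and human beauty scores on some fixed scale).
rng = np.random.default_rng(0)
X = rng.standard_normal((100, 12))
ratings = rng.uniform(1, 7, size=(20, 100))
print(round(attractiveness_correlation(X, ratings), 2))
```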
Neural Computation (2004) 16 (9): 1887–1915.
Published: 01 September 2004
Abstract
This letter presents the multi-perturbation Shapley value analysis (MSA), an axiomatic, scalable, and rigorous method for deducing causal function localization from multi-perturbation data. The MSA, based on fundamental concepts from game theory, accurately quantifies the contributions of network elements and their interactions, overcoming several shortcomings of previous function localization approaches. Its successful operation is demonstrated in the analysis of both a neurophysiological model and reversible deactivation data. The MSA has a wide range of potential applications, including the analysis of reversible deactivation experiments, neuronal laser ablations, and transcranial magnetic stimulation “virtual lesions,” as well as providing insight into the inner workings of computational models of neurophysiological systems.
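The sketch below shows one standard way to estimate Shapley values by permutation sampling, assuming a `performance` function that can score any set of intact elements; it illustrates the game-theoretic idea only and is not the paper's MSA estimator.

```python
import numpy as np

def msa_shapley_estimate(performance, n_elements, n_perm=500, seed=0):
    """Permutation-sampling estimate of each element's Shapley value: its
    average marginal contribution to performance(intact_elements)."""
    rng = np.random.default_rng(seed)
    values = np.zeros(n_elements)
    for _ in range(n_perm):
        intact, prev = set(), performance(set())
        for e in rng.permutation(n_elements):
            intact.add(int(e))
            cur = performance(intact)
            values[int(e)] += cur - prev
            prev = cur
    return values / n_perm

def toy_performance(intact):
    """Toy system: element 0 is essential; elements 1 and 2 are redundant backups."""
    return float(0 in intact) * (0.6 + 0.4 * float(1 in intact or 2 in intact))

print(msa_shapley_estimate(toy_performance, 3, n_perm=300).round(2))
```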
Neural Computation (2003) 15 (4): 885–913.
Published: 01 April 2003
Abstract
This article presents a general approach for employing lesion analysis to address the fundamental challenge of localizing functions in a neural system. We describe functional contribution analysis (FCA), which assigns contribution values to the elements of the network such that the ability to predict the network's performance in response to multilesions is maximized. The approach is thoroughly examined on neurocontroller networks of evolved autonomous agents. The FCA portrays a stable set of neuronal contributions and accurate multilesion predictions that are significantly better than those obtained based on the classical single lesion approach. It is also used for a detailed synaptic analysis of the neurocontroller connectivity network, delineating its main functional backbone. The FCA provides a quantitative way of measuring how the network functions are localized and distributed among its elements. Our results question the adequacy of the classical single lesion analysis traditionally used in neuroscience and show that using lesioning experiments to decipher even simple neuronal systems requires a more rigorous multilesion analysis.
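A deliberately simplified stand-in for the fitting idea behind FCA: given binary multilesion configurations and measured performances, fit per-element contribution values so that the summed contributions of the intact elements predict performance. The paper's FCA trains a (possibly nonlinear) performance predictor; the linear least-squares fit and toy data below are illustrative assumptions.

```python
import numpy as np

def fit_contributions(lesion_masks, performances):
    """Least-squares contribution values c such that the performance of a
    multilesion configuration is approximated by the summed contributions
    of the elements left intact.
    lesion_masks: (n_configs, n_elements), 1.0 = element intact
    performances: (n_configs,) measured performance per configuration."""
    c, *_ = np.linalg.lstsq(lesion_masks, performances, rcond=None)
    return c

# Toy multilesion data generated from known contributions plus noise.
rng = np.random.default_rng(1)
masks = rng.integers(0, 2, size=(200, 4)).astype(float)
true_c = np.array([0.5, 0.3, 0.2, 0.0])
perf = masks @ true_c + 0.01 * rng.standard_normal(200)
print(fit_contributions(masks, perf).round(2))   # should be close to true_c
```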
Neural Computation (2001) 13 (4): 817–840.
Published: 01 April 2001
Abstract
In this article we revisit the classical neuroscience paradigm of Hebbian learning. We find that it is difficult to achieve effective associative memory storage by Hebbian synaptic learning, since it requires either network-level information at the synaptic level or sparse coding. Effective learning can nevertheless be achieved, even with nonsparse patterns, by a neuronal process that maintains a zero sum of the incoming synaptic efficacies. This weight correction improves the memory capacity of associative networks from an essentially bounded one to a memory capacity that scales linearly with network size. It also enables the effective storage of patterns with multiple levels of activity within a single network. Such neuronal weight correction can be successfully carried out by activity-dependent homeostasis of the neuron's synaptic efficacies, which has recently been observed in cortical tissue. Thus, our findings suggest that associative learning by Hebbian synaptic plasticity should be accompanied by continuously operating, neuronally driven regulatory processes in the brain.
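A minimal numerical sketch of the weight correction described above, assuming plain Hebbian outer-product storage of {0, 1} patterns; the correction shifts each neuron's incoming efficacies so that they sum to zero. This illustrates the idea only and is not the paper's model.

```python
import numpy as np

def hebbian_with_zero_sum_correction(patterns):
    """Plain Hebbian (outer-product) storage of {0, 1} patterns followed by a
    neuronal correction: each postsynaptic neuron (row) shifts its incoming
    efficacies so that they sum to zero."""
    n_patterns, n = patterns.shape
    W = patterns.T @ patterns / n                 # plain Hebbian rule
    np.fill_diagonal(W, 0.0)
    W -= W.sum(axis=1, keepdims=True) / (n - 1)   # zero-sum correction per neuron
    np.fill_diagonal(W, 0.0)                      # no self-connections
    return W

pats = np.random.default_rng(0).integers(0, 2, size=(20, 100)).astype(float)
W = hebbian_with_zero_sum_correction(pats)
print(float(np.abs(W.sum(axis=1)).max()))         # ~0: incoming efficacies sum to zero
```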
Neural Computation (2001) 13 (3): 691–716.
Published: 01 March 2001
Abstract
Using evolutionary simulations, we develop autonomous agents controlled by artificial neural networks (ANNs). In simple lifelike tasks of foraging and navigation, high performance levels are attained by agents equipped with fully recurrent ANN controllers. In a set of experiments sharing the same behavioral task but differing in the sensory input available to the agents, we find a common structure of a command neuron switching the dynamics of the network between radically different behavioral modes. When sensory position information is available, the command neuron reflects a map of the environment, acting as a cell sensitive to the location and orientation of the agent. When such information is unavailable, the command neuron's activity is based on a spontaneously evolving short-term memory mechanism, which underlies its apparent place-sensitive activity. A two-parameter stochastic model for this memory mechanism is proposed. We show that the parameter values emerging from the evolutionary simulations are near optimal; evolution takes advantage of seemingly harmful features of the environment to maximize the agent's foraging efficiency. The accessibility of evolved ANNs to detailed inspection, together with the resemblance of some of the results to known findings from neurobiology, makes evolved ANNs an excellent candidate model for studying the structure-function relationship in complex nervous systems.
Neural Computation (1999) 11 (8): 2061–2080.
Published: 15 November 1999
Abstract
Human and animal studies show that mammalian brains undergo massive synaptic pruning during childhood, losing about half of the synapses by puberty. We have previously shown that maintaining the network performance while synapses are deleted requires that synapses be properly modified and pruned, with the weaker synapses removed. We now show that neuronal regulation, a mechanism recently observed to maintain the average neuronal input field of a postsynaptic neuron, results in a weight-dependent synaptic modification. Under the correct range of the degradation dimension and synaptic upper bound, neuronal regulation removes the weaker synapses and judiciously modifies the remaining synapses. By deriving optimal synaptic modification functions in an excitatory-inhibitory network, we prove that neuronal regulation implements near-optimal synaptic modification and maintains the performance of a network undergoing massive synaptic pruning. These findings support the possibility that neuronal regulation complements the action of Hebbian synaptic changes in the self-organization of the developing brain.
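A rough sketch, with made-up parameters, of how stochastic degradation plus multiplicative neuronal regulation and a synaptic upper bound can polarize efficacies, pruning weak synapses while preserving strong ones; the exact degradation law and parameter ranges analyzed in the paper differ.

```python
import numpy as np

def regulate_and_prune(W, steps=200, degradation=0.05, dim=0.5, upper_bound=1.0, seed=0):
    """Synapses degrade stochastically (with a power-law 'degradation dimension'
    dim), each postsynaptic neuron then multiplicatively rescales its incoming
    efficacies toward its original input field, and efficacies are bounded in
    [0, upper_bound]. Weak synapses lose this competition and drift to zero."""
    rng = np.random.default_rng(seed)
    W = np.clip(W.astype(float).copy(), 0.0, upper_bound)
    target_field = W.sum(axis=1)                              # initial input field per neuron
    for _ in range(steps):
        W = np.clip(W - degradation * W**dim * rng.random(W.shape), 0.0, None)
        scale = target_field / np.maximum(W.sum(axis=1), 1e-12)
        W = np.clip(W * scale[:, None], 0.0, upper_bound)     # regulation + synaptic bound
    return W

W0 = np.random.default_rng(1).random((100, 100))
Wp = regulate_and_prune(W0)
print(round(float((Wp < 1e-3).mean()), 2), "fraction of synapses effectively pruned")
```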
Neural Computation (1999) 11 (7): 1717–1737.
Published: 01 October 1999
Abstract
Recent imaging studies suggest that object knowledge is stored in the brain as a distributed network of many cortical areas. Motivated by these observations, we study a multimodular associative memory network, whose functional goal is to store patterns with different coding levels—patterns that vary in the number of modules in which they are encoded. We show that in order to accomplish this task, synaptic inputs should be segregated into intramodular projections and intermodular projections, with the latter undergoing additional nonlinear dendritic processing. This segregation makes sense anatomically if the intermodular projections represent distal synaptic connections on apical dendrites. It is then straightforward to show that memories encoded in more modules are more resilient to focal afferent damage. Further hierarchical segregation of intermodular connections on the dendritic tree improves this resilience, allowing memory retrieval from input to just one of the modules in which it is encoded.
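A small hypothetical sketch of the segregation the abstract argues for: a module's input field sums intramodular afferents linearly, while each intermodular projection first passes through a dendritic nonlinearity. The choice of `tanh` and the random wiring are assumptions for illustration only.

```python
import numpy as np

def module_field(s_own, s_other_modules, W_intra, W_inter_list, g=np.tanh):
    """Input field of one module's neurons: intramodular afferents are summed
    linearly (proximal synapses), while each intermodular projection first
    passes through a dendritic nonlinearity g (distal synapses)."""
    intra = W_intra @ s_own
    inter = sum(g(W @ s) for W, s in zip(W_inter_list, s_other_modules))
    return intra + inter

rng = np.random.default_rng(0)
s_own = rng.choice([0.0, 1.0], size=50)
s_others = [rng.choice([0.0, 1.0], size=50) for _ in range(2)]
W_intra = rng.standard_normal((50, 50)) / 50
W_inter_list = [rng.standard_normal((50, 50)) / 50 for _ in range(2)]
print(module_field(s_own, s_others, W_intra, W_inter_list).shape)
```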
Neural Computation (1998) 10 (7): 1759–1777.
Published: 01 October 1998
Abstract
Research with humans and primates shows that the developmental course of the brain involves synaptic overgrowth followed by marked selective pruning. Previous explanations have suggested that this intriguing, seemingly wasteful phenomenon is utilized to remove “erroneous” synapses. We prove that this interpretation is wrong if synapses are Hebbian. Under limited metabolic energy resources restricting the amount and strength of synapses, we show that memory performance is maximized if synapses are first overgrown and then pruned following optimal “minimal-value” deletion. This optimal strategy leads to interesting insights concerning childhood amnesia.
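A toy Hopfield-style comparison of the two deletion strategies the abstract contrasts, pruning the smallest-magnitude synapses ("minimal-value" deletion) versus deleting the same number at random; the network, memory load, and update rule are simplifications, not the paper's model.

```python
import numpy as np

def store(patterns):
    """Hebbian storage of ±1 patterns in a Hopfield-style weight matrix."""
    n = patterns.shape[1]
    W = patterns.T @ patterns / n
    np.fill_diagonal(W, 0.0)
    return W

def prune(W, keep_frac, minimal_value=True, seed=0):
    """Keep only a fraction of the synapses: either those with the largest
    |efficacy| (minimal-value deletion) or a random subset of the same size."""
    if minimal_value:
        thresh = np.quantile(np.abs(W)[W != 0], 1.0 - keep_frac)
        mask = np.abs(W) >= thresh
    else:
        mask = np.random.default_rng(seed).random(W.shape) < keep_frac
    return W * mask

def recall_overlap(W, pattern, flips=10, steps=20):
    """Overlap with the stored pattern after synchronous sign updates from a corrupted cue."""
    s = pattern.copy()
    s[np.random.default_rng(0).choice(len(s), flips, replace=False)] *= -1
    for _ in range(steps):
        s = np.sign(W @ s + 1e-12)
    return float(s @ pattern) / len(pattern)

rng = np.random.default_rng(2)
pats = rng.choice([-1.0, 1.0], size=(15, 200))
W = store(pats)
for label, minimal in [("minimal-value deletion", True), ("random deletion", False)]:
    Wp = prune(W, keep_frac=0.4, minimal_value=minimal)
    print(label, round(float(np.mean([recall_overlap(Wp, p) for p in pats])), 2))
```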
Neural Computation (1998) 10 (2): 451–465.
Published: 15 February 1998
Abstract
Synaptic runaway denotes the formation of erroneous synapses and premature functional decline accompanying activity-dependent learning in neural networks. This work studies synaptic runaway both analytically and numerically in binary-firing associative memory networks. It turns out that synaptic runaway is of fairly moderate magnitude in these networks under normal, baseline conditions. However, it may become extensive if the threshold for Hebbian learning is reduced. These findings are combined with recent evidence for arrested N-methyl-D-aspartate (NMDA) maturation in schizophrenics, to formulate a new hypothesis concerning the pathogenesis of schizophrenic psychotic symptoms in neural terms.
Neural Computation (1998) 10 (1): 1–18.
Published: 01 January 1998
Abstract
Since their conception half a century ago, Hebbian cell assemblies have become a basic concept in the neurosciences, and the idea that learning takes place through synaptic modifications has been accepted as a fundamental paradigm. As synapses undergo continuous metabolic turnover, adopting the stance that memories are engraved in the synaptic matrix raises a fundamental problem: How can memories be maintained for very long time periods? We present a novel solution to this long-standing question, based on biological evidence of neuronal regulation mechanisms that act to maintain neuronal activity. Our mechanism is developed within the framework of a neural model of associative memory. It is operative in conjunction with random activation of the memory system and is able to counterbalance degradation of synaptic weights and normalize the basins of attraction of all memories. Over long time periods, when the variance of the degradation process becomes important, the memory system stabilizes if its synapses are appropriately bounded. Thus, the remnant memory system is obtained by a dynamic process of synaptic selection and growth driven by neuronal regulatory mechanisms. Our model is a specific realization of dynamic stabilization of neural circuitry, which is often assumed to take place during sleep.
Neural Computation (1996) 8 (6): 1227–1243.
Published: 01 August 1996
Abstract
In the framework of an associative memory model, we study the interplay between synaptic deletion and compensation, and memory deterioration, a clinical hallmark of Alzheimer's disease. Our study is motivated by experimental evidence that there are regulatory mechanisms that take part in the homeostasis of neuronal activity and act on the neuronal level. We show that following synaptic deletion, synaptic compensation can be carried out efficiently by a local, dynamic mechanism, where each neuron maintains the profile of its incoming post-synaptic current. Our results open up the possibility that the primary factor in the pathogenesis of cognitive deficiencies in Alzheimer's disease (AD) is the failure of local neuronal regulatory mechanisms. Allowing for neuronal death, we observe two pathological routes in AD, leading to different correlations between the levels of structural damage and functional decline.
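A minimal sketch of deletion followed by local compensation, assuming the simplest uniform rule: each neuron rescales its surviving incoming synapses by 1/(1 - d) so that its expected input field is preserved. The paper's compensation mechanism is dynamic and more detailed; this only illustrates the principle.

```python
import numpy as np

def delete_and_compensate(W, deletion_frac, compensate=True, seed=0):
    """Randomly delete a fraction of each neuron's incoming synapses and, if
    `compensate`, uniformly strengthen the survivors by 1 / (1 - deletion_frac)
    so the neuron's expected incoming field is preserved."""
    rng = np.random.default_rng(seed)
    survivors = rng.random(W.shape) >= deletion_frac
    W_del = W * survivors
    return W_del / (1.0 - deletion_frac) if compensate else W_del

W = np.random.default_rng(0).random((50, 50)) / 50
W_comp = delete_and_compensate(W, deletion_frac=0.3)
print(round(float(W.sum(axis=1).mean()), 3), round(float(W_comp.sum(axis=1).mean()), 3))
# mean incoming fields before and after deletion-plus-compensation stay comparable
```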
Neural Computation (1995) 7 (5): 1105–1127.
Published: 01 September 1995
Abstract
Current understanding of the effects of damage on neural networks is rudimentary, even though such understanding could lead to important insights concerning neurological and psychiatric disorders. Motivated by this consideration, we present a simple analytical framework for estimating the functional damage resulting from focal structural lesions to a neural network model. The effects of focal lesions of varying area, shape, and number on the retrieval capacities of a spatially organized associative memory are quantified, leading to specific scaling laws that may be further examined experimentally. It is predicted that multiple focal lesions will impair performance more than a single lesion of the same size, that slit-like lesions are more damaging than rounder lesions, and that the same fraction of damage (relative to the total network size) will result in significantly less performance decrease in larger networks. Our study is clinically motivated by the observation that in multi-infarct dementia, the size of metabolically impaired tissue correlates with the level of cognitive impairment better than the size of structural damage does. Our results account for the detrimental effect of the number of infarcts, rather than their overall size, and for the “multiplicative” interaction between Alzheimer's disease and multi-infarct dementia.
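Not the paper's associative memory model: the toy below only illustrates the geometric intuition behind the predicted scaling laws. On a hypothetical grid with distance-decaying connectivity, lesions of roughly equal area but different shape or multiplicity remove different fractions of the surviving neurons' afferent input.

```python
import numpy as np

def grid_connectivity(side, sigma=3.0):
    """Weights that decay with grid distance, a stand-in for spatial organization."""
    ys, xs = np.mgrid[0:side, 0:side]
    coords = np.stack([ys.ravel(), xs.ravel()], axis=1).astype(float)
    d2 = ((coords[:, None, :] - coords[None, :, :]) ** 2).sum(-1)
    W = np.exp(-d2 / (2.0 * sigma**2))
    np.fill_diagonal(W, 0.0)
    return W, coords

def afferent_loss(W, lesioned):
    """Mean fraction of incoming synaptic weight lost by the surviving neurons."""
    intact = ~lesioned
    lost = W[np.ix_(intact, lesioned)].sum(axis=1)
    return float((lost / W[intact].sum(axis=1)).mean())

side = 40
W, coords = grid_connectivity(side)

def disc(center, radius):
    return ((coords - np.asarray(center)) ** 2).sum(axis=1) <= radius**2

one_round = disc((20, 20), 6)                                 # single round lesion
area = int(one_round.sum())
two_small = disc((12, 12), 6 / np.sqrt(2)) | disc((28, 28), 6 / np.sqrt(2))
slit = (np.abs(coords[:, 0] - 20) <= 1) & (np.abs(coords[:, 1] - 20) <= area / 6)

for name, mask in [("one round", one_round), ("two smaller", two_small), ("slit", slit)]:
    print(name, int(mask.sum()), "lesioned cells,", round(afferent_loss(W, mask), 3))
```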