Yoram Baram
Journal Articles
Publisher: Journals Gateway
Neural Computation (2018) 30 (11): 3037–3071.
Published: 01 November 2018
Abstract
Experimental constraints have traditionally implied separate studies of different cortical functions, such as memory and sensory-motor control. Yet certain cortical modalities, while repeatedly observed and reported, have not been clearly identified with one cortical function or another. Specifically, while neuronal membrane and synapse polarities with respect to a certain potential value have been attracting considerable interest in recent years, the purposes of such polarities have largely remained a subject for speculation and debate. Formally identifying these polarities as on-off neuronal polarity gates, we analytically show that cortical circuit structure, behavior, and memory are all governed by the combined potent effect of these gates, which we collectively term circuit polarity. Employing widely accepted and biologically validated firing rate and plasticity paradigms, we show that circuit polarity is mathematically embedded in the corresponding models. Moreover, we show that the firing rate dynamics implied by these models are driven by ongoing circuit polarity gating dynamics. Furthermore, circuit polarity is shown to segregate cortical circuits into internally synchronous, externally asynchronous subcircuits, defining their firing rate modes in accordance with different cortical tasks. In contrast to the Hebbian paradigm, which is shown to be susceptible to mutual neuronal interference in the face of asynchrony, circuit polarity is shown to block such interference. Noting convergence of synaptic weights, we show that circuit polarity holds the key to cortical memory, having a segregated capacity linear in the number of neurons. While memory concealment is implied by complete neuronal silencing, memory is restored by reactivating the original circuit polarity. Finally, we show that incomplete deterioration or restoration of circuit polarity results in memory modification, which may be associated with partial or false recall, or novel innovation.
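The idea of on-off polarity gates modulating firing-rate dynamics can be sketched in a toy model. This is a minimal illustration under assumed dynamics, not the paper's model: the gate form (a hard threshold), the time constant, and all parameters below are hypothetical.

```python
import numpy as np

def gate(v, theta=0.0):
    # Hypothetical on-off polarity gate: 1 above the threshold potential, 0 below.
    return (v > theta).astype(float)

def step(v, W, I, dt=0.1, tau=1.0):
    # One Euler step of a gated firing-rate model: dv/dt = (-v + W @ gate(v) + I) / tau.
    # The gate switches each neuron's contribution on or off, so the effective
    # circuit wiring changes with the ongoing gating pattern.
    return v + dt * (-v + W @ gate(v) + I) / tau

rng = np.random.default_rng(0)
W = rng.normal(0.0, 0.5, (4, 4))   # toy synaptic weights
v = rng.normal(0.0, 1.0, 4)        # initial membrane potentials
for _ in range(200):
    v = step(v, W, np.ones(4))
```

Because the gate output is binary, the drive `W @ gate(v)` at any instant comes only from the currently "on" subcircuit, which is the flavor of the gating-driven dynamics described above.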
Neural Computation (2012) 24 (3): 676–699.
Published: 01 March 2012
Abstract
Widely accepted neural firing and synaptic potentiation rules specify a cross-dependence of the two processes, which, evolving on different timescales, have been separated for analytic purposes, concealing essential dynamics. Here, the morphology of the firing rates process, modulated by synaptic potentiation, is shown to be described by a discrete iteration map in the form of a thresholded polynomial. Given initial synaptic weights, a firing activity is triggered by conductance. Elementary dynamic modes are defined by fixed points, cycles, and saddles of the map, building blocks of the underlying firing code. Showing parameter-dependent multiplicity of real polynomial roots, the map is proved to be noninvertible. The incidence of chaos is then implied by the parameter-dependent existence of snap-back repellers. The highly patterned geometric and statistical structures of the associated chaotic attractors suggest that these attractors are an integral part of the neural code. They further suggest the chaotic attractor as a natural mechanism for statistical encoding and temporal multiplexing of neural information. The analytic findings are supported by simulation.
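A thresholded polynomial iteration map is easy to experiment with. The toy map below is illustrative only, not the paper's map: it iterates a quadratic (logistic) polynomial in a chaotic parameter regime and clamps the result to a firing-rate bound, both choices being assumptions for the sketch.

```python
def thresholded_poly_map(x, a=3.9, theta=1.0):
    # Toy thresholded polynomial map (hypothetical): a quadratic polynomial
    # iterate, clamped to [0, theta] as a firing-rate bound.
    y = a * x * (1.0 - x)
    return min(max(y, 0.0), theta)

x = 0.2
orbit = []
for _ in range(1000):
    x = thresholded_poly_map(x)
    orbit.append(x)
```

At `a = 3.9` the underlying quadratic is in a well-known chaotic regime, so the orbit wanders over a wide range instead of settling on a fixed point or short cycle, loosely illustrating how an iterated polynomial map can produce the aperiodic, patterned activity discussed in the abstract.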
Neural Computation (2005) 17 (6): 1264–1275.
Published: 01 June 2005
Abstract
Kernels are key components of pattern recognition mechanisms. We propose a universal kernel optimality criterion, which is independent of the classifier to be used. Defining data polarization as a process by which points of different classes are driven to geometrically opposite locations in a confined domain, we propose selecting the kernel parameter values that polarize the data in the associated feature space. Conversely, the kernel is said to be polarized by the data. Kernel polarization gives rise to an unconstrained optimization problem. We show that complete kernel polarization yields consistent classification by kernel-sum classifiers. Tested on real-life data, polarized kernels demonstrate a clear advantage over the Euclidean distance in proximity classifiers. Embedded in a support vector classifier, kernel polarization is found to yield about the same performance as exhaustive parameter search.
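One common form of the kernel polarization criterion scores a kernel by the label-weighted sum of kernel values, sum over i, j of y_i y_j K(x_i, x_j), and picks the kernel parameter maximizing it. The sketch below assumes that form with an RBF kernel and a simple grid search over the width parameter; the data, kernel choice, and parameter grid are all illustrative.

```python
import numpy as np

def polarization(X, y, gamma):
    # Polarization score sum_ij y_i y_j K(x_i, x_j) for an RBF kernel
    # K(x, x') = exp(-gamma * ||x - x'||^2), labels y in {-1, +1}.
    sq = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    K = np.exp(-gamma * sq)
    return y @ K @ y

rng = np.random.default_rng(1)
# Two toy classes clustered around -1 and +1 in the plane.
X = np.vstack([rng.normal(-1.0, 0.3, (20, 2)), rng.normal(1.0, 0.3, (20, 2))])
y = np.array([-1.0] * 20 + [1.0] * 20)

gammas = [0.01, 0.1, 1.0, 10.0]
best = max(gammas, key=lambda g: polarization(X, y, g))
```

The score is classifier-independent, matching the abstract's point: it rewards kernels that make same-class pairs similar and different-class pairs dissimilar, without training any classifier.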
Neural Computation (2001) 13 (11): 2549–2572.
Published: 01 November 2001
Abstract
We propose a new Markov chain Monte Carlo algorithm, which is a generalization of the stochastic dynamics method. The algorithm explores the state space using its intrinsic geometric structure, which facilitates efficient sampling of complex distributions. Applied to Bayesian learning in neural networks, our algorithm was found to produce results comparable to the best state-of-the-art method while consuming considerably less time.
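For orientation, the stochastic dynamics family being generalized here uses gradient information of the log density to propose moves. A minimal sketch of that baseline idea, an unadjusted Langevin sampler on a standard Gaussian, is below; it is not the paper's algorithm, and the step size and target are illustrative.

```python
import numpy as np

def langevin_step(x, grad_logp, eps, rng):
    # One unadjusted Langevin step:
    #   x <- x + (eps / 2) * grad log p(x) + sqrt(eps) * noise.
    return x + 0.5 * eps * grad_logp(x) + np.sqrt(eps) * rng.normal(size=x.shape)

def std_gauss_grad(z):
    # For a standard Gaussian target, grad log p(z) = -z.
    return -z

rng = np.random.default_rng(2)
x = np.zeros(2)
samples = []
for i in range(20000):
    x = langevin_step(x, std_gauss_grad, 0.05, rng)
    if i >= 2000:           # discard burn-in
        samples.append(x.copy())
samples = np.asarray(samples)
```

The geometric generalization described in the abstract would, roughly, replace the isotropic noise and gradient step with ones adapted to the local geometry of the distribution.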
Neural Computation (2001) 13 (11): 2533–2548.
Published: 01 November 2001
Abstract
Real classification problems involve structured data that can be essentially grouped into a relatively small number of clusters. It is shown that, under a local clustering condition, a set of points of a given class, embedded in binary space by a set of randomly parameterized surfaces, is linearly separable from other classes, with arbitrarily high probability. We call such a data set a local relative cluster. The size of the embedding set is shown to be inversely proportional to the squared local clustering degree. A simple parameterization by embedding hyperplanes, implementing a voting system, results in a random reduction of the nearest-neighbor method and leads to the separation of multicluster data by a network with two internal layers. This represents a considerable reduction of the learning problem with respect to known techniques, resolving a long-standing question on the complexity of random embedding. Numerical tests show that the proposed method performs as well as state-of-the-art methods in a small fraction of the time.
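The core construction, embedding points into binary space via random hyperplanes and then classifying in the embedded space, can be sketched as follows. This is a toy illustration under assumed data and a simple nearest-centroid stand-in for the voting classifier; it is not the paper's exact procedure.

```python
import numpy as np

def random_binary_embedding(X, n_planes, rng):
    # Embed each point as a binary code: one bit per random hyperplane,
    # bit = 1 if the point lies on the positive side of sign(w . x + b).
    W = rng.normal(size=(n_planes, X.shape[1]))
    b = rng.normal(size=n_planes)
    return (X @ W.T + b > 0).astype(float)

rng = np.random.default_rng(3)
# Two well-clustered toy classes, per the local clustering condition.
X = np.vstack([rng.normal(-2.0, 0.5, (30, 2)), rng.normal(2.0, 0.5, (30, 2))])
y = np.array([0] * 30 + [1] * 30)

Z = random_binary_embedding(X, 64, rng)

# Nearest-centroid vote in the binary embedding space (a linear rule in Z).
c0, c1 = Z[y == 0].mean(axis=0), Z[y == 1].mean(axis=0)
pred = (np.linalg.norm(Z - c1, axis=1) < np.linalg.norm(Z - c0, axis=1)).astype(int)
acc = (pred == y).mean()
```

Points in the same cluster rarely fall on opposite sides of a random hyperplane, so their binary codes nearly agree, while codes of points from distant clusters differ on many bits, which is what makes the embedded classes easy to separate linearly.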