Search results for "Erol Gelenbe": 1-5 of 5

Journal Articles

Publisher: Journals Gateway

*Neural Computation* (2008) 20 (9): 2308–2324.

Published: 01 September 2008

Figures: 4

Abstract

Large-scale distributed systems, such as natural neuronal and artificial systems, have many local interconnections, but they often also have the ability to propagate information very fast over relatively large distances. Mechanisms that enable such behavior include very long physical signaling paths and possibly saccades of synchronous behavior that may propagate across a network. This letter studies the modeling of such behaviors in neuronal networks and develops a related learning algorithm. This is done in the context of the random neural network (RNN), a probabilistic model with a well-developed mathematical theory, which was inspired by the apparently stochastic spiking behavior of certain natural neuronal systems. Thus, we develop an extension of the RNN to the case when synchronous interactions can occur, leading to synchronous firing by large ensembles of cells. We also present an O(N³) gradient descent learning algorithm for an N-cell recurrent network having both conventional excitatory-inhibitory interactions and synchronous interactions. Finally, the model and its learning algorithm are applied to a resource allocation problem that is NP-hard and requires fast approximate decisions.


*Neural Computation* (1999) 11 (4): 953–963.

Published: 15 May 1999

Abstract

By extending the pulsed recurrent random neural network (RNN) discussed in Gelenbe (1989, 1990, 1991), we propose a recurrent random neural network model in which each neuron processes several distinctly characterized streams of “signals” or data. The idea that neurons may be able to distinguish between the pulses they receive and use them in a distinct manner is biologically plausible. In engineering applications, the need to process different streams of information simultaneously is commonplace (e.g., in image processing, sensor fusion, or parallel processing systems). In the model we propose, each distinct stream is a class of signals in the form of spikes. Signals may arrive at a neuron from either the outside world (exogenous signals) or other neurons (endogenous signals). As a function of the signals it has received, a neuron can fire and then send signals of some class to another neuron or to the outside world. We show that the multiple signal class random model with exponential interfiring times, Poisson external signal arrivals, and Markovian signal movements between neurons has product form; this implies that the distribution of its state (i.e., the probability that each neuron of the network is excited) can be computed simply from the solution of a system of 2Cn simultaneous nonlinear equations, where C is the number of signal classes and n is the number of neurons. Here we derive the stationary solution for the multiple class model and establish necessary and sufficient conditions for the existence of the stationary solution. The recurrent random neural network model with multiple classes has already been successfully applied to image texture generation (Atalay & Gelenbe, 1992), where multiple signal classes are used to model different colors in the image.


*Neural Computation* (1993) 5 (1): 154–164.

Published: 01 January 1993

Abstract

The capacity to learn from examples is one of the most desirable features of neural network models. We present a learning algorithm for the recurrent random network model (Gelenbe 1989, 1990) using gradient descent of a quadratic error function. The analytical properties of the model lead to a "backpropagation" type algorithm that requires the solution of a system of n linear and n nonlinear equations each time the n-neuron network "learns" a new input-output pair.
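The training loop described above can be sketched in a few lines of Python. This is an illustrative stand-in, not the paper's algorithm: it assumes the common weight parameterization w⁺(i,j) = rᵢp⁺(i,j), w⁻(i,j) = rᵢp⁻(i,j), solves the n nonlinear steady-state equations by fixed-point iteration, and replaces the paper's analytical solve of n linear equations for the derivatives with forward-difference numerical gradients.

```python
import numpy as np

def steady_state(W_plus, W_minus, Lambda, lam, iters=300):
    """Excitation probabilities q of the recurrent random network, assuming
    w+(i,j) = r_i p+(i,j) and w-(i,j) = r_i p-(i,j), so r_i is the combined
    row sum of both weight matrices."""
    r = W_plus.sum(axis=1) + W_minus.sum(axis=1)       # firing rates
    q = np.zeros(len(r))
    for _ in range(iters):                             # the n nonlinear equations
        q = np.minimum((Lambda + q @ W_plus) / (r + lam + q @ W_minus), 1.0)
    return q

def loss(W_plus, W_minus, Lambda, lam, target):
    """Quadratic error E = 0.5 * ||q - target||^2 for one input-output pair."""
    q = steady_state(W_plus, W_minus, Lambda, lam)
    return 0.5 * float(np.sum((q - target) ** 2))

def train_step(W_plus, W_minus, Lambda, lam, target, lr=0.05, eps=1e-5):
    """One in-place gradient-descent step. NOTE: forward-difference numerical
    gradients are used here for illustration; the paper instead obtains the
    derivatives analytically from a system of n linear equations."""
    base = loss(W_plus, W_minus, Lambda, lam, target)
    grads = []
    for W in (W_plus, W_minus):
        G = np.zeros_like(W)
        for idx in np.ndindex(*W.shape):
            W[idx] += eps
            G[idx] = (loss(W_plus, W_minus, Lambda, lam, target) - base) / eps
            W[idx] -= eps
        grads.append(G)
    for W, G in zip((W_plus, W_minus), grads):
        np.maximum(W - lr * G, 0.0, out=W)             # rates must stay nonnegative
    return base
```

Repeated calls to `train_step` on a small network drive the steady-state excitations toward the target; weights are clamped at zero because they encode nonnegative rates.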


*Neural Computation* (1990) 2 (2): 239–247.

Published: 01 June 1990

Abstract

In a recent paper (Gelenbe 1989) we introduced a new neural network model, called the Random Network, in which “negative” or “positive” signals circulate, modeling inhibitory and excitatory signals. These signals can arrive either from other neurons or from the outside world: they are summed at the input of each neuron and constitute its signal potential. The state of each neuron in this model is its signal potential, while the network state is the vector of signal potentials at each neuron. If its potential is positive, a neuron fires, and sends out signals to the other neurons of the network or to the outside world. As it does so its signal potential is depleted. We have shown (Gelenbe 1989) that in the Markovian case, this model has product form, that is, the steady-state probability distribution of its potential vector is the product of the marginal probabilities of the potential at each neuron. The signal flow equations of the network, which describe the rate at which positive or negative signals arrive at each neuron, are nonlinear, so that their existence and uniqueness are not easily established except for the case of feedforward (or backpropagation) networks (Gelenbe 1989). In this paper we show that whenever the solution to these signal flow equations exists, it is unique. We then examine two subclasses of networks — balanced and damped networks — and obtain stability conditions in each case. In practical terms, these stability conditions guarantee that the unique solution can be found to the signal flow equations and therefore that the network has a well-defined steady-state behavior.
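When the solution to the signal flow equations exists, it can be computed by direct fixed-point iteration. A minimal sketch, assuming the standard notation (Λᵢ, λᵢ exogenous positive/negative arrival rates, rᵢ firing rates, p±(j,i) routing probabilities; none of these symbols appear in the abstract itself), which also reports the stability diagnostic qᵢ < 1:

```python
import numpy as np

def signal_flow_solution(Lambda, lam, r, P_plus, P_minus, iters=500, tol=1e-10):
    """Iterate the signal flow equations to a fixed point:
        lambda_plus[i]  = Lambda[i] + sum_j q[j] * r[j] * P_plus[j, i]
        lambda_minus[i] = lam[i]    + sum_j q[j] * r[j] * P_minus[j, i]
        q[i] = lambda_plus[i] / (r[i] + lambda_minus[i])
    Returns q and a stability flag: all q[i] < 1 means the network has a
    well-defined steady state."""
    q = np.zeros(len(r))
    for _ in range(iters):
        flow = q * r                              # rate of spikes leaving each neuron
        q_new = (Lambda + flow @ P_plus) / (r + lam + flow @ P_minus)
        q_new = np.minimum(q_new, 1.0)            # q is a probability
        if np.max(np.abs(q_new - q)) < tol:
            q = q_new
            break
        q = q_new
    return q, bool(np.all(q < 1.0))
```

For example, an isolated neuron with Λ = 0.5 and r = 1 settles at q = 0.5 (stable), while raising Λ to 2 saturates it and the stability flag comes back false.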


*Neural Computation* (1989) 1 (4): 502–510.

Published: 01 December 1989

Abstract

We introduce a new class of random “neural” networks in which signals are either negative or positive. A positive signal arriving at a neuron increases its total signal count or potential by one; a negative signal reduces it by one if the potential is positive, and has no effect if it is zero. When its potential is positive, a neuron “fires,” sending positive or negative signals at random intervals to neurons or to the outside. Positive signals represent excitatory signals and negative signals represent inhibition. We show that this model, with exponential signal emission intervals, Poisson external signal arrivals, and Markovian signal movements between neurons, has a product form leading to simple analytical expressions for the system state.
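The product form can be made concrete in a few lines: once each qᵢ (the probability that neuron i is excited) is known, the stationary probability of any potential vector factorizes into per-neuron geometric terms. A sketch, with illustrative rates for a two-neuron feedforward example (the specific numbers are ours, not from the paper):

```python
import numpy as np

def product_form_probability(k, q):
    """Stationary probability of the potential vector k under the product
    form: P(k) = prod_i (1 - q_i) * q_i**k_i, where q_i is the probability
    that neuron i is excited (has positive potential)."""
    k = np.asarray(k)
    q = np.asarray(q, dtype=float)
    return float(np.prod((1.0 - q) * q ** k))

# Two-neuron feedforward example: neuron 1 receives exogenous positive
# signals at rate 1 and fires at rate 2, forwarding each spike to neuron 2
# as a positive signal with probability 0.5; neuron 2 fires at rate 2
# toward the outside world.
q1 = 1.0 / 2.0                       # q1 = Lambda1 / r1 = 0.5
q2 = (q1 * 2.0 * 0.5) / 2.0          # q2 = q1 * r1 * p+(1,2) / r2 = 0.25
print(product_form_probability([0, 1], [q1, q2]))  # → 0.09375
```

The printed value is (1 − 0.5) · (1 − 0.25) · 0.25 = 0.09375, the chance of finding neuron 1 at potential 0 and neuron 2 at potential 1.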