William B. Levy
1-4 of 4 results
Journal Articles
Publisher: Journals Gateway
Neural Computation (1998) 10 (1): 25–57.
Published: 01 January 1998
Abstract
This article investigates the synaptic weight distribution of a self-supervised, sparse, and randomly connected recurrent network inspired by hippocampal region CA3. This network solves nontrivial sequence prediction problems by creating, on a neuron-by-neuron basis, special patterns of cell firing called local context units. These specialized patterns of cell firing—possibly an analog of hippocampal place cells—allow accurate prediction of the statistical distribution of synaptic weights, and this distribution is not at all gaussian. Aside from the majority of synapses that are, at least functionally, lost due to synaptic depression, the distribution is approximately uniform. Unexpectedly, this result is relatively independent of the input environment, and the uniform distribution of synaptic weights can be approximately parameterized based solely on the average activity level. Next, the results are generalized to other cell firing types (frequency codes and stochastic firing) and place-cell-like firing distributions. Finally, we note that our predictions concerning the synaptic strength distribution can be extended to the distribution of correlated cell firings. Recently published neurophysiological results are consistent with this extension.
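The uniformity claim invites a simple statistical check. The sketch below is only illustrative: it builds a placeholder weight vector with the predicted shape (a large functionally silent fraction near zero plus a uniform remainder), sets the silent synapses aside with an assumed cutoff, and applies a Kolmogorov-Smirnov test of uniformity to the rest. The weight values, the depressed fraction, and the cutoff are hypothetical placeholders; the paper itself derives the prediction from the local context unit firing patterns rather than from such a test.

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Placeholder weight vector standing in for a trained network's recurrent weights:
# a large near-zero (depressed) fraction plus a remainder constructed to be uniform,
# i.e. the shape the paper predicts. All numbers here are illustrative assumptions.
n_synapses = 5000
depressed = rng.random(n_synapses) < 0.7
weights = np.where(depressed,
                   rng.uniform(0.0, 0.01, n_synapses),   # functionally lost synapses
                   rng.uniform(0.0, 1.0, n_synapses))    # the surviving population

# Set aside the functionally silent synapses, then test the remainder for uniformity.
cutoff = 0.05                                            # assumed "functionally lost" threshold
surviving = weights[weights > cutoff]
rescaled = (surviving - cutoff) / (surviving.max() - cutoff)   # map onto [0, 1]
result = stats.kstest(rescaled, "uniform")

print(f"{surviving.size} of {n_synapses} synapses above cutoff; "
      f"KS statistic {result.statistic:.3f}, p = {result.pvalue:.2f}")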
Journal Articles
Publisher: Journals Gateway
Neural Computation (1996) 8 (3): 531–543.
Published: 01 April 1996
Abstract
In 1969 Barlow introduced the phrase “economy of impulses” to express the tendency for successive neural systems to use lower and lower levels of cell firings to produce equivalent encodings. From this viewpoint, the ultimate economy of impulses is a neural code of minimal redundancy. The hypothesis motivating our research is that energy expenditures, e.g., the metabolic cost of recovering from an action potential relative to the cost of inactivity, should also be factored into the economy of impulses. In fact, coding schemes with the largest representational capacity are not, in general, optimal when energy expenditures are taken into account. We show that for both binary and analog neurons, increased energy expenditure per neuron implies a decrease in average firing rate if energy-efficient information transmission is to be maintained.
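For the binary-neuron case, the trade-off can be sketched numerically. Assuming, as a simplification of the paper's analysis, that a spike costs r times as much energy as a silent interval and that the information per symbol is the binary entropy H(p) of the firing probability p, an energy-efficient code maximizes H(p) / (p r + (1 - p)), i.e. bits per unit energy. The cost model and this particular efficiency ratio are assumptions for illustration, not the paper's exact formulation, but they exhibit the reported effect: the optimal firing probability falls as the relative cost of spiking rises.

import numpy as np

def binary_entropy(p):
    """Shannon entropy (bits) of a Bernoulli(p) firing variable."""
    return -p * np.log2(p) - (1.0 - p) * np.log2(1.0 - p)

def optimal_firing_prob(r, num=5000):
    """Firing probability in (0, 1/2] that maximizes bits per unit energy,
    where a spike is assumed to cost r times as much as a silent symbol."""
    p = np.linspace(1e-4, 0.5, num)
    efficiency = binary_entropy(p) / (p * r + (1.0 - p))
    return p[np.argmax(efficiency)]

# As the relative cost of spiking grows, the energy-efficient code fires less.
for r in (1, 5, 20, 100):
    print(f"spike/rest cost ratio {r:>4}: optimal firing prob ~ {optimal_firing_prob(r):.3f}")

For a cost ratio of 1 the optimum is the maximum-entropy code (p = 1/2); as the ratio grows, the optimum slides toward the sparse-firing regime the abstract describes.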
Journal Articles
Publisher: Journals Gateway
Neural Computation (1996) 8 (1): 67–84.
Published: 01 January 1996
Abstract
Reconstructing a time-varying stimulus estimate from a spike train (Bialek's “decoding” of a spike train) has become an important way to study neural information processing. In this paper, we describe a simple method for reconstructing a time-varying current injection signal from the simulated spike train it produces. This technique extracts most of the information from the spike train, provided that the input signal is appropriately matched to the spike generator. To conceptualize this matching, we consider spikes as instantaneous “samples” of the somatic current. The Sampling Theorem is then applicable, and it suggests that the bandwidth of the injected signal not exceed half the spike generator's average firing rate. The average firing rate, in turn, depends on the amplitude range and DC bias of the injected signal. We hypothesize that nature faces similar problems and constraints when transmitting a time-varying waveform from the soma of one neuron to the dendrite of the postsynaptic cell.
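The sampling-theorem intuition can be conveyed with a toy decoder. In the sketch below, a band-limited current drives a leaky integrate-and-fire spike generator (an assumed stand-in for the paper's simulated neuron; all parameters are illustrative), the current over each interspike interval is recovered by inverting that model's interval-current relation, and the resulting "samples" are interpolated back into a waveform.

import numpy as np

rng = np.random.default_rng(0)
dt, T = 1e-4, 2.0                            # time step and duration (s)
t = np.arange(0.0, T, dt)

# Band-limited "injected current": a DC bias plus a few low-frequency sinusoids.
freqs = rng.uniform(0.5, 8.0, size=5)        # Hz; well below half the firing rate below
phases = rng.uniform(0.0, 2.0 * np.pi, size=5)
signal = 1.0 + 0.1 * np.sin(2.0 * np.pi * freqs[:, None] * t + phases[:, None]).sum(axis=0)
current = 2.5 * signal                       # gain keeps the cell above threshold

# Leaky integrate-and-fire spike generator (assumed stand-in for the paper's model).
tau, v_thresh, v_reset = 0.02, 1.0, 0.0
v, spike_times = 0.0, []
for i in range(len(t)):
    v += (dt / tau) * (-v + current[i])
    if v >= v_thresh:
        spike_times.append(t[i])
        v = v_reset
spike_times = np.array(spike_times)

# Decode: for constant input I the LIF obeys ISI = tau * ln(I / (I - v_thresh)), so
# invert it to estimate the current over each interval and treat each estimate as a
# "sample" located at the spike that closes the interval.
isi = np.diff(spike_times)
current_est = v_thresh / (1.0 - np.exp(-isi / tau))
reconstruction = np.interp(t, spike_times[1:], current_est / 2.5)

rate = len(spike_times) / T
nrmse = np.sqrt(np.mean((reconstruction - signal) ** 2)) / np.std(signal)
print(f"mean firing rate ~ {rate:.0f} Hz; normalized RMS reconstruction error ~ {nrmse:.2f}")

Raising the signal bandwidth toward half the mean firing rate, or lowering the DC bias so the cell fires more slowly, degrades the reconstruction in this sketch, which is the matching constraint the abstract describes.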
Journal Articles
Publisher: Journals Gateway
Neural Computation (1994) 6 (1): 85–99.
Published: 01 January 1994
Abstract
We investigate the dynamics of a class of recurrent random networks with sparse, asymmetric excitatory connectivity and global shunting inhibition mediated by a single interneuron. Using probabilistic arguments and a hyperbolic tangent approximation to the gaussian, we develop a simple method for setting the average level of firing activity in these networks. We demonstrate through simulations that our technique works well and extends to networks with more complicated inhibitory schemes. We are interested primarily in the CA3 region of the mammalian hippocampus, and the random networks investigated here are seen as modeling the a priori dynamics of activity in this region. In the presence of external stimuli, a suitable synaptic modification rule could shape these dynamics to perform temporal information processing tasks such as sequence completion and prediction.
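A hedged sketch of the kind of network the abstract describes: sparse, asymmetric 0/1 excitatory connectivity, a single global inhibitory term that grows with the number of active cells and divides (shunts) each cell's excitation, and a mean-field estimate of the resulting activity built from a binomial count of active inputs and a hyperbolic tangent approximation to the gaussian CDF. The update rule, parameter values, and the exact form of the prediction are illustrative assumptions rather than the paper's derivation; the point is simply that the inhibitory gain K_I sets the average fraction of active cells, and the probabilistic estimate should land in the same ballpark as the simulation.

import numpy as np

rng = np.random.default_rng(1)
N, p, K_I = 1000, 0.1, 0.125      # cells, connection probability, shunting-inhibition gain
steps = 200

def tanh_gaussian_cdf(x):
    """Hyperbolic-tangent approximation to the standard gaussian CDF."""
    return 0.5 * (1.0 + np.tanh(np.sqrt(2.0 / np.pi) * x))

def predicted_activity(K_I, N, p, iters=500, a=0.5):
    """Mean-field fixed point: a cell's excitation from m = N*a active inputs is roughly
    binomial (mean p*m, variance p*(1-p)*m), and the cell fires when that exceeds K_I*m."""
    for _ in range(iters):
        m = N * a
        a = tanh_gaussian_cdf((p - K_I) * m / np.sqrt(p * (1.0 - p) * m))
    return a

# Sparse, asymmetric 0/1 excitatory connectivity with no self-connections.
C = (rng.random((N, N)) < p).astype(float)
np.fill_diagonal(C, 0.0)

z = (rng.random(N) < 0.15).astype(float)        # random initial activity pattern
history = []
for _ in range(steps):
    excitation = C @ z                          # summed excitatory drive to each cell
    inhibition = K_I * z.sum()                  # single interneuron tracking total activity
    drive = excitation / (excitation + inhibition + 1e-12)   # divisive (shunting) inhibition
    z = (drive > 0.5).astype(float)             # fire iff excitation exceeds the inhibitory term
    history.append(z.mean())

simulated = np.mean(history[steps // 2:])
print(f"simulated average activity ~ {simulated:.3f}; "
      f"mean-field prediction ~ {predicted_activity(K_I, N, p):.3f}")

Inverting this prediction, i.e. choosing K_I so that the predicted fixed point hits a target activity, is the sense in which such a calculation "sets" the activity level.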