1-4 of 4
Zhigang Zeng
Journal Articles
Publisher: Journals Gateway
Neural Computation (2022) 34 (1): 104–137.
Published: 01 January 2022
Bridging the Functional and Wiring Properties of V1 Neurons Through Sparse Coding
Abstract
The functional properties of neurons in the primary visual cortex (V1) are thought to be closely related to the structural properties of this network, but the specific relationships remain unclear. Previous theoretical studies have suggested that sparse coding, an energy-efficient coding method, might underlie the orientation selectivity of V1 neurons. We thus aimed to delineate how the neurons are wired to produce this feature. We constructed a model and endowed it with a simple Hebbian learning rule to encode images of natural scenes. The excitatory neurons fired sparsely in response to images and developed strong orientation selectivity. After learning, the connectivity between excitatory neuron pairs, inhibitory neuron pairs, and excitatory-inhibitory neuron pairs depended on firing pattern and receptive field similarity between the neurons. The receptive fields (RFs) of excitatory neurons and inhibitory neurons were well predicted by the RFs of presynaptic excitatory neurons and inhibitory neurons, respectively. The excitatory neurons formed a small-world network, in which certain local connection patterns were significantly overrepresented. Bidirectionally manipulating the firing rates of inhibitory neurons caused linear transformations of the firing rates of excitatory neurons, and vice versa. These wiring properties and modulatory effects were congruent with a wide variety of data measured in V1, suggesting that the sparse coding principle might underlie both the functional and wiring properties of V1 neurons.
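The sparse coding principle described in this abstract can be illustrated with a standard L1-penalized encoder followed by a Hebbian-style dictionary update. This is a minimal sketch, not the paper's excitatory-inhibitory network model; all function names, the ISTA solver, and the parameter values below are assumptions made for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def soft_threshold(x, lam):
    # Proximal operator of the L1 penalty: shrinks coefficients toward zero.
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def sparse_code(D, x, lam=0.1, steps=100, lr=0.1):
    # ISTA: iterative shrinkage-thresholding to minimize
    # 0.5 * ||x - D a||^2 + lam * ||a||_1 over the coefficients a.
    a = np.zeros(D.shape[1])
    for _ in range(steps):
        grad = D.T @ (D @ a - x)
        a = soft_threshold(a - lr * grad, lr * lam)
    return a

def hebbian_update(D, x, a, eta=0.01):
    # Hebbian-style dictionary update: move each basis vector toward the
    # reconstruction residual in proportion to its activity, then renormalize.
    residual = x - D @ a
    D = D + eta * np.outer(residual, a)
    return D / np.linalg.norm(D, axis=0, keepdims=True)

# Toy run: 16-dimensional "image patches", 32-atom overcomplete dictionary.
D = rng.normal(size=(16, 32))
D /= np.linalg.norm(D, axis=0, keepdims=True)
x = rng.normal(size=16)
a = sparse_code(D, x)    # sparse response: many coefficients exactly zero
D = hebbian_update(D, x, a)
```

In the paper, sparsity and Hebbian learning together give rise to orientation-selective receptive fields; here the same two ingredients appear in their simplest textbook form.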
Journal Articles
Multistability of Delayed Recurrent Neural Networks with Mexican Hat Activation Functions
Publisher: Journals Gateway
Neural Computation (2017) 29 (2): 423–457.
Published: 01 February 2017
Abstract
This letter studies the multistability of delayed recurrent neural networks with Mexican hat activation functions. Some sufficient conditions are obtained to ensure that an n-dimensional recurrent neural network can have multiple equilibrium points, a subset of which are locally exponentially stable. Furthermore, the attraction basins of these stable equilibrium points are estimated. We show that the attraction basins of these stable equilibrium points can be larger than their originally partitioned subsets. The results of this letter improve and extend the existing stability results in the literature. Finally, a numerical example containing different cases is given to illustrate the theoretical results.
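For readers unfamiliar with the activation, a piecewise-linear "Mexican hat"-type function can be sketched as below. This is one common form; the letter's exact parameterization is not reproduced here, and the breakpoints are assumptions. Its non-monotonic shape (rising, then falling, then saturating) is what makes multiple coexisting equilibria possible.

```python
import numpy as np

def mexican_hat(x):
    # An assumed piecewise-linear "Mexican hat"-type activation:
    # saturated at -1 for x < -1, identity on [-1, 1],
    # decreasing (2 - x) on (1, 3], saturated at -1 for x > 3.
    x = np.asarray(x, dtype=float)
    return np.where(x < -1, -1.0,
           np.where(x <= 1, x,
           np.where(x <= 3, 2.0 - x, -1.0)))
```

Because the function crosses the identity line more than once, a recurrent network built on it can admit several stable fixed points, which is the multistability phenomenon the letter analyzes under delays.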
Journal Articles
Publisher: Journals Gateway
Neural Computation (2007) 19 (8): 2149–2182.
Published: 01 August 2007
Analysis and Design of Associative Memories Based on Recurrent Neural Networks with Linear Saturation Activation Functions and Time-Varying Delays
Abstract
In this letter, some sufficient conditions are obtained to guarantee that recurrent neural networks with linear saturation activation functions and time-varying delays have multiequilibria located in the saturation region and on the boundaries of the saturation region. These results on pattern characterization are used to analyze and design autoassociative memories that are directly based on the parameters of the neural networks. Moreover, a formula for the number of spurious equilibria is also derived. Four design procedures for recurrent neural networks with linear saturation activation functions and time-varying delays are developed based on stability results. Two of these procedures allow the neural network to be capable of learning and forgetting. Finally, simulation results demonstrate the validity and characteristics of the proposed approach.
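The associative-memory idea can be sketched with the standard linear saturation activation and a simple recurrent network whose stable states sit in the saturation regions. This is an illustrative sketch under assumed parameters, not the letter's design procedures: the weight matrix, input, and Euler step below are all choices made for the example.

```python
import numpy as np

def sat(x):
    # Linear saturation activation: identity on [-1, 1], clipped to +/-1 outside
    # (equivalently, f(x) = (|x + 1| - |x - 1|) / 2).
    return np.clip(x, -1.0, 1.0)

def run_network(W, b, x0, steps=50, dt=0.1):
    # Euler simulation of dx/dt = -x + W f(x) + b, a standard
    # recurrent-network form; parameters are illustrative only.
    x = np.array(x0, dtype=float)
    for _ in range(steps):
        x = x + dt * (-x + W @ sat(x) + b)
    return x

# Two-neuron example: strong self-excitation drives each state into a
# saturation region, so the recalled pattern is read out as sign(x).
W = np.array([[2.0, 0.0], [0.0, 2.0]])
b = np.zeros(2)
x = run_network(W, b, [0.3, -0.2])
print(np.sign(x))  # prints the recalled sign pattern
```

Each saturation region that traps the dynamics acts as one stored memory; the letter's design procedures choose the network parameters so that the desired patterns, and as few spurious equilibria as possible, end up in those regions.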
Journal Articles
Publisher: Journals Gateway
Neural Computation (2006) 18 (4): 848–870.
Published: 01 April 2006
Multiperiodicity and Exponential Attractivity Evoked by Periodic External Inputs in Delayed Cellular Neural Networks
Abstract
We show that an n-neuron cellular neural network with time-varying delay can have 2^n periodic orbits located in saturation regions, and these periodic orbits are locally exponentially attractive. In addition, we give some conditions for ascertaining periodic orbits to be locally or globally exponentially attractive and allow them to be located in any designated region. As a special case of exponential periodicity, exponential stability of delayed cellular neural networks is also characterized. These conditions improve and extend the existing results in the literature. To illustrate and compare the results, simulation results are discussed in three numerical examples.
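The multiperiodicity phenomenon can be sketched with a one-neuron delayed cellular neural network driven by a periodic input: different initial histories settle onto different periodic orbits, one per saturation region (2^1 = 2 for n = 1). The model form, delay handling, and every parameter below are illustrative assumptions, not the paper's conditions.

```python
import numpy as np

def sat(x):
    # Standard cellular-neural-network activation: clip to [-1, 1].
    return np.clip(x, -1.0, 1.0)

def simulate(x0, a=3.0, tau=1.0, dt=0.01, T=30.0, amp=0.5, omega=2 * np.pi):
    # Euler simulation of the delayed one-neuron dynamics
    #   dx/dt = -x(t) + a * f(x(t - tau)) + amp * sin(omega * t),
    # with a constant initial history x(s) = x0 for s in [-tau, 0].
    n_delay = int(tau / dt)
    hist = [x0] * (n_delay + 1)   # stored past states for the delay term
    x, t = x0, 0.0
    traj = []
    for _ in range(int(T / dt)):
        x_del = hist[-(n_delay + 1)]          # x(t - tau)
        x = x + dt * (-x + a * sat(x_del) + amp * np.sin(omega * t))
        hist.append(x)
        t += dt
        traj.append(x)
    return np.array(traj)

# Two initial histories in opposite saturation regions converge to two
# distinct periodic orbits, oscillating around roughly +3 and -3.
hi = simulate(2.0)
lo = simulate(-2.0)
```

After the transient decays, each trajectory is a small forced oscillation confined to its own saturation region, which is the picture behind the 2^n coexisting, locally exponentially attractive periodic orbits.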