Huajin Tang
1–6 of 6
Journal Articles
Publisher: Journals Gateway
Neural Computation (2022) 34 (5): 1170–1188.
Published: 15 April 2022
Abstract
Recent work on spiking neural networks (SNNs) has focused on achieving deep architectures, commonly using backpropagation (BP) to train SNNs directly, which allows SNNs to go deeper and achieve higher performance. However, the BP training procedure is computationally intensive and complicated by the many trainable parameters. Inspired by global pooling in convolutional neural networks (CNNs), we present the spike probabilistic global pooling (SPGP) method, based on a probability function, for training deep convolutional SNNs. It aims to remove the difficulty of the excessive trainable parameters introduced by multiple layers during training, which reduces the risk of overfitting and yields better performance for deep SNNs (DSNNs). We use the discrete leaky integrate-and-fire model and the spatiotemporal BP algorithm to train DSNNs directly. As a result, our model trained with the SPGP method achieves performance competitive with existing DSNNs on image and neuromorphic data sets while minimizing the number of trainable parameters. In addition, the proposed SPGP method shows its effectiveness in improving performance, convergence, and generalization ability.
Includes: Supplementary data
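For readers unfamiliar with the building blocks named above, here is a minimal sketch, assuming a rectangular surrogate gradient for the discrete leaky integrate-and-fire update and interpreting global pooling of binary spikes as a per-channel firing probability. The abstract does not specify SPGP's actual probability function, so the pooling function below is an illustrative stand-in, not the paper's method.

```python
import torch

class SurrogateSpike(torch.autograd.Function):
    @staticmethod
    def forward(ctx, v_minus_thresh):
        ctx.save_for_backward(v_minus_thresh)
        return (v_minus_thresh > 0).float()

    @staticmethod
    def backward(ctx, grad_output):
        (v,) = ctx.saved_tensors
        # Rectangular surrogate: pass gradients only near the threshold.
        return grad_output * (v.abs() < 0.5).float()

def lif_step(x, v, tau=2.0, v_thresh=1.0):
    """One discrete leaky integrate-and-fire (LIF) update step."""
    v = v + (x - v) / tau                       # leaky integration of input current
    spike = SurrogateSpike.apply(v - v_thresh)  # binary spike with surrogate gradient
    v = v * (1.0 - spike)                       # hard reset after a spike
    return spike, v

def global_spike_pooling(spikes):
    """Illustrative global pooling: mean firing over time and space,
    read as a per-channel spike probability in [0, 1]."""
    # spikes: (time, batch, channels, height, width)
    return spikes.mean(dim=(0, 3, 4))
```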
Journal Articles
Publisher: Journals Gateway
Neural Computation (2021) 33 (9): 2439–2472.
Published: 19 August 2021
Abstract
Learning new concepts rapidly from a few examples is an open issue in spike-based machine learning. Such few-shot learning poses substantial challenges for current learning methodologies of spiking neural networks (SNNs) due to the lack of task-related prior knowledge. The recent learning-to-learn (L2L) approach allows SNNs to acquire prior knowledge through example-level learning and task-level optimization. However, existing L2L-based frameworks do not target the neural dynamics (i.e., neuronal and synaptic parameter changes) on different timescales. This diversity of temporal dynamics is an important attribute of spike-based learning, as it enables networks to rapidly acquire knowledge from very few examples and gradually integrate this knowledge. In this work, we consider the neural dynamics on various timescales and provide a multi-timescale optimization (MTSO) framework for SNNs. This framework introduces an adaptive-gated LSTM to accommodate two different timescales of neural dynamics: short-term learning and long-term evolution. Short-term learning is a fast knowledge acquisition process achieved by a novel surrogate gradient online learning (SGOL) algorithm, in which the LSTM guides the gradient updates of the SNN on a short timescale through an adaptive learning rate and weight-decay gating. Long-term evolution aims to slowly integrate the acquired knowledge and form a prior, which is achieved by optimizing the LSTM guidance process to tune the SNN parameters on a long timescale. Experimental results demonstrate that the collaborative optimization of multi-timescale neural dynamics enables SNNs to achieve promising performance on few-shot learning tasks.
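A minimal sketch of the "LSTM guides the SNN update" idea described above: a small LSTM emits an adaptive learning-rate gate and a weight-decay gate that scale each inner-loop gradient step. The gate parameterization, input features, and constants are assumptions for illustration, not the paper's exact MTSO/SGOL formulation.

```python
import torch
import torch.nn as nn

class UpdateGater(nn.Module):
    """LSTM that maps simple training statistics to per-step update gates."""
    def __init__(self, hidden=32):
        super().__init__()
        self.lstm = nn.LSTMCell(input_size=2, hidden_size=hidden)
        self.head = nn.Linear(hidden, 2)  # outputs: learning-rate gate, decay gate

    def forward(self, grad_norm, loss, state=None):
        feat = torch.stack([grad_norm, loss]).unsqueeze(0)  # shape (1, 2)
        h, c = self.lstm(feat, state)
        lr_gate, decay_gate = torch.sigmoid(self.head(h)).squeeze(0)
        return lr_gate, decay_gate, (h, c)

def inner_step(params, grads, loss, gater, state, base_lr=0.1, base_decay=1e-4):
    """Short-term learning: one gated gradient step on the SNN parameters."""
    grad_norm = torch.sqrt(sum((g ** 2).sum() for g in grads))
    lr_gate, decay_gate, state = gater(grad_norm.detach(), loss.detach(), state)
    new_params = [p - base_lr * lr_gate * g - base_decay * decay_gate * p
                  for p, g in zip(params, grads)]
    return new_params, state
```

Long-term evolution would then correspond to optimizing the gater's own parameters across many such inner loops, so that the guidance itself encodes the prior.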
Journal Articles
Publisher: Journals Gateway
Neural Computation (2020) 32 (1): 182–204.
Published: 01 January 2020
Abstract
A spiking neural network (SNN) is a type of biologically plausible model that performs information processing based on spikes. Training a deep SNN effectively is challenging because spike signals are nondifferentiable. Recent advances have shown that high-performance SNNs can be obtained by converting convolutional neural networks (CNNs). However, large-scale SNNs are poorly served by conventional architectures due to the dynamic nature of spiking neurons. In this letter, we propose a hardware architecture that enables efficient implementation of SNNs. All layers in the network are mapped onto one chip so that the computation of different time steps can be done in parallel to reduce latency. We propose a new spiking max-pooling method to reduce computational complexity. In addition, we apply approaches based on shift registers and coarse-grained parallelism to accelerate the convolution operation. We also investigate the effect of different encoding methods on SNN accuracy. Finally, we validate the hardware architecture on the Xilinx Zynq ZCU102. The experimental results on the MNIST data set show that it can achieve an accuracy of 98.94% with eight-bit quantized weights. Furthermore, it achieves 164 frames per second (FPS) at a 150 MHz clock frequency, obtaining a 41× speed-up over a CPU implementation and 22 times lower power consumption than a GPU implementation.
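Because spike trains are binary at each time step, max pooling over a window can be reduced to a logical OR, which is cheap in hardware. The abstract does not spell out the paper's pooling circuit, so the following software sketch of that OR interpretation is an assumption, not the paper's design.

```python
import numpy as np

def spiking_max_pool_2x2(spikes):
    """spikes: (time, channels, height, width) binary array.
    Height and width are assumed to be even; each 2x2 window is reduced
    to a single output spike by OR-ing its four inputs."""
    s = spikes.astype(bool)
    pooled = (s[:, :, 0::2, 0::2] | s[:, :, 0::2, 1::2] |
              s[:, :, 1::2, 0::2] | s[:, :, 1::2, 1::2])
    return pooled.astype(np.uint8)
```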
Journal Articles
Publisher: Journals Gateway
Neural Computation (2019) 31 (12): 2368–2389.
Published: 01 December 2019
Abstract
Though they succeed in solving various learning tasks, most existing reinforcement learning (RL) models have failed to take into account the complexity of synaptic plasticity in the neural system, and models implementing reinforcement learning with spiking neurons involve only a single plasticity mechanism. Here, we propose a neurally realistic reinforcement learning model that coordinates the plasticities of two types of synapses: stochastic and deterministic. The plasticity of the stochastic synapse is achieved by the hedonistic rule, which modulates the release probability of synaptic neurotransmitter, while the plasticity of the deterministic synapse is achieved by a variant of the reward-modulated spike-timing-dependent plasticity rule, which modulates the synaptic strengths. We evaluate the proposed learning model on two benchmark tasks: learning a logic gate function and the 19-state random walk problem. Experimental results show that coordinating diverse synaptic plasticities allows the RL model to learn rapidly and stably.
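A rough sketch of how two reward-modulated plasticity rules of this kind might look side by side; the constants, trace dynamics, and clipping bounds are illustrative assumptions, not the paper's exact rules.

```python
import numpy as np

def hedonistic_update(p_release, released, reward, lr=0.01):
    """Stochastic synapse: nudge the release probability up when a release
    is followed by reward and down otherwise (and vice versa for failures)."""
    eligibility = released - p_release        # +(1-p) on release, -p on failure
    return np.clip(p_release + lr * reward * eligibility, 0.05, 0.95)

def rstdp_update(w, e, x_pre, x_post, pre_spike, post_spike, reward,
                 lr=0.005, tau=20.0):
    """Deterministic synapse: accumulate a pair-based STDP eligibility trace
    and apply it to the weight when a reward signal arrives."""
    x_pre = x_pre * np.exp(-1.0 / tau) + pre_spike      # presynaptic trace
    x_post = x_post * np.exp(-1.0 / tau) + post_spike   # postsynaptic trace
    e = e * np.exp(-1.0 / tau) + post_spike * x_pre - pre_spike * x_post
    w = w + lr * reward * e
    return w, e, x_pre, x_post
```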
Journal Articles
Publisher: Journals Gateway
Neural Computation (2013) 25 (2): 450–472.
Published: 01 February 2013
Abstract
During the past few decades, remarkable progress has been made in solving pattern recognition problems using networks of spiking neurons. However, pattern recognition as a full computational process, from sensory encoding to synaptic learning, remains underexplored, as most existing models or algorithms target only part of this process. Furthermore, many learning algorithms proposed in the literature neglect or pay little attention to sensory information encoding, which makes them incompatible with neurally realistic sensory signals encoded from real-world stimuli. By treating sensory coding and learning as a systematic process, we attempt to build an integrated model based on spiking neural networks (SNNs) that performs sensory neural encoding and supervised learning with precisely timed sequences of spikes. With emerging evidence of precise spike-timing neural activity, the view that information is represented by the explicit firing times of action potentials rather than by mean firing rates has been receiving increasing attention. The external sensory stimulation is first converted into spatiotemporal spike patterns using a latency-phase encoding method and subsequently transmitted to the consecutive network for learning. Spiking neurons are trained to reproduce target signals encoded with precisely timed spikes. We show that when supervised spike-timing-based learning is used, different spatiotemporal patterns are recognized by different spike patterns with millisecond temporal precision.
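A minimal sketch of latency-phase-style encoding, assuming one spike per oscillation cycle whose latency decreases with stimulus intensity; the parameter choices are illustrative, not the paper's exact scheme.

```python
import numpy as np

def latency_phase_encode(intensities, t_max=50.0):
    """Map intensities in [0, 1] to one spike time per cycle: stronger inputs
    fire earlier, so the spike phase within the cycle carries the value."""
    intensities = np.clip(np.asarray(intensities, dtype=float), 0.0, 1.0)
    latency = t_max * (1.0 - intensities)  # 0 ms for the strongest stimulus
    return latency                         # add multiples of the cycle period to tile across cycles
```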
Journal Articles
Publisher: Journals Gateway
Neural Computation (2010) 22 (7): 1899–1926.
Published: 01 July 2010
Abstract
Memory is a fundamental part of computational systems like the human brain. Theoretical models identify memories with attractors of neural network activity patterns, based on the theory that attractor (recurrent) neural networks are able to capture crucial characteristics of memory such as encoding, storage, retrieval, and long-term and working memory. In such networks, long-term storage of the memory patterns is enabled by synaptic strengths that are adjusted according to activity-dependent plasticity mechanisms (of which the most widely recognized is the Hebbian rule) such that the attractors of the network dynamics represent the stored memories. Most previous studies on associative memory have focused on Hopfield-like binary networks, and the learned patterns are often assumed to be uncorrelated so that interactions between memories are minimized. In this letter, we restrict our attention to a more biologically plausible attractor network model and study the neuronal representations of correlated patterns. We examine the role of saliency weights in memory dynamics. Our results demonstrate that the retrieval process of the memorized patterns is characterized by the saliency distribution, which shapes the landscape of the attractors. We establish the conditions under which the network state converges to a unique memory or to multiple memories. The analytical results also hold for variable coding levels and nonbinary levels, indicating a general property emerging from correlated memories. Our results confirm the advantage of computing with graded-response neurons over binary neurons (i.e., a reduction of spurious states). We also find that a nonuniform saliency distribution can contribute to the disappearance of spurious states when they exist.
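A generic graded-response attractor sketch in which each stored pattern enters the Hebbian weight matrix scaled by a per-pattern saliency, biasing retrieval toward the more salient memories; the tanh dynamics and normalization here are assumptions, not the letter's exact model.

```python
import numpy as np

def build_weights(patterns, saliency):
    """patterns: (P, N) array of +/-1 patterns; saliency: (P,) pattern weights."""
    P, N = patterns.shape
    W = np.einsum('p,pi,pj->ij', saliency, patterns, patterns) / N
    np.fill_diagonal(W, 0.0)   # no self-connections
    return W

def retrieve(W, cue, steps=50, gain=4.0):
    """Graded-response dynamics: iterate x <- tanh(gain * W x) from a noisy cue."""
    x = np.asarray(cue, dtype=float)
    for _ in range(steps):
        x = np.tanh(gain * (W @ x))
    return x
```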