New concepts and architectures are still frequently tested on MNIST. However, we argue that the MNIST data set contains no temporal information and provides no spike events generated by a sensor. Compared with a static data set, a dynamic data set contains richer temporal features and is therefore better suited to exploiting the potential of SNNs. Event-based benchmark data sets include N-MNIST (Orchard, Jayawant, Cohen, & Thakor, 2015), CIFAR10-DVS (Hongmin Li, Liu, Ji, Li, & Shi, 2017), N-CARS (Sironi, Brambilla, Bourdis, Lagorce, & Benosman, 2018), DVS-Gesture (Amir et al., 2017), and SHD (Cramer, Stradmann, Schemmel, & Zenke, 2020). Table 3 summarizes recent SNN models, their architectures and learning types, and their accuracy on these neuromorphic data sets.
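To make the distinction concrete, an event-based sample such as an N-MNIST digit is a stream of (timestamp, x, y, polarity) tuples rather than a single image, and most models in Table 3 first bin such streams into short temporal frames before feeding them to the network. The following is a minimal NumPy sketch of that binning step; the structured-array field names (t, x, y, p), the 34 × 34 sensor size, and the choice of 10 time bins are illustrative assumptions, not the preprocessing used by any particular model in the table.

```python
import numpy as np

def events_to_frames(events, sensor_size=(34, 34), n_bins=10):
    """Bin a DVS event stream into a (n_bins, 2, H, W) array of spike counts.

    `events` is assumed to be a structured array with integer fields
    't' (timestamp), 'x', 'y', and 'p' (polarity in {0, 1}).
    """
    h, w = sensor_size
    frames = np.zeros((n_bins, 2, h, w), dtype=np.float32)
    t = events['t'].astype(np.float64)
    # Assign every event to one of n_bins equal-width time windows.
    rel = (t - t.min()) / (t.max() - t.min() + 1e-9)
    idx = np.clip((rel * n_bins).astype(int), 0, n_bins - 1)
    # Accumulate spike counts per (time bin, polarity, pixel).
    np.add.at(frames, (idx, events['p'], events['y'], events['x']), 1.0)
    return frames

# Synthetic usage example: 5,000 random events spread over 300 ms.
rng = np.random.default_rng(0)
ev = np.zeros(5000, dtype=[('t', 'i8'), ('x', 'i2'), ('y', 'i2'), ('p', 'i1')])
ev['t'] = np.sort(rng.integers(0, 300_000, 5000))
ev['x'] = rng.integers(0, 34, 5000)
ev['y'] = rng.integers(0, 34, 5000)
ev['p'] = rng.integers(0, 2, 5000)
print(events_to_frames(ev).shape)  # (10, 2, 34, 34)
```

Unlike a static MNIST image, the time-bin axis of the resulting tensor carries the temporal structure that the dynamic data sets are meant to expose.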

Table 3:

Summary of Recent SNN Learning Models and Their Accuracy on Event-Based Data Sets.

| Reference | Network Type | Learning Rule and Structure Configuration | Data Set | CA (%) |
| --- | --- | --- | --- | --- |
| Kugele et al. (2020) | SNN | ANN-to-SNN conversion | N-MNIST | 95.54 |
| | | | CIFAR-DVS | 66.61 |
| | | | DVS-Gesture | 96.97 |
| | | | N-CARS | 94.07 |
| Wu et al. (2018) | Spiking MLP | Spatiotemporal backpropagation (STBP); 34 × 34 × 2-800-10 | N-MNIST | 98.78 |
| Wu et al. (2019) | SNN | Spatiotemporal backpropagation (STBP); 128C3(Encoding)-128C3-AP2-384C3-384C3-AP2-1024FC-512FC-Voting | N-MNIST | 99.53 |
| | | | CIFAR-DVS | 60.5 |
| Zheng et al. (2020) | ResNet17 SNN | Threshold-dependent batch normalization based on spatiotemporal backpropagation (STBP-tdBN) | CIFAR-DVS | 67.80 |
| | | | DVS-Gesture | 96.87 |
| Lee et al. (2016) | SNN | Supervised backpropagation; (34 × 34 × 2)-800-10 | N-MNIST | 98.66 |
| Yao et al. (2021) | Spiking CNN | Temporal-wise attention SNN (TA-SNN); (1) Input-MP4-64C3-128C3-AP2-128C3-AP2-256FC-11 | DVS-Gesture | 98.61 |
| | | (2) Input-32C3-AP2-64C3-AP2-128C3-AP2-256C3-AP2-512C3-AP4-256FC-10 | CIFAR-DVS | 72 |
| | | (3) Input-128FC-128FC-20 | SHD | 91.08 |
| Neil and Liu (2016) | Spiking CNN | ANN-to-SNN conversion | N-MNIST | 95.72 |
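To illustrate how the structure strings in Table 3 translate into a network, the sketch below implements the 34 × 34 × 2-800-10 spiking MLP listed for Wu et al. (2018) as a PyTorch module with leaky integrate-and-fire units and a rectangular surrogate gradient, which is the core mechanism behind STBP-style training. The membrane time constant, threshold, hard reset, and firing-rate readout are our own illustrative choices rather than the exact settings of the original paper.

```python
import torch
import torch.nn as nn

class SurrogateSpike(torch.autograd.Function):
    """Heaviside spike in the forward pass, rectangular surrogate in the backward pass."""
    @staticmethod
    def forward(ctx, v, threshold):
        ctx.save_for_backward(v)
        ctx.threshold = threshold
        return (v >= threshold).float()

    @staticmethod
    def backward(ctx, grad_output):
        (v,) = ctx.saved_tensors
        # Pass gradients only inside a unit-width window around the firing threshold.
        window = (torch.abs(v - ctx.threshold) < 0.5).float()
        return grad_output * window, None

class SpikingMLP(nn.Module):
    """34 x 34 x 2 -> 800 -> 10 fully connected SNN, unrolled over discrete time steps."""
    def __init__(self, tau=0.8, threshold=1.0):
        super().__init__()
        self.fc1 = nn.Linear(34 * 34 * 2, 800)
        self.fc2 = nn.Linear(800, 10)
        self.tau = tau
        self.threshold = threshold

    def forward(self, x):
        # x: (batch, time_steps, 34 * 34 * 2) binary or count-valued event frames.
        batch, steps, _ = x.shape
        v1 = x.new_zeros(batch, 800)
        v2 = x.new_zeros(batch, 10)
        rate = x.new_zeros(batch, 10)
        for t in range(steps):
            # Leaky integration, spiking, and hard reset for each layer.
            v1 = self.tau * v1 + self.fc1(x[:, t])
            s1 = SurrogateSpike.apply(v1, self.threshold)
            v1 = v1 * (1.0 - s1)
            v2 = self.tau * v2 + self.fc2(s1)
            s2 = SurrogateSpike.apply(v2, self.threshold)
            v2 = v2 * (1.0 - s2)
            rate = rate + s2
        return rate / steps  # firing-rate output used as class scores
```

Training would unroll this time loop and apply backpropagation through time with a cross-entropy loss on the rate output; the full STBP derivation in Wu et al. (2018) makes the spatial and temporal gradient terms explicit, whereas this sketch delegates them to PyTorch's autograd.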
