Table 6 presents the performance of recent FPGA-based implementations of SNNs in terms of network configuration, system performance, and target device.

Table 6:

Summary of FPGA-Based Implementations of SNNs.

| Reference | Network Type / Neuron and Encoding Model / Topology | Dataset | Recognition CA (%) | FPGA Platform | Software Tool/Language | System Performance |
|---|---|---|---|---|---|---|
| Corradi et al. (2021) | SNN based on conversion method | Optical radar | 99.7 | Trenz TE0820-03-4DE21FA | Vivado | Achieves an energy efficiency of 0.151 nJ/SO |
| Panchapakesan et al. (2021) | SNN based on conversion method (VGG-13) | CIFAR-10 / SVHN | 90.79 / 96 | Xilinx ZCU102 | Vivado HLS | Achieves 13,086 frames per second at 200 MHz |
| Aung et al. (2021) | Spiking CNN (AlexNet) | CIFAR-10 / SVHN | 81.8 / 93.1 | Xilinx Virtex UltraScale+ VCU118 | N/A | Achieves 28.3 kFPS running at 425 MHz |
| Nallathambi and Chandrachoodan (2020) | Spiking CNN based on conversion method | CIFAR-10 | 76.87 | Intel Cyclone V | N/A | Reduces the number of off-chip memory accesses by close to 90% |
| Hong et al. (2020) | Time-delay neural network (TDNN) | CIFAR-10 | 83.43 | Kintex-7 325T | Verilog | Consumes 4.92 W at 160 MHz clock frequency |
| Fang et al. (2020) | SNN based on conversion rule / LIF neuron / population coding / 784×600×10 | MNIST | 97.7 | Cyclone V | N/A | Obtains 10× speedup and 196× improvement in energy efficiency compared with a GPU |
| Han (2020) | Feedforward SNN based on a hybrid of time-stepped and event-driven updating algorithms / LIF neuron / Poisson encoding / 784-1200-1200-10 | MNIST | 97.06 | Xilinx ZC706 | N/A | Achieves 161 FPS at 200 MHz clock frequency with very low power consumption of 0.477 W |
Table 6:

Continued.

| Reference | Network Type / Neuron and Encoding Model / Topology | Dataset | Recognition CA (%) | FPGA Platform | Software Tool/Language | System Performance |
|---|---|---|---|---|---|---|
| Ju et al. (2020) | Deep SNN based on conversion rule / IF neuron / fixed uniform encoding / 28×28-64c5-2s-64c5-2s-128f-10o | MNIST | 98.94 | Xilinx Zynq ZCU102 | N/A | Achieves 164 FPS at 150 MHz clock frequency, with 41× speedup and 4.6 W power consumption |
| Liu et al. (2019) | Liquid state machine (LSM) based on spike-timing-dependent plasticity (STDP) | TI46 speech corpus | 95 | Xilinx Zynq ZC706 | N/A | Consumes 237 mW at 100 MHz clock frequency |
| Losh and Llamocca (2019) | Spiking hybrid network (SHiNe) based on backpropagation learning / integrate, rectification, and fire neuron / fixed-frequency duty-cycle encoding / 196-64-10 | MNIST | 97.70 | Xilinx Zynq XC7Z010-1CLG400C | Vivado 2016.3 | Processes a frame in 65.536 µs at 125 MHz clock rate with 161 mW total power consumption |
| Guo (2019) | DNN-to-SNN conversion rule / IF neuron / Poisson encoding / 28×28-12c5-2s-64c5-2s-10 (CNN), 784-1200-1200-10 (FCN) | MNIST | 98.98 (CNN) / 98.84 (FCN) | Xilinx V7 690T | Vivado 2016.4 | Consumes 0.745 W at 100 MHz clock frequency, using 32-bit fixed-point precision |
| Kuang et al. (2019) | Three-layer SNN based on STDP learning rules / LIF neuron and conductance-based synapse | MNIST | 93 | Stratix III | Verilog HDL | N/A |
| Abderrahmane and Miramond (2019) | SNN based on backpropagation learning / IF neuron / rate coding / 784-300-300-10 | MNIST | 97.70 | Intel Cyclone V | Quartus Prime Lite 18.10, VHDL | Achieves 256 FPS at 50 MHz clock frequency with a time-multiplexed architecture and 70 kFPS with a fully parallel architecture |
Table 6:

Continued.

| Reference | Network Type / Neuron and Encoding Model / Topology | Dataset | Recognition CA (%) | FPGA Platform | Software Tool/Language | System Performance |
|---|---|---|---|---|---|---|
| Zhang et al. (2019) | SNN based on the backpropagation algorithm / LIF neuron / 256-256-10 | MNIST | 96.26 | Terasic DE2-115 | Quartus II, Verilog | Obtains 10.7× speedup and 293 mW power at 100 MHz |
| Liu et al. (2018) | Liquid state machine (LSM) based on spike-timing-dependent plasticity (STDP) | TI46 speech corpus / CityScape | 93.1 / 97.9 | Xilinx Virtex-6 | N/A | Up to 29% more energy efficient for training and 30% more energy efficient for classification than the baseline |
| Yousefzadeh (2017) | Two-layer hybrid neural network / LIF neuron / Poisson encoding | E-MNIST | 97.09 | Xilinx Spartan-6 | HDL | Achieves 58 kFPS at 220 MHz clock frequency and consumes 363 mW, i.e., less than 7 µJ per frame |
| Mostafa et al. (2017) | Feedforward SNN trained using backpropagation / LIF neuron / temporal encoding / 748-600-10 | MNIST | 96.98 | Xilinx Spartan-6 LX150 | N/A | N/A |
| Chung et al. (2015) | Time-delay neural network | MNIST | 97.64 | Xilinx Artix-7 | Vivado | Processes an input image in 156.8 µs at 160 MHz |
| Neil and Liu (2014) | Spiking deep belief network / LIF neuron / 784-500-500-10 | MNIST | 92 | Xilinx Spartan-6 LX150 | RTL | Achieves 152 ms processing time per image and 1.5 W power consumption at 75 MHz clock frequency |
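The power, throughput, and clock figures reported across Table 6 are related by simple derived metrics (energy per frame = power / throughput; cycles per frame = latency × clock). A minimal sketch of these cross-checks, using two of the tabulated entries (illustrative arithmetic only, not calculations taken from the cited papers):

```python
def energy_per_frame_j(power_w: float, fps: float) -> float:
    """Average energy per processed frame: E = P / throughput."""
    return power_w / fps


def cycles_per_frame(latency_s: float, clock_hz: float) -> float:
    """Clock cycles spent on one frame: N = t_frame * f_clk."""
    return latency_s * clock_hz


# Yousefzadeh (2017): 363 mW at 58 kFPS -> ~6.26 uJ per frame,
# consistent with the "less than 7 uJ per frame" figure in the table.
e_yousefzadeh = energy_per_frame_j(0.363, 58_000)

# Losh and Llamocca (2019): 65.536 us per frame at a 125 MHz clock
# corresponds to exactly 8192 clock cycles per frame.
n_losh = cycles_per_frame(65.536e-6, 125e6)

print(f"{e_yousefzadeh * 1e6:.2f} uJ/frame")  # ~6.26
print(f"{n_losh:.0f} cycles/frame")           # 8192
```

Comparing designs on energy per frame rather than raw power or FPS alone makes the trade-off between the low-power serial designs and the high-throughput parallel designs in the table directly visible.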
