Subhrajit Roy
Journal Articles
Neural Computation (2018) 30 (3): 723–760.
Published: 01 March 2018
Abstract
We present a neuromorphic current-mode implementation of a spiking neural classifier with a lumped square-law dendritic nonlinearity. It has been shown previously in software simulations that such a system with binary synapses can be trained with structural plasticity algorithms to achieve comparable classification accuracy with fewer synaptic resources than conventional algorithms. We show that even in real analog systems with manufacturing imperfections (coefficients of variation of 23.5% and 14.4% for dendritic branch gains and leaks, respectively), this network produces comparable results with fewer synaptic resources. The chip, fabricated in a complementary metal oxide semiconductor (CMOS) process, has eight dendrites per cell and uses two opposing cells per class to cancel common-mode inputs. The chip operates at low supply voltage and dissipates 19 nW of static power per neuronal cell and 125 pJ per spike. For two-class classification of high-dimensional, rate-encoded binary patterns, the hardware achieves performance comparable to a software implementation of the same network, with only about a 0.5% reduction in accuracy. On two UCI data sets, the integrated circuit achieves classification accuracy comparable to standard machine learners such as support vector machines and extreme learning machines while using two to five times fewer binary synapses. We also show that the system can operate on mean-rate-encoded spike patterns as well as short bursts of spikes. To the best of our knowledge, this is the first attempt in hardware to perform classification by exploiting dendritic properties and binary synapses.
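The following is a minimal Python sketch, not the authors' code, of the cell model the abstract describes: each of eight dendritic branches lumps its binary synaptic inputs and passes the sum through a square-law nonlinearity, and two opposing cells per class are differenced to cancel common-mode input. The dimensions, connection densities, and function names are illustrative assumptions.

```python
import numpy as np

def dendritic_response(x, conn):
    """Sum-of-squared-branch-activations response of one cell.

    x    : (d,) rate-encoded binary input pattern
    conn : (n_branches, d) binary synapse matrix (1 = connected)
    """
    branch_sums = conn @ x           # lumped input on each dendrite
    return np.sum(branch_sums ** 2)  # square-law branch nonlinearity

def classify(x, conn_pos, conn_neg):
    """Two opposing cells per class: the differential output cancels
    common-mode input, and its sign gives the predicted class."""
    return np.sign(dendritic_response(x, conn_pos)
                   - dendritic_response(x, conn_neg))

# Toy usage (illustrative sizes): 8 dendrites, 100-dimensional patterns.
rng = np.random.default_rng(0)
conn_pos = (rng.random((8, 100)) < 0.1).astype(float)
conn_neg = (rng.random((8, 100)) < 0.1).astype(float)
x = (rng.random(100) < 0.5).astype(float)
print(classify(x, conn_pos, conn_neg))  # -1.0, 0.0, or +1.0
```

Because the synapses are binary, structural plasticity training amounts to choosing which entries of the connection matrices are 1, rather than tuning analog weights.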
Journal Articles
Neural Computation (2016) 28 (11): 2557–2584.
Published: 01 November 2016
Abstract
In this letter, we propose a novel neuro-inspired, low-resolution, online, unsupervised learning rule to train the reservoir, or liquid, of liquid state machines. The liquid is a large, sparsely interconnected recurrent network of spiking neurons. The proposed learning rule is inspired by structural plasticity and trains the liquid by forming and eliminating synaptic connections. Hence, learning involves rewiring the reservoir connections, similar to the structural plasticity observed in biological neural networks. The network connections can be stored as a connection matrix and updated in memory using address event representation (AER) protocols, which are commonly employed in neuromorphic systems. On investigating the pairwise separation property, we find that trained liquids provide 1.36 ± 0.18 times more interclass separation while retaining intraclass separation similar to that of random liquids. Moreover, analysis of the linear separation property reveals that trained liquids are 2.05 ± 0.27 times better than random liquids. Furthermore, we show that our liquids retain the generalization ability and generality of random liquids. A memory analysis shows that trained liquids have 83.67 ± 5.79 ms longer fading memory than random liquids, which exhibit a fading memory of 92.8 ± 5.03 ms for a particular type of spike train input. We also shed light on the dynamics of the evolution of recurrent connections within the liquid. Moreover, compared to "separation-driven synaptic modification," a recently proposed algorithm for iteratively refining reservoirs, our learning rule provides 9.30%, 15.21%, and 12.52% more liquid separation and 2.8%, 9.1%, and 7.9% better classification accuracy for 4-, 8-, and 12-class pattern recognition tasks, respectively.
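Below is a minimal sketch, under the assumptions stated in its comments, of the kind of structural-plasticity rewiring the abstract describes: the reservoir is kept as a binary connection matrix in memory, and learning forms and eliminates individual connections. The greedy accept-if-separation-improves loop and the projection-based stand-in for the liquid's response are illustrative assumptions, not the authors' algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

def separation(states, labels):
    """Interclass separation: mean distance between class centroids of
    liquid-state vectors (a crude proxy for the paper's metric)."""
    centroids = [states[labels == c].mean(axis=0) for c in np.unique(labels)]
    return np.mean([np.linalg.norm(a - b)
                    for i, a in enumerate(centroids)
                    for b in centroids[i + 1:]])

def rewire_step(conn, evaluate):
    """Eliminate one existing connection and form one elsewhere;
    keep the rewired matrix only if the separation score improves."""
    pre_on = np.argwhere(conn == 1)
    pre_off = np.argwhere(conn == 0)
    kill = tuple(pre_on[rng.integers(len(pre_on))])
    grow = tuple(pre_off[rng.integers(len(pre_off))])
    trial = conn.copy()
    trial[kill], trial[grow] = 0, 1
    return trial if evaluate(trial) > evaluate(conn) else conn

# Toy usage: a 50-neuron reservoir. As a stand-in for simulating the
# liquid, score a connection matrix by the separation of inputs
# projected through it (a real score would run the spiking dynamics).
conn = (rng.random((50, 50)) < 0.1).astype(int)
inputs = rng.random((40, 50))
labels = rng.integers(0, 4, size=40)
score = lambda c: separation(inputs @ c.T, labels)
for _ in range(100):
    conn = rewire_step(conn, score)
```

Each rewiring event touches a single (pre, post) entry of the matrix, which is why AER-style addressed memory updates, as mentioned in the abstract, are a natural fit for this learning rule.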