Luke Theogarajan
Neural Computation (2018) 30 (3): 761–791.
Published: 01 March 2018
Abstract
In this letter, we implement and compare two neural coding algorithms in networks of spiking neurons: winner-takes-all (WTA) and winners-share-all (WSA). WSA exploits the code space provided by the temporal code by training a different combination of k out of n neurons to fire together in response to each pattern, while WTA uses a one-hot code in which a single neuron responds to each distinct pattern. For WSA, the value of k that maximizes the information capacity of n output neurons was theoretically determined and used. As a small proof-of-concept classification problem, both algorithms were applied to a spiking neural network to classify 14 letters of the English alphabet from 15 × 15 pixel images. In both schemes, a modified spike-timing-dependent plasticity (STDP) learning rule was used to train the spiking neurons in an unsupervised fashion. We compare the classification accuracy and the number of output neurons required by the two algorithms and show that by tolerating a small drop in accuracy (84% for WSA versus 91% for WTA), the number of output neurons can be reduced by more than a factor of two. This reduction grows as the number of patterns increases, and it proportionally reduces the number of training parameters, which lowers memory requirements, speeds up computation, and, in a neuromorphic silicon implementation, occupies far less area.
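
As a rough illustration of the neuron-count argument in the abstract (a minimal sketch, not code from the paper): treating WSA as a k-of-n combinatorial code, n output neurons can represent C(n, k) distinct patterns, which is maximized at k = n // 2, whereas one-hot WTA needs one output neuron per pattern. The helper below (wsa_neurons_needed is a hypothetical name introduced here) finds the smallest n for a given pattern count:

    # Sketch comparing output-neuron counts for WTA vs. WSA coding.
    # Assumes WSA assigns each pattern a unique k-of-n firing
    # combination, so n neurons encode C(n, k) patterns, maximized at
    # k = n // 2; one-hot WTA needs one neuron per pattern.
    from math import comb

    def wsa_neurons_needed(num_patterns):
        """Smallest n (with capacity-maximizing k = n // 2) such that
        C(n, n // 2) >= num_patterns."""
        n = 1
        while comb(n, n // 2) < num_patterns:
            n += 1
        return n, n // 2

    patterns = 14                # letter classes in the paper's demo
    wta_n = patterns             # one-hot: one output neuron per class
    wsa_n, k = wsa_neurons_needed(patterns)
    print(f"WTA: {wta_n} neurons; WSA: {wsa_n} neurons (k = {k}), "
          f"reduction {wta_n / wsa_n:.2f}x")
    # -> WTA: 14 neurons; WSA: 6 neurons (k = 3), reduction 2.33x

Under these assumptions, the 14-letter task needs 6 WSA neurons (k = 3, since C(6, 3) = 20 >= 14) against 14 WTA neurons, consistent with the reported reduction of more than a factor of two; because C(n, n // 2) grows combinatorially in n, the gap widens as the number of patterns increases.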