In this letter, we have implemented and compared two neural coding algorithms in networks of spiking neurons: winner-takes-all (WTA) and winners-share-all (WSA). WSA exploits the code space provided by the temporal code by training a different combination of k out of N output neurons to fire together in response to each pattern, whereas WTA uses a one-hot code in which a single neuron responds to each distinct pattern. Using WSA, the value of k that maximizes the information capacity achievable with N output neurons was theoretically determined and utilized. As a small proof-of-concept classification problem, spiking neural networks using both algorithms were trained to classify 14 letters of the English alphabet from images of 15 × 15 pixels. For both schemes, a modified spike-timing-dependent plasticity (STDP) learning rule was used to train the spiking neurons in an unsupervised fashion. The performance and the number of neurons required to perform this computation are compared between the two algorithms. We show that by tolerating a small drop in classification accuracy (84% for WSA versus 91% for WTA), we are able to reduce the number of output neurons by more than a factor of two. We also show that this reduction in the number of neurons grows as the number of patterns increases. The reduction in the number of output neurons proportionally reduces the number of training parameters, which requires less memory and hence speeds up the computation and, in the case of neuromorphic implementation on silicon, would take up much less area.
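To make the capacity argument concrete, here is a minimal sketch (not from the letter itself) of the combinatorial bookkeeping it relies on: a WTA code with one-hot outputs encodes at most N patterns with N neurons, while a WSA code with k of N neurons co-firing encodes C(N, k) patterns, which for fixed N is maximized at k = N/2. The helper names and the search loop are illustrative assumptions; only the binomial counting follows from the abstract.

```python
from math import comb

def wsa_capacity(n: int, k: int) -> int:
    """Distinct patterns encodable when exactly k of n output
    neurons fire together (winners-share-all)."""
    return comb(n, k)

def min_wsa_neurons(num_patterns: int) -> tuple[int, int]:
    """Smallest n with C(n, n//2) >= num_patterns; k = n // 2 is
    used because it maximizes the binomial coefficient for fixed n."""
    n = 1
    while comb(n, n // 2) < num_patterns:
        n += 1
    return n, n // 2

if __name__ == "__main__":
    patterns = 14  # the 14 English letters classified in the letter
    n, k = min_wsa_neurons(patterns)
    print(f"WTA (one-hot): {patterns} output neurons for {patterns} patterns")
    print(f"WSA: {n} output neurons with k = {k} co-firing "
          f"({wsa_capacity(n, k)} codewords available)")
```

For 14 patterns this sketch yields 6 WSA neurons (C(6, 3) = 20 codewords) against 14 WTA neurons, consistent with the reported reduction of more than a factor of two; because C(N, N/2) grows exponentially in N while WTA capacity grows only linearly, the savings increase with the number of patterns.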
