TAG (Training by Adaptive Gain) is a new adaptive learning algorithm developed for the optical implementation of large-scale artificial neural networks. For a fully interconnected single-layer neural network with N input and M output neurons, TAG contains two different types of interconnections: M × N global fixed interconnections and N + M adaptive gain controls. For two-dimensional input patterns, the former may be realized by multifacet holograms and the latter by spatial light modulators (SLMs). For the same numbers of input and output neurons, TAG requires far fewer adaptive elements than the perceptron, making large-scale optical implementation feasible at some sacrifice in performance. The training algorithm is based on gradient descent and error backpropagation, and extends readily to multilayer architectures. Computer simulations show that TAG achieves reasonable performance compared with the perceptron. An electrooptical implementation of TAG is also proposed.
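The abstract's description suggests a forward model in which only the N + M gains are trained while the M × N interconnection matrix stays fixed. Below is a minimal sketch of that idea, assuming the form y = f(b · (W(a · x))) with a sigmoid nonlinearity f, a fixed random matrix W, and squared-error gradient descent on the gain vectors a and b; the toy task and all variable names are illustrative assumptions, not the paper's exact formulation.

```python
# Sketch of gain-only training in the spirit of TAG (assumed forward model):
#   y = sigmoid(b * (W @ (a * x)))
# W (M x N) is fixed; only the gains a (N) and b (M) are updated.
import numpy as np

rng = np.random.default_rng(0)

N, M = 16, 4                      # input / output neurons
W = rng.standard_normal((M, N))   # fixed global interconnections (not trained)
a = np.ones(N)                    # adaptive input gains  (N parameters)
b = np.ones(M)                    # adaptive output gains (M parameters)

def sigmoid(s):
    return 1.0 / (1.0 + np.exp(-s))

def forward(x):
    u = a * x                     # input gain stage (e.g., an input SLM)
    z = W @ u                     # fixed interconnection stage (e.g., hologram)
    return sigmoid(b * z), z      # output gain stage plus nonlinearity

# Toy training set: random binary patterns with random binary targets.
X = rng.integers(0, 2, size=(10, N)).astype(float)
T = rng.integers(0, 2, size=(10, M)).astype(float)

lr = 0.05
for epoch in range(2000):
    for x, t in zip(X, T):
        y, z = forward(x)
        # Squared-error gradient through the sigmoid: delta = dL/ds, s = b*z.
        delta = (y - t) * y * (1.0 - y)
        grad_b = delta * z                # dL/db
        grad_a = (W.T @ (delta * b)) * x  # dL/da, backpropagated through W
        b -= lr * grad_b
        a -= lr * grad_a

errors = sum(np.any((forward(x)[0] > 0.5) != (t > 0.5)) for x, t in zip(X, T))
print(f"patterns misclassified after training: {errors}/{len(X)}")
```

Note how the update touches only N + M scalars per step, which is the abstract's point: the heavy M × N interconnection stage can be frozen into a hologram, while the two small gain vectors map naturally onto SLM pixels.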
