A network architecture with a globally feedforward, locally recurrent construction was recently presented as a new means of modeling nonlinear dynamic time series (Back and Tsoi 1991a). The training rule, based on minimizing the least mean square (LMS) error, performed well, although the memory required for large networks can become significant when many feedback connections are used. In this note, a modified training algorithm based on a technique for linear filters is presented that simplifies the gradient calculations significantly. The memory requirements are reduced from O[na(na + nb)Ns] to O[(2na + nb)Ns], where na is the number of feedback delays, nb is the number of feedforward delays, and Ns is the total number of synapses. The new algorithm also reduces the number of multiply-adds needed to train each synapse by na at each time step. Simulations indicate that the algorithm performs almost identically to the previous one.
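As a rough illustration of the stated complexity reduction, the two memory bounds can be compared directly for hypothetical parameter values (the figures below are examples only, not taken from the paper):

```python
def memory_original(na, nb, Ns):
    # O[na * (na + nb) * Ns] state variables in the original LMS-based rule
    return na * (na + nb) * Ns

def memory_modified(na, nb, Ns):
    # O[(2*na + nb) * Ns] state variables in the modified algorithm
    return (2 * na + nb) * Ns

# Hypothetical network: 10 feedback delays, 10 feedforward delays, 1000 synapses
print(memory_original(10, 10, 1000))   # 200000
print(memory_modified(10, 10, 1000))   # 30000
```

For these values the modified algorithm stores less than a sixth of the state, and the gap widens as the number of feedback delays na grows, since the original bound is quadratic in na while the new one is linear.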
