Ah Chung Tsoi
Neural Computation (1998) 10 (1): 165–188.
Published: 01 January 1998
Abstract
The problem of high sensitivity in modeling is well known: small perturbations in the model parameters may result in large, undesired changes in the model behavior. A number of authors have considered the issue of sensitivity in feedforward neural networks from a probabilistic perspective, but less attention has been given to such issues in recurrent neural networks. In this article, we present a new recurrent neural network architecture that offers significantly improved parameter sensitivity properties compared to existing recurrent neural networks. The new architecture generalizes previous ones by employing alternative discrete-time operators in place of the shift operator normally used. An analysis of the model demonstrates the parameter sensitivity problem in recurrent neural networks and supports the proposed architecture. The new architecture performs significantly better than previous recurrent neural networks, as shown by a series of simple numerical experiments.
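The core idea here is replacing the usual shift operator q (which maps x[k] to x[k+1]) with an alternative discrete-time operator. The sketch below is a minimal illustration, assuming the alternative operator is the delta operator δ = (q − 1)/Δ and using an illustrative tanh state update rather than the paper's exact architecture: in the delta form the weights parameterize the state increment, so small weight perturbations produce correspondingly small changes in the trajectory when the sampling interval Δ is small.

```python
import numpy as np

def shift_rnn_step(x, u, W, U, b):
    """Conventional shift-operator recurrence: x[k+1] = tanh(W x[k] + U u[k] + b)."""
    return np.tanh(W @ x + U @ u + b)

def delta_rnn_step(x, u, W, U, b, delta=0.1):
    """Delta-operator recurrence (illustrative, not the paper's exact model):
    the weights parameterize the state increment,
    x[k+1] = x[k] + delta * tanh(W x[k] + U u[k] + b),
    so the trajectory varies smoothly with the parameters for small delta."""
    return x + delta * np.tanh(W @ x + U @ u + b)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n_state, n_in = 4, 2
    W = 0.1 * rng.standard_normal((n_state, n_state))
    U = 0.1 * rng.standard_normal((n_state, n_in))
    b = np.zeros(n_state)
    x = np.zeros(n_state)
    for _ in range(10):
        x = delta_rnn_step(x, rng.standard_normal(n_in), W, U, b)
    print(x)
```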
Neural Computation (1993) 5 (3): 456–462.
Published: 01 May 1993
Abstract
A network architecture with a global-feedforward, local-recurrent construction was presented recently as a new means of modeling nonlinear dynamic time series (Back and Tsoi 1991a). The training rule used was based on minimizing the least mean square (LMS) error and performed well, although the amount of memory required for large networks may become significant if a large number of feedback connections are used. In this note, a modified training algorithm based on a technique for linear filters is presented, simplifying the gradient calculations significantly. The memory requirements are reduced from O[n_a(n_a + n_b)N_s] to O[(2n_a + n_b)N_s], where n_a is the number of feedback delays and N_s is the total number of synapses. The new algorithm reduces the number of multiply-adds needed to train each synapse by n_a at each time step. Simulations indicate that the algorithm has almost identical performance to the previous one.
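For concreteness, the sketch below simply evaluates the two memory bounds quoted in the abstract for illustrative filter orders. It assumes n_b denotes the number of feedforward delays per synapse (the abstract uses n_b in the bounds without defining it here), and the specific sizes are arbitrary.

```python
def memory_original(n_a, n_b, n_s):
    """Storage proportional to n_a * (n_a + n_b) * N_s (constant factors omitted),
    the bound quoted for the earlier training rule."""
    return n_a * (n_a + n_b) * n_s

def memory_modified(n_a, n_b, n_s):
    """Storage proportional to (2*n_a + n_b) * N_s for the modified algorithm."""
    return (2 * n_a + n_b) * n_s

if __name__ == "__main__":
    # Illustrative sizes: n_a feedback delays, n_b feedforward delays (assumed
    # meaning), N_s synapses in the network.
    n_a, n_b, n_s = 5, 5, 200
    print("original:", memory_original(n_a, n_b, n_s))  # 10000
    print("modified:", memory_modified(n_a, n_b, n_s))  # 3000
```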
Neural Computation (1992) 4 (6): 922–931.
Published: 01 November 1992
Abstract
Time-series modeling is a topic of growing interest in neural network research. Various methods have been proposed for extending the nonlinear approximation capabilities of neural networks to time-series modeling problems. A multilayer perceptron (MLP) with a global-feedforward, local-recurrent structure was recently introduced as a new approach to modeling dynamic systems. The network uses adaptive infinite impulse response (IIR) synapses (it is thus termed an IIR MLP) and was shown to have good modeling performance. One problem with linear IIR filters is that the rate of convergence depends on the covariance matrix of the input data. This extends to the IIR MLP: it learns well with white input signals but converges more slowly with nonwhite inputs. To solve this problem, the adaptive lattice multilayer perceptron (AL MLP) is introduced. Its structure performs Gram-Schmidt orthogonalization on the input data to each synapse. The method is based on the same principles as the Gram-Schmidt neural net proposed by Orfanidis (1990b), but instead of using a network layer for the orthogonalization, each synapse comprises an adaptive lattice filter. A learning algorithm is derived for the network that minimizes a mean square error criterion. Simulations are presented to show that the network architecture significantly improves the learning rate when correlated input signals are present.
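The orthogonalization step described above can be illustrated with a generic gradient-adaptive lattice stage. The sketch below is a textbook-style lattice filter, not the paper's exact per-synapse structure or learning rule: the reflection coefficients are adapted by stochastic gradient descent, and the backward prediction errors it outputs form an approximately orthogonal, Gram-Schmidt-like decomposition of the tapped-delay input.

```python
import numpy as np

class AdaptiveLatticeFilter:
    """Gradient-adaptive lattice filter of order M (illustrative sketch).

    The backward prediction errors b_0..b_M produced at each time step are
    approximately mutually orthogonal, i.e. a Gram-Schmidt-like decorrelation
    of the tapped-delay input signal.
    """

    def __init__(self, order, mu=0.01):
        self.order = order
        self.mu = mu
        self.kappa = np.zeros(order)        # reflection coefficients, one per stage
        self.b_prev = np.zeros(order + 1)   # backward errors from the previous step

    def step(self, x):
        f = np.empty(self.order + 1)
        b = np.empty(self.order + 1)
        f[0] = b[0] = x                     # stage 0: the raw input sample
        for m in range(1, self.order + 1):
            k = self.kappa[m - 1]
            f[m] = f[m - 1] + k * self.b_prev[m - 1]
            b[m] = self.b_prev[m - 1] + k * f[m - 1]
            # Stochastic-gradient update minimizing f_m^2 + b_m^2.
            self.kappa[m - 1] -= self.mu * (f[m] * self.b_prev[m - 1] + b[m] * f[m - 1])
        self.b_prev = b
        return b                            # decorrelated "backward error" signals
```

Feeding the filter a correlated signal (for example an AR(1) process) and inspecting the sample covariance of the returned backward errors shows the decorrelation effect that speeds up subsequent LMS-style weight adaptation.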