In this paper, we investigate the problem of optimal sequential learning, viewed as a problem of estimating an underlying function sequentially rather than estimating a set of parameters of the neural network. First, we arrive at a suboptimal solution to the sequential estimate that can be mapped onto a growing Gaussian radial basis function (GaRBF) network. This network adds a hidden unit for each observation. A function space approach, in which the estimates are represented as vectors in a function space, is used to develop a growth criterion that limits this growth. A simplification of the criterion leads to two conditions that must be satisfied jointly: the present input pattern must be sufficiently distant from the existing unit centers in the input space, and the network's approximation error for the given observation must be sufficiently large. This network is similar to the resource-allocating network (RAN) (Platt 1991a), and hence the RAN can be interpreted from a function space approach to sequential learning. Second, we present an enhancement to the RAN. The RAN either allocates a new unit based on the novelty of an observation or adapts the network parameters by the LMS algorithm. The function space interpretation of the RAN lends itself to an enhancement in which the extended Kalman filter (EKF) algorithm is used in place of the LMS algorithm. The RAN and the enhanced network are compared on experimental tasks of function approximation and time-series prediction; the enhanced network achieves superior performance with fewer hidden units. The approach adopted here has led us toward the minimal network required for a sequential learning problem.
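
To make the joint growth criterion concrete, the following is a minimal sketch (not the paper's implementation) of a RAN-style growing Gaussian RBF network. The thresholds `eps` and `e_min`, the `width_scale` overlap factor, and the noise/variance parameters `p0` and `r` are all illustrative assumptions. For brevity the Kalman-style update here is applied to the output weights only, where the model is linear in the parameters; the enhanced network described above applies the EKF to all network parameters (weights, centers, and widths), and the original RAN additionally shrinks the distance threshold over time.

```python
import numpy as np

class GrowingRBF:
    """Illustrative RAN-style growing Gaussian RBF network.

    A new hidden unit is allocated only when both novelty conditions hold:
      1. the input x is farther than eps from every existing center, and
      2. the prediction error magnitude exceeds e_min.
    Otherwise the existing parameters are adapted (original RAN: LMS;
    the enhancement in the paper: EKF, sketched here for weights only).
    """

    def __init__(self, eps=0.5, e_min=0.05, width_scale=0.8, p0=1.0, r=0.01):
        self.eps = eps            # distance threshold (novelty in input space)
        self.e_min = e_min        # error threshold (novelty of the observation)
        self.kappa = width_scale  # overlap factor for new unit widths
        self.p0, self.r = p0, r   # initial weight variance, observation noise
        self.centers, self.widths = [], []
        self.w = np.array([])     # output-layer weights
        self.P = np.zeros((0, 0)) # weight error covariance for the Kalman update

    def _phi(self, x):
        # Gaussian basis responses of all hidden units for input x
        return np.array([np.exp(-np.sum((x - c) ** 2) / (2 * s ** 2))
                         for c, s in zip(self.centers, self.widths)])

    def predict(self, x):
        return float(self.w @ self._phi(x)) if self.centers else 0.0

    def observe(self, x, y):
        e = y - self.predict(x)
        d = min((np.linalg.norm(x - c) for c in self.centers), default=np.inf)
        if d > self.eps and abs(e) > self.e_min:
            # allocate: center the new unit at x, weight it by the residual
            self.centers.append(np.asarray(x, dtype=float))
            self.widths.append(self.kappa * (d if np.isfinite(d) else 1.0))
            self.w = np.append(self.w, e)
            n = len(self.w)
            P = np.zeros((n, n))
            P[:-1, :-1] = self.P
            P[-1, -1] = self.p0   # prior variance of the new weight
            self.P = P
        else:
            # adapt: Kalman update of the output weights (linear case)
            h = self._phi(x)
            s = h @ self.P @ h + self.r        # innovation variance
            k = self.P @ h / s                 # Kalman gain
            self.w = self.w + k * e
            self.P = self.P - np.outer(k, h @ self.P)

# Usage: sequentially fit sin(x) from one observation at a time.
net = GrowingRBF()
rng = np.random.default_rng(0)
for _ in range(500):
    x = rng.uniform(-3.0, 3.0, size=1)
    net.observe(x, np.sin(x[0]))
print(len(net.centers), net.predict(np.array([1.0])), np.sin(1.0))
```

Because allocation requires both conditions, a novel input that the network already fits well (small error) does not grow the network, and a poorly fit input near an existing center triggers adaptation rather than growth; this is what keeps the resulting network small.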
