Recurrent neural networks with fixed weights have been shown in practice to adaptively classify time-varying signals in the presence of additive noise and parametric perturbations. We address the question: Can this ability be explained theoretically? We provide a mathematical proof that these networks retain this ability even when the parametric perturbations enter the signals nonlinearly. The restrictions we impose on the signals to be classified are that they satisfy a nondegeneracy assumption and that the noise amplitude is sufficiently small. Furthermore, we demonstrate that such networks can not only adaptively classify uncertain signals but also recover the values of the signals' uncertain parameters, up to their equivalence classes.
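As a hedged illustration of the setting only (not the paper's proof or construction), the following Python sketch shows a discrete-time recurrent network whose weights are all fixed at classification time, applied to signals whose class is determined by an uncertain parameter entering nonlinearly, under small additive noise. All names, the signal model, and the echo-state-style architecture are assumptions chosen for the example; any adaptation occurs only in the evolving network state.

```python
# Hypothetical sketch of the setting: fixed-weight RNN classification of
# time-varying signals with an uncertain parameter entering nonlinearly.
# This is NOT the construction from the paper, only an illustration.
import numpy as np

rng = np.random.default_rng(0)

# Signal model (assumed for this example): class k fixes a parameter range;
# theta is uncertain and enters the signal nonlinearly; noise is small.
def make_signal(k, T=200, noise=0.02):
    theta = rng.uniform(*[(0.5, 1.0), (1.5, 2.0)][k])  # uncertain parameter
    t = np.arange(T)
    clean = np.tanh(np.sin(theta * 0.2 * t))           # nonlinear in theta
    return clean + noise * rng.standard_normal(T), k

# Fixed-weight recurrent network (echo-state-style reservoir); the
# recurrent and input weights are drawn once and never adapted.
N = 100
W = rng.standard_normal((N, N)) / np.sqrt(N) * 0.9     # stable recurrence
w_in = rng.standard_normal(N)

def run(u):
    x = np.zeros(N)
    states = []
    for ut in u:
        x = np.tanh(W @ x + w_in * ut)  # state, not weights, adapts to the signal
        states.append(x.copy())
    return np.array(states)

# Fit a linear readout offline; thereafter every weight is held fixed.
X, y = [], []
for _ in range(200):
    u, k = make_signal(rng.integers(2))
    X.append(run(u)[-1])
    y.append(k)
X, y = np.array(X), np.array(y)
w_out = np.linalg.lstsq(X, 2.0 * y - 1.0, rcond=None)[0]

# Test-time classification with all weights fixed.
correct = 0
for _ in range(100):
    u, k = make_signal(rng.integers(2))
    correct += int((run(u)[-1] @ w_out > 0) == bool(k))
print(f"test accuracy with fixed weights: {correct}%")
```

In this toy version, discrimination between parameter classes is carried entirely by the network's internal state trajectory, which loosely mirrors the abstract's point that fixed-weight networks can behave adaptively.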
