Pollack (1991) demonstrated that second-order recurrent neural networks can act as dynamical recognizers for formal languages when trained on positive and negative examples, and observed both phase transitions in learning and iterated function system-like fractal state sets. Follow-on work focused mainly on extracting and minimizing a finite-state automaton (FSA) from the trained network. However, such networks can induce languages that are not regular and are therefore not equivalent to any FSA; indeed, it may be simpler for a small network to fit its training data by inducing such a nonregular language. But when is the network's language not regular? In this article, using a low-dimensional network capable of learning all of the Tomita data sets, we present an empirical method for testing whether the language induced by the network is regular. We also provide a detailed ε-machine analysis of trained networks for both regular and nonregular languages.
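For reference, the second-order (multiplicative) state update behind such dynamical recognizers, in the general form used by Pollack (1991) and related work, couples the current state and input through a third-order weight tensor; the sigmoid nonlinearity, bias term, and one-hot input encoding in this sketch are illustrative assumptions, not details taken from this abstract:

\[
z_i^{t+1} = \sigma\!\Big( \sum_{j,k} W_{ijk}\, z_j^{t}\, x_k^{t} + b_i \Big)
\]

Here \(z^{t}\) is the hidden state, \(x^{t}\) the (one-hot) input symbol at time \(t\), \(W\) the third-order weight tensor, \(b\) a bias vector, and \(\sigma\) a sigmoid. Because the input multiplicatively gates the state transition, each symbol selects an affine-plus-squashing map on the state space, which is why iterated function system-like fractal state sets can emerge.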
