Mike Casey
1–2 of 2 results
Journal Articles
Correction to Proof That Recurrent Neural Networks Can Robustly Recognize Only Regular Languages
Neural Computation (1998) 10 (5): 1067–1069.
Published: 01 July 1998
Abstract
Our earlier article, “The Dynamics of Discrete-Time Computation, with Application to Recurrent Neural Networks and Finite State Machine Extraction” (Casey, 1996), contains a corollary showing that finite-dimensional recurrent neural networks with noise in their state variables that perform algorithmic computations can perform only finite state machine computations. The proof of the corollary is technically incorrect. The problem arose because the proof of the theorem on which the corollary is based was more general than the statement of the theorem, and it was the contents of the proof rather than the statement that were used to prove the corollary. In this note, we state the theorem in the necessary generality and then give the corrected proof of the corollary.
Journal Articles
The Dynamics of Discrete-Time Computation, with Application to Recurrent Neural Networks and Finite State Machine Extraction
Neural Computation (1996) 8 (6): 1135–1178.
Published: 01 August 1996
Abstract
Recurrent neural networks (RNNs) can learn to perform finite state computations. It is shown that an RNN performing a finite state computation must organize its state space to mimic the states of the minimal deterministic finite state machine that can perform that computation, and a precise description of the attractor structure of such systems is given. This knowledge effectively predicts the dynamics in activation space, which allows one to understand an RNN's computation in spite of the complexity of its activation dynamics. The theory provides a framework for understanding finite state machine (FSM) extraction techniques and can be used to improve training methods for RNNs performing FSM computations. This provides an example of a successful approach to understanding a general class of complex systems that have not been explicitly designed, e.g., systems that have evolved or learned their internal structure.
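To make the FSM-extraction idea mentioned in the abstract concrete, here is a minimal sketch of one common quantization-style recipe: run a recurrent network, discretize its hidden states into cells, and read off a transition graph over those cells. Everything specific here (the toy random-weight RNN, the binary alphabet, the grid size 0.5, and the sign-of-readout acceptance test) is an illustrative assumption, not the construction or notation of Casey (1996).

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy single-layer RNN over the binary alphabet {0, 1} (illustrative weights).
H = 4                                       # hidden dimension
W_h = rng.normal(scale=0.8, size=(H, H))    # recurrent weights
W_x = rng.normal(scale=0.8, size=(H, 2))    # input weights, one column per symbol
w_out = rng.normal(size=H)                  # readout used as an accept/reject score

def step(h, sym):
    """One RNN transition: next hidden state from current state and input symbol."""
    return np.tanh(W_h @ h + W_x[:, sym])

def quantize(h, grid=0.5):
    """Map a hidden state to a discrete cell; cells play the role of FSM states."""
    return tuple(np.round(h / grid).astype(int))

# Explore the quantized state space, recording one transition per (cell, symbol).
h0 = np.zeros(H)
start = quantize(h0)
transitions = {}                            # (cell, symbol) -> cell
accepting = set()
frontier = [(h0, start)]
seen = {start}
for _ in range(200):                        # bounded exploration
    if not frontier:
        break
    h, cell = frontier.pop()
    if w_out @ h > 0:                       # illustrative acceptance criterion
        accepting.add(cell)
    for sym in (0, 1):
        h2 = step(h, sym)
        cell2 = quantize(h2)
        transitions[(cell, sym)] = cell2
        if cell2 not in seen:
            seen.add(cell2)
            frontier.append((h2, cell2))

print(f"extracted {len(seen)} states, {len(transitions)} transitions, "
      f"{len(accepting)} accepting")
```

In practice the resulting machine is then minimized and validated against the network's behavior; the paper's contribution is the theoretical account of why, for a robust finite state computation, such a discretization exists at all.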