Elko B. Tchernev
Journal Articles
Neural Computation (2005) 17 (9): 1911–1920.
Published: 01 September 2005
Abstract
It was shown in Phatak, Choi, and Koren (1993) that the neural network implementation of the n-k-n encoder-decoder has a minimal hidden-layer size of k = 2, and it was shown how to construct such a network. A proof was given in Phatak and Koren (1995) that in order to achieve perfect fault tolerance by replicating the hidden layer, the required number of replications is at least three. In this note, exact lower bounds are derived for the number of replications as a function of n, for the n-2-n network and for the n-log₂n-n network.
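The replication construction this abstract refers to can be illustrated with a short numpy sketch. The weights below are random stand-ins rather than a trained n-2-n encoder-decoder, and the tanh hidden activation, the 1/r scaling of the output weights, and the stuck-at-zero fault model are illustrative assumptions, not details taken from the papers. The sketch only shows that replicating the hidden layer r times while dividing the hidden-to-output weights by r leaves the fault-free function unchanged and limits a single faulty hidden unit to 1/r of one replica's contribution.

import numpy as np

def forward(x, W1, b1, W2, b2, faulty_unit=None):
    # One tanh hidden layer; a fault is modeled as a hidden unit stuck at zero.
    h = np.tanh(W1 @ x + b1)
    if faulty_unit is not None:
        h[faulty_unit] = 0.0
    return W2 @ h + b2

def replicate_hidden(W1, b1, W2, r):
    # Stack r copies of the hidden layer and divide the output weights by r,
    # so each replica carries 1/r of the hidden-to-output contribution.
    return np.tile(W1, (r, 1)), np.tile(b1, r), np.tile(W2, (1, r)) / r

n, r = 8, 3
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(2, n)), rng.normal(size=2)   # stand-in n-2-n weights
W2, b2 = rng.normal(size=(n, 2)), rng.normal(size=n)

x = np.zeros(n)
x[3] = 1.0                                             # one-hot input pattern

y = forward(x, W1, b1, W2, b2)
W1r, b1r, W2r = replicate_hidden(W1, b1, W2, r)
assert np.allclose(y, forward(x, W1r, b1r, W2r, b2))   # fault-free function preserved

y_fault = forward(x, W1r, b1r, W2r, b2, faulty_unit=0) # one replicated unit stuck at zero
print(np.max(np.abs(y_fault - y)))                     # deviation is 1/r of that unit's contribution

How large r must be, as a function of n, before no such single fault can change the decoded output is the question the note answers with exact lower bounds.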
Journal Articles
Neural Computation (2005) 17 (7): 1646–1664.
Published: 01 July 2005
Abstract
Particular levels of partial fault tolerance (PFT) in feedforward artificial neural networks of a given size can be obtained by redundancy (replicating a smaller normally trained network), by design (training specifically to increase PFT), and by a combination of the two (replicating a smaller PFT-trained network). This letter investigates the method of achieving the highest PFT per network size (total number of units and connections) for classification problems. It concludes that for nontoy problems, there exists a normally trained network of optimal size that produces the smallest fully fault-tolerant network when replicated. In addition, it shows that for particular network sizes, the best level of PFT is achieved by training a network of that size for fault tolerance. The results and discussion demonstrate how the outcome depends on the levels of saturation of the network nodes when classifying data points. With simple training tasks, where the complexity of the problem and the size of the network are well within the ability of the training method, the hidden-layer nodes operate close to their saturation points, and classification is clean. Under such circumstances, replicating the smallest normally trained correct network yields the highest PFT for any given network size. For hard training tasks (difficult classification problems or network sizes close to the minimum), normal training obtains networks that do not operate close to their saturation points, and outputs are not as close to their targets. In this case, training a larger network for fault tolerance yields better PFT than replicating a smaller, normally trained network. However, since fault-tolerant training on its own produces networks that operate closer to their linear areas than normal training, replicating normally trained networks ultimately leads to better PFT than replicating fault-tolerant networks of the same initial size.
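One way to make the comparison in this abstract concrete is a fault-injection measurement of PFT. The sketch below is an assumed formulation for illustration, not the letter's actual procedure: it models a fault as a single hidden unit stuck at zero and reports classification accuracy averaged over all such single faults; the stand-in data and weights would in practice be replaced by the trained networks being compared.

import numpy as np

def predict(X, y_dummy, W1, b1, W2, b2, faulty_unit=None):
    # Feedforward classifier with one tanh hidden layer; X has one example per row.
    H = np.tanh(X @ W1 + b1)
    if faulty_unit is not None:
        H[:, faulty_unit] = 0.0          # single hidden unit stuck at zero
    return np.argmax(H @ W2 + b2, axis=1)

def partial_fault_tolerance(X, y, W1, b1, W2, b2):
    # Fault-free accuracy, and accuracy averaged over every single hidden-unit fault.
    fault_free = np.mean(predict(X, y, W1, b1, W2, b2) == y)
    faulted = [np.mean(predict(X, y, W1, b1, W2, b2, faulty_unit=j) == y)
               for j in range(W1.shape[1])]
    return fault_free, float(np.mean(faulted))

# Stand-in data and weights for a runnable example.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 4))
y = (X[:, 0] > 0).astype(int)
W1, b1 = rng.normal(size=(4, 6)), rng.normal(size=6)
W2, b2 = rng.normal(size=(6, 2)), rng.normal(size=2)

print(partial_fault_tolerance(X, y, W1, b1, W2, b2))

Applied to a replicated small normally trained network and to a fault-tolerance-trained network of the same total size, the two numbers give the kind of comparison the letter describes: which construction loses less accuracy under single faults.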