Abstract
The output functions of several major classes of artificial neural networks are linear combinations of elements of approximately flat sets. This observation provides a tool for understanding the precision problem, as well as a rationale for mixing types of networks. Approximate flatness also helps explain the power of artificial neural network techniques relative to series regressions: series regressions take linear combinations of flat sets, while neural networks take linear combinations from the much larger class of approximately flat sets.
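The contrast can be made concrete with a short sketch; the notation below (the basis functions φ_j, the ridge activation σ, and the parameters β_j, a_j, b_j) is illustrative and not taken from the paper.

```latex
% Series regression: a linear combination over a basis fixed in advance, so the
% achievable functions form the flat (linear) span of {phi_1, ..., phi_K}:
\[
  f(x) \;=\; \sum_{j=1}^{K} \beta_j \, \phi_j(x),
  \qquad \phi_1,\dots,\phi_K \ \text{fixed in advance}.
\]
% Single-hidden-layer network: each basis element is drawn from a parametrized
% family of ridge functions, so the set of achievable basis elements is only
% approximately flat, and the resulting function class is much larger:
\[
  f(x) \;=\; \sum_{j=1}^{K} \beta_j \, \sigma\!\left(a_j^{\top} x + b_j\right),
  \qquad (a_j, b_j) \ \text{adjustable during fitting}.
\]
```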