In this paper the topological and geometric properties of the weight solutions of multilayer perceptron (MLP) networks under the mean squared error (MSE) criterion are characterized. The characterization is obtained by analyzing a homotopy from linear to nonlinear networks in which the hidden node function is gradually deformed from a linear map into the final sigmoidal nonlinearity. Two geometric perspectives on this optimization process are developed. The generic topology of the nonlinear MLP weight solutions is described and related to the geometric interpretations, error surfaces, and homotopy paths, both analytically and through carefully constructed examples. These results show that although the natural homotopy provides a practically valuable heuristic for training, it suffers from a number of theoretical and practical difficulties. The linear system is a bifurcation point of the homotopy equations, so the solution paths are generically discontinuous there. Bifurcations and infinite solutions also occur for sets of training data that are not of measure zero. These results weaken the guarantees of global convergence and exhaustive behavior normally associated with homotopy methods. Nevertheless, the analyses provide a clear understanding of the relationship between linear and nonlinear perceptron networks, and thus a firm foundation for the development of more powerful training methods. The geometric perspectives and generic topological results describing the nature of the solutions are, moreover, broadly applicable to network analysis and algorithm evaluation.
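The abstract does not spell out the form of the homotopy, but a natural and common choice is a convex blend between the identity and the sigmoidal nonlinearity, with the blend parameter annealed from 0 (purely linear network) to 1 (the final MLP). The sketch below illustrates this idea for a toy regression problem; the blend form phi_lambda(u) = (1 - lam)*u + lam*tanh(u), the network size, the data, and the training schedule are all illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch of a linear-to-nonlinear homotopy for MLP training.
# Assumed blend: phi_lambda(u) = (1 - lam)*u + lam*tanh(u), so that
# lam = 0 gives a purely linear network and lam = 1 the sigmoidal MLP.
import numpy as np

rng = np.random.default_rng(0)

def phi(u, lam):
    """Homotopy activation: linear at lam = 0, tanh at lam = 1."""
    return (1.0 - lam) * u + lam * np.tanh(u)

def dphi(u, lam):
    """Derivative of the blended activation w.r.t. its input."""
    return (1.0 - lam) + lam * (1.0 - np.tanh(u) ** 2)

# Toy regression data: one input, one output, four hidden units.
X = rng.uniform(-2, 2, size=(64, 1))
Y = np.sin(X)                               # target function

W1 = rng.normal(scale=0.5, size=(1, 4))     # input -> hidden weights
W2 = rng.normal(scale=0.5, size=(4, 1))     # hidden -> output weights

def mse_and_grads(lam):
    H_in = X @ W1                           # hidden pre-activations
    H = phi(H_in, lam)                      # hidden outputs
    P = H @ W2                              # network predictions
    E = P - Y
    mse = np.mean(E ** 2)
    # Backpropagate the MSE gradient through the blended activation.
    dP = 2.0 * E / len(X)
    gW2 = H.T @ dP
    dH = (dP @ W2.T) * dphi(H_in, lam)
    gW1 = X.T @ dH
    return mse, gW1, gW2

# Path following: anneal lam from 0 (linear) to 1 (sigmoidal),
# re-converging the weights by gradient descent at each homotopy step.
for lam in np.linspace(0.0, 1.0, 11):
    for _ in range(2000):
        mse, gW1, gW2 = mse_and_grads(lam)
        W1 -= 0.1 * gW1
        W2 -= 0.1 * gW2
    print(f"lam = {lam:.1f}  MSE = {mse:.5f}")
```

Warm-starting at the converged linear solution and tracking it as lam increases is precisely the path following analyzed in the paper; the bifurcation at the linear endpoint is why such a path need not vary continuously with lam.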
