A matrix method is described that optimizes the set of weights and biases on the output side of a network with a single hidden layer of neurons, given any set of weights and biases on the input side of the hidden layer. All input patterns are included in a single optimization cycle. A simple iterative minimization procedure is then used to optimize the weights and biases on the input side of the hidden layer. Many test problems have been solved, confirming the validity of the method. The results suggest that, for a network with a single layer of hidden sigmoidal nodes, the accuracy of a functional representation decreases as the nonlinearity of the target function increases.
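The two-stage scheme described above can be sketched in NumPy. This is a hedged illustration, not the paper's exact algorithm: the exact output-side solve is expressed here as a linear least-squares problem (`np.linalg.lstsq`) over all input patterns at once, and the "simple iterative minimization" of the input-side parameters is stood in for by a generic random-perturbation hill climb. All function names, the hill-climb step size, and the network width are assumptions introduced for the example.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def solve_output_side(H, T):
    """Optimal output weights and bias for fixed hidden activations H.

    A column of ones is appended so the least-squares solve fits the
    output bias along with the output weights, using every pattern
    in one optimization cycle.
    """
    H_aug = np.hstack([H, np.ones((H.shape[0], 1))])
    W_out, *_ = np.linalg.lstsq(H_aug, T, rcond=None)
    return W_out

def train(X, T, n_hidden=20, n_iters=200, step=0.05, seed=None):
    """Alternate: exact output-side solve + iterative input-side search.

    The input-side update is a placeholder hill climb, standing in for
    whatever 'simple iterative minimization procedure' is used.
    """
    rng = np.random.default_rng(seed)
    W_in = rng.normal(size=(X.shape[1], n_hidden))
    b_in = rng.normal(size=n_hidden)
    best_err, best_out = np.inf, None
    for _ in range(n_iters):
        # Perturb input-side weights and biases (assumed update rule).
        cand_W = W_in + step * rng.normal(size=W_in.shape)
        cand_b = b_in + step * rng.normal(size=b_in.shape)
        H = sigmoid(X @ cand_W + cand_b)
        # Output side is solved exactly for this candidate hidden layer.
        W_out = solve_output_side(H, T)
        pred = np.hstack([H, np.ones((H.shape[0], 1))]) @ W_out
        err = np.mean((pred - T) ** 2)
        if err < best_err:  # keep only improving candidates
            best_err, W_in, b_in, best_out = err, cand_W, cand_b, W_out
    return W_in, b_in, best_out, best_err
```

Because the output-side parameters are globally optimal for any fixed hidden layer, the iterative search only has to explore the input-side parameters, which is the main economy of the approach.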