Most neural network applications rely on the fundamental approximation property of feedforward networks, and supervised learning is the means of realizing this approximate mapping. In a realistic problem setting, the learning process must be devised from the available data. This entails choosing an appropriate set of parameters to avoid overfitting, using a learning algorithm that is efficient in both computation and memory, ensuring the accuracy of training as measured by the training error, and testing and cross-validating for generalization. We develop a comprehensive supervised learning algorithm that addresses these issues. The algorithm combines training and pruning into a single procedure by exploiting a common observation: Jacobian rank deficiency in feedforward networks. It not only reduces training time and overall complexity but also achieves training accuracy and generalization comparable to those of more standard approaches. Extensive simulation results demonstrate the effectiveness of the algorithm.
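To make the notion of Jacobian rank deficiency concrete, the following sketch (not the paper's algorithm; the network size, data, and tolerance are hypothetical choices for illustration) builds a small over-parameterized feedforward network with a duplicated hidden unit, forms the Jacobian of the network outputs with respect to all weights by finite differences, and estimates its numerical rank from the singular values. Columns associated with near-zero singular values correspond to parameters that barely affect the fit and are the natural candidates for pruning.

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny 1-4-1 tanh network. Duplicating hidden unit 0 into unit 1 makes
# some Jacobian columns linearly dependent, so the Jacobian is rank
# deficient by construction.
n_in, n_hid, n_out = 1, 4, 1
W1 = rng.standard_normal((n_hid, n_in))
b1 = rng.standard_normal(n_hid)
W1[1] = W1[0]          # duplicated hidden unit (redundant parameters)
b1[1] = b1[0]
W2 = rng.standard_normal((n_out, n_hid))
b2 = rng.standard_normal(n_out)

X = rng.uniform(-1.0, 1.0, size=(20, n_in))  # 20 training inputs

shapes = [(n_hid, n_in), (n_hid,), (n_out, n_hid), (n_out,)]

def forward(params, x):
    W1, b1, W2, b2 = params
    return W2 @ np.tanh(W1 @ x + b1) + b2

def pack(params):
    return np.concatenate([p.ravel() for p in params])

def unpack(theta):
    out, i = [], 0
    for shape in shapes:
        size = int(np.prod(shape))
        out.append(theta[i:i + size].reshape(shape))
        i += size
    return out

theta0 = pack([W1, b1, W2, b2])

# Finite-difference Jacobian: one row per training sample, one column
# per network parameter.
eps = 1e-6
J = np.zeros((len(X), theta0.size))
for j in range(theta0.size):
    tp, tm = theta0.copy(), theta0.copy()
    tp[j] += eps
    tm[j] -= eps
    for i, x in enumerate(X):
        J[i, j] = (forward(unpack(tp), x) - forward(unpack(tm), x))[0] / (2 * eps)

# Numerical rank: count singular values above a relative tolerance.
s = np.linalg.svd(J, compute_uv=False)
rank = int(np.sum(s > 1e-8 * s[0]))
print(f"Jacobian shape {J.shape}, numerical rank {rank} of {theta0.size} parameters")
```

Because the duplicated unit produces identical Jacobian columns, the numerical rank comes out strictly below the parameter count, which is exactly the redundancy a combined training-and-pruning procedure can exploit.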