Judd (1988) and Blum and Rivest (1988) have recently proved that the loading problem for neural networks is NP-complete. This makes it very unlikely that any algorithm, like backpropagation, that varies the weights on a network of fixed size and topology will be able to learn in polynomial time. However, Valiant has recently proposed a learning protocol (Valiant 1984) that allows one to sensibly consider generalization by learning algorithms that are free to add neurons and synapses, as well as simply to adjust weights. Within this context, standard circuit-complexity arguments show that learning algorithms with such freedom can solve in polynomial time any learning problem that can be solved in polynomial time by any algorithm whatsoever. In this sense, neural nets are universal learners, capable of learning any learnable class of concepts.
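To make the flavor of Valiant's protocol concrete, the following is a minimal illustrative sketch (not from the papers cited above) of probably-approximately-correct learning for one of the simplest concept classes, intervals on the real line. The learner sees labeled random examples of a hidden target interval and outputs the tightest interval consistent with the positive examples; the names `pac_learn_interval` and `run_demo`, the particular target interval, and the sample sizes are all assumptions chosen for illustration.

```python
import random

def pac_learn_interval(examples):
    """Hypothesize the tightest interval containing all positive examples.

    examples: list of (x, label) pairs, label True iff x is in the
    hidden target interval. Returns (lo, hi) as the learned hypothesis.
    """
    pos = [x for x, label in examples if label]
    if not pos:
        # No positive examples seen: output an empty hypothesis.
        return (0.0, 0.0)
    return (min(pos), max(pos))

def run_demo(seed=0, m=500, n_test=10000):
    """Draw m random training examples of a hidden interval (an
    assumed target, for illustration), learn a hypothesis, and
    estimate its generalization error on fresh random points."""
    random.seed(seed)
    target_lo, target_hi = 0.3, 0.7  # hidden concept (illustrative)
    train = [(x, target_lo <= x <= target_hi)
             for x in (random.random() for _ in range(m))]
    lo, hi = pac_learn_interval(train)
    # Empirical error: fraction of fresh points where the hypothesis
    # and the target concept disagree.
    test = [random.random() for _ in range(n_test)]
    mistakes = sum((target_lo <= x <= target_hi) != (lo <= x <= hi)
                   for x in test)
    return mistakes / n_test
```

With a few hundred examples the tightest-fit hypothesis disagrees with the target only on the thin slivers near the interval's endpoints, so its error estimate is small with high probability over the random sample, which is the "probably approximately correct" guarantee. Learning algorithms for richer concept classes in Valiant's framework have the same shape: draw polynomially many examples, output a consistent hypothesis from an appropriately sized hypothesis class.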