Our aim is to stress the importance of the conditioning of the Jacobian matrix for model validation. We also comment on Monari and Dreyfus (2002), where, following Rivals and Personnaz (2000), it is proposed to discard neural candidates that are likely to overfit and/or for which quantities of interest, such as confidence intervals, cannot be computed accurately. In Rivals and Personnaz (2000), we argued that such models should be discarded on the basis of the condition number of their Jacobian matrix. Monari and Dreyfus (2002), however, suggest basing the decision on the computed values of the leverages, the diagonal elements of the matrix of orthogonal projection onto the range of the Jacobian (the "hat" matrix): they propose to discard a model whose computed leverages fall outside certain theoretical bounds, claiming that this is a symptom of rank deficiency of the Jacobian.
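The leverages in question are the diagonal elements h_ii of the hat matrix H = Z(Z'Z)^{-1}Z', where Z is the Jacobian; theoretically they satisfy 0 <= h_ii <= 1 and sum to the rank of Z. A minimal numerical sketch (with a hypothetical, nearly rank-deficient Jacobian, not one from the papers discussed) shows that computed leverages can respect these bounds even when the condition number of Z is enormous:

```python
import numpy as np

# Hypothetical Jacobian with two nearly collinear columns,
# i.e., very close to rank deficiency.
n, p = 20, 3
rng = np.random.default_rng(0)
x = rng.standard_normal(n)
Z = np.column_stack([x, x + 1e-10 * rng.standard_normal(n), x**2])

cond = np.linalg.cond(Z)  # huge: Z is very ill-conditioned

# Leverages from the thin QR factorization Z = QR: h_ii = ||q_i||^2,
# the diagonal of the hat matrix H = Q Q'.
Q, _ = np.linalg.qr(Z)
h = np.sum(Q**2, axis=1)

print(f"condition number ~ {cond:.1e}")
print("bounds 0 <= h_ii <= 1 respected:", np.all((h >= 0) & (h <= 1 + 1e-12)))
print("sum of leverages (trace of H):", h.sum())
```

Despite a condition number far beyond any reasonable acceptance threshold, every computed leverage lies within its theoretical bounds, illustrating why the bounds alone cannot flag ill-conditioning.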

We question this proposition because, theoretically, the hat matrix is defined whatever the rank of the Jacobian, and because, in practice, the computed leverages of a very ill-conditioned network may respect their theoretical bounds even though confidence intervals cannot be estimated accurately enough, two facts that have escaped Monari and Dreyfus's attention. We also recall the most accurate way to estimate the leverages and the properties of these estimates. Finally, we make an additional comment concerning the estimation of performance in Monari and Dreyfus (2002).
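On the accuracy of leverage estimation: a standard numerically stable route (a sketch in the spirit of the abstract, not necessarily the exact procedure detailed in the paper) computes h_ii = ||q_i||^2 from the thin QR factorization of the Jacobian, avoiding the normal-equations matrix Z'Z, whose condition number is the square of that of Z:

```python
import numpy as np

# Hypothetical ill-conditioned Jacobian: polynomial columns with one
# badly scaled column (illustrative only).
n, p = 50, 4
t = np.linspace(0.0, 1.0, n)
Z = np.vander(t, p, increasing=True) * np.array([1.0, 1.0, 1.0, 1e-8])

# Stable route: h_ii = ||q_i||^2 with Z = QR (thin QR).
Q, _ = np.linalg.qr(Z)
h_qr = np.sum(Q**2, axis=1)

# Naive route via the normal equations: diag(Z (Z'Z)^{-1} Z').
# Forming Z'Z squares the condition number and degrades accuracy.
G = Z.T @ Z
h_naive = np.einsum('ij,jk,ik->i', Z, np.linalg.inv(G), Z)

print("max |h_qr - h_naive| =", np.abs(h_qr - h_naive).max())
print("sum of QR leverages (= p for a full-rank Jacobian):", h_qr.sum())
```

The QR-based estimates stay within [0, 1] and sum exactly to p up to rounding, while the normal-equations route inherits the squared conditioning of Z'Z.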
