Léon Personnaz
Journal Articles
Neural Computation (2004) 16 (2): 401–418.
Published: 01 February 2004
Abstract
Our aim is to stress the importance of Jacobian matrix conditioning for model validation. We also comment on Monari and Dreyfus (2002), where, following Rivals and Personnaz (2000), it is proposed to discard candidate neural models that are likely to overfit and/or for which quantities of interest, such as confidence intervals, cannot be computed accurately. In Rivals and Personnaz (2000), we argued that such models should be discarded on the basis of the condition number of their Jacobian matrix. Monari and Dreyfus (2002), however, suggest basing the decision on the computed values of the leverages, the diagonal elements of the projection matrix onto the range of the Jacobian, or "hat" matrix: they propose to discard a model if its computed leverages fall outside certain theoretical bounds, claiming that this is a symptom of rank deficiency of the Jacobian. We question this proposition because, theoretically, the hat matrix is defined whatever the rank of the Jacobian, and because, in practice, the computed leverages of very ill-conditioned networks may respect their theoretical bounds even though confidence intervals cannot be estimated accurately enough, two facts that escaped Monari and Dreyfus's attention. We also recall the most accurate way to estimate the leverages and the properties of these estimates. Finally, we make an additional comment concerning the performance estimation in Monari and Dreyfus (2002).
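To illustrate the quantities at stake, here is a minimal sketch in Python/NumPy. It is not the authors' code: the synthetic Jacobian J, its dimensions, and all variable names are assumptions for illustration only. It computes the condition number of a Jacobian and its leverages from a thin singular value decomposition, one numerically sound route that avoids forming J(J^T J)^{-1} J^T explicitly (which squares the condition number); whether this is exactly the estimation procedure recalled in the paper is not claimed here.

import numpy as np

rng = np.random.default_rng(0)
N, q = 50, 5
J = rng.standard_normal((N, q))  # synthetic stand-in for a fitted model's Jacobian

# Thin SVD of J; for a full-column-rank J, the hat matrix is H = U @ U.T.
U, s, Vt = np.linalg.svd(J, full_matrices=False)

# Condition number: ratio of the largest to the smallest singular value.
cond = s[0] / s[-1]

# Leverages h_ii = squared norm of the i-th row of U.
h = np.sum(U**2, axis=1)

# Theoretical bounds on leverages: 0 <= h_ii <= 1, and they sum to rank(J).
print(f"cond(J) = {cond:.2e}")
print(f"min h = {h.min():.3f}, max h = {h.max():.3f}, sum h = {h.sum():.3f}")

Note that, as the abstract argues, a very large cond(J) can coexist with leverages that respect their bounds: the two diagnostics are not interchangeable.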
Journal Articles
Neural Computation (1999) 11 (4): 863–870.
Published: 15 May 1999
Abstract
In response to Zhu and Rohwer (1996), a recent communication (Goutte, 1997) established that leave-one-out cross validation is not subject to the "no-free-lunch" criticism. Despite this optimistic conclusion, we show here that cross validation performs very poorly for the selection of linear models compared with classical statistical tests. We conclude that statistical tests are preferable to cross validation for linear as well as nonlinear model selection.
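For concreteness, a minimal sketch of leave-one-out cross validation for a linear model; the design matrix X, coefficients, and noise level below are synthetic assumptions, not the paper's experimental setup. It uses the standard closed-form identity for ordinary least squares, e_i^loo = e_i / (1 - h_ii), so no refitting is needed; the leverages h_ii from the preceding abstract reappear here.

import numpy as np

rng = np.random.default_rng(1)
N, q = 40, 3
X = rng.standard_normal((N, q))
beta = np.array([1.0, -2.0, 0.5])
y = X @ beta + 0.1 * rng.standard_normal(N)

# Fit by least squares and form the ordinary residuals.
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
e = y - X @ coef

# Leverages from the thin QR of X: h_ii = squared norm of the i-th row of Q.
Q, _ = np.linalg.qr(X)
h = np.sum(Q**2, axis=1)

# Closed-form leave-one-out residuals and the LOO mean squared error.
e_loo = e / (1.0 - h)
print(f"LOO MSE = {np.mean(e_loo**2):.4f}")

The abstract's point is about the selection criterion, not the computation: even computed exactly, the LOO score can rank linear candidate models worse than a classical statistical test does.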