Gérard Dreyfus
1-3 of 3
Journal Articles
Reply to the Comments on “Local Overfitting Control via Leverages” in “Jacobian Conditioning Analysis for Model Validation” by I. Rivals and L. Personnaz
Publisher: Journals Gateway
Neural Computation (2004) 16 (2): 419–443.
Published: 01 February 2004
Abstract
“Jacobian Conditioning Analysis for Model Validation” by Rivals and Personnaz in this issue is a comment on Monari and Dreyfus (2002). In this reply, we disprove their claims. We point to flawed reasoning in the theoretical comments and to errors and inconsistencies in the numerical examples. Our replies are substantiated by seven counterexamples, inspired by actual data, which show that the comments on the accuracy of the computation of the leverages are unsupported and that following the approach they advocate leads to discarding valid models or validating overfitted models.
Journal Articles
Local Overfitting Control via Leverages
Publisher: Journals Gateway
Neural Computation (2002) 14 (6): 1481–1506.
Published: 01 June 2002
Abstract
We present a novel approach to dealing with overfitting in black box models. It is based on the leverages of the samples, that is, on the influence that each observation has on the parameters of the model. Since overfitting is the consequence of the model specializing on specific data points during training, we present a selection method for nonlinear models based on the estimation of leverages and confidence intervals. It allows both the selection among various models of equivalent complexities corresponding to different minima of the cost function (e.g., neural nets with the same number of hidden units) and the selection among models having different complexities (e.g., neural nets with different numbers of hidden units). A complete model selection methodology is derived.
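A minimal sketch of the leverage idea described in this abstract, assuming the standard definition used for locally linear models: the leverage of example i is the i-th diagonal element of the hat matrix built from the Jacobian Z of the model outputs with respect to the parameters (for a linear model, Z is the design matrix). The function name and the toy data are illustrative, not taken from the article.

```python
import numpy as np

def leverages(Z):
    """Leverages h_ii = diag(Z (Z^T Z)^{-1} Z^T), computed stably.

    With the thin SVD Z = U S V^T, the hat matrix is U U^T,
    so h_ii is the squared norm of row i of U.
    """
    U, _, _ = np.linalg.svd(Z, full_matrices=False)
    return np.sum(U**2, axis=1)

# Toy Jacobian: 5 examples, 2 parameters (a linear model's design matrix).
# The last example is far from the others, so it has high influence.
Z = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [1.0, 2.0],
              [1.0, 3.0],
              [1.0, 10.0]])
h = leverages(Z)
# Leverages lie in [0, 1] and sum to the number of parameters;
# a leverage near 1 flags a point the model may have specialized on.
```

A model whose fit relies on a few near-unit leverages is a candidate for rejection under this kind of criterion, which is the intuition behind the selection method the abstract outlines.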
Journal Articles
The Canonical Form of Nonlinear Discrete-Time Models
Publisher: Journals Gateway
Neural Computation (1998) 10 (1): 133–164.
Published: 01 January 1998
Abstract
Discrete-time models of complex nonlinear processes, whether physical, biological, or economic, usually take the form of systems of coupled difference equations. In analyzing such systems, one of the first tasks is to find a state-space description of the process—that is, a set of state variables and the associated state equations. We present a methodology for finding a set of state variables and a canonical representation of a class of systems described by a set of recurrent discrete-time, time-invariant equations. In the field of neural networks, this is of special importance since the application of standard training algorithms requires the network to be in a canonical form. Several illustrative examples are presented.
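To illustrate the kind of rewriting the abstract refers to, here is a hypothetical example (not taken from the article): a second-order recurrence y(k) = a·y(k-1) + b·y(k-2) + u(k-1) put into state-space form by choosing the state vector x(k) = [y(k), y(k-1)]. The coefficients and function names are illustrative assumptions.

```python
import numpy as np

def step(x, u, a=0.5, b=-0.2):
    """One step of the state equations derived from the recurrence:
       x1(k+1) = a*x1(k) + b*x2(k) + u(k)
       x2(k+1) = x1(k)
    """
    return np.array([a * x[0] + b * x[1] + u, x[0]])

def simulate(u_seq, x0=(0.0, 0.0)):
    """Iterate the state equations; the output is the first state variable."""
    x = np.array(x0, dtype=float)
    ys = []
    for u in u_seq:
        x = step(x, u)
        ys.append(float(x[0]))
    return ys

# Impulse response of the toy system from zero initial conditions.
ys = simulate([1.0, 0.0, 0.0])
```

Once a recurrent model is written this way—state update plus output map—standard training algorithms for recurrent networks (e.g., backpropagation through time) can be applied directly, which is the practical point the abstract makes.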