Abstract
No-free-lunch theorems have shown that learning algorithms cannot be universally good. We show that no free lunch exists for noise prediction as well. We show that when the noise is additive and the prior over target functions is uniform, a prior on the noise distribution cannot be updated, in the Bayesian sense, from any finite data set. We emphasize the importance of a prior over the target function in justifying superior performance for learning systems.
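The flavor of the claim can be seen in a discrete toy analogue (our own illustrative assumptions, not the paper's exact setting): targets f: X → Z_K with a uniform prior over all K^|X| functions, observations y = (f(x) + e) mod K with noise e drawn from a distribution q, and distinct inputs in the data set. The marginal likelihood of the data then comes out identical for every choice of q, so Bayes' rule leaves any prior over q untouched:

```python
import itertools

# Toy discrete sketch (assumed setting, for illustration only):
# targets f: X -> Z_K, uniform prior over all K^|X| functions,
# observations y = (f(x) + e) mod K with noise e ~ q, distinct inputs.
K = 3                              # output alphabet Z_K
X = [0, 1]                         # finite input domain
data = [(0, 2), (1, 0)]            # finite data set of (x, y) pairs

def marginal_likelihood(q):
    """P(data | q), averaging over the uniform prior on target functions."""
    funcs = list(itertools.product(range(K), repeat=len(X)))
    total = 0.0
    for f in funcs:
        p = 1.0
        for x, y in data:
            p *= q[(y - f[X.index(x)]) % K]  # prob. of the required noise value
        total += p
    return total / len(funcs)

uniform_noise = [1/3, 1/3, 1/3]
skewed_noise = [0.7, 0.2, 0.1]

# Both noise models give the same marginal likelihood (1/K per data point),
# so the posterior over the noise distribution equals the prior.
print(marginal_likelihood(uniform_noise))  # 1/9
print(marginal_likelihood(skewed_noise))   # 1/9
```

Because the uniform prior over targets makes every observed output value equally likely regardless of which noise value produced it, the likelihood carries no information that discriminates between noise distributions.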
Issue Section: Letters
© 2000 Massachusetts Institute of Technology