Huaiyu Zhu (2 results)
Journal Articles
Neural Computation (1998) 10 (8): 2219–2230.
Published: 15 November 1998
Abstract
For any discrete-state sequence prediction algorithm A, it is always possible, using an algorithm B no more complicated than A, to generate a sequence for which A's prediction is always wrong. For any prediction algorithm A and sequence x, there exists a sequence y no more complicated than x, such that if A performs better than random on x, then it will perform worse than random on y by the same margin. An example of a simple neural network predicting a bit sequence is used to illustrate this very general but not widely recognized phenomenon. This implies that any predictor with good performance must rely on some (usually implicitly) assumed prior distributions of the problem.
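The construction in the first claim is concrete enough to sketch in code. The Python fragment below is an illustration under one reading of the abstract, not code from the paper; the names Predictor, adversarial_sequence, and majority_predictor are all hypothetical.

    # A minimal sketch of the construction described above: "algorithm B" runs
    # the predictor A on the history so far and emits the opposite bit, so B is
    # no more complicated than A, and A's prediction is wrong at every step.
    from typing import Callable, List

    Predictor = Callable[[List[int]], int]  # maps a bit history to a predicted next bit

    def adversarial_sequence(predict: Predictor, length: int) -> List[int]:
        """Generate a bit sequence on which `predict` errs at every step."""
        history: List[int] = []
        for _ in range(length):
            history.append(1 - predict(history))  # flip whatever A predicts
        return history

    # Example predictor: guess the majority bit seen so far (ties -> 1).
    def majority_predictor(history: List[int]) -> int:
        return 1 if 2 * sum(history) >= len(history) else 0

    seq = adversarial_sequence(majority_predictor, 16)
    errors = sum(majority_predictor(seq[:t]) != seq[t] for t in range(len(seq)))
    assert errors == len(seq)  # wrong on every single bit, by construction

Because the generator only consults A and negates its output, it inherits A's complexity, which is the sense in which B is "no more complicated than A."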
Journal Articles
Neural Computation (1996) 8 (7): 1421–1426.
Published: 01 October 1996
Abstract
It is known theoretically that an algorithm cannot be good for an arbitrary prior. We show that in practical terms this also applies to the technique of “cross-validation,” which has been widely regarded as defying this general rule. Numerical examples are analyzed in detail. Their implications for research on learning algorithms are discussed.
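Since validation-based model selection is itself just another prediction algorithm, the adversarial construction sketched after the first abstract applies to it unchanged. The fragment below reuses those helpers to illustrate that point; it is one illustrative reading of the claim, not the paper's numerical examples, and cv_predictor is a hypothetical stand-in for cross-validation-style selection.

    # At each step, cv_predictor "validates" each candidate predictor on the
    # history so far and predicts with the one that made the fewest past
    # errors. Being a deterministic algorithm itself, it succumbs to the same
    # adversarial construction as any single predictor.
    def cv_predictor(history: List[int]) -> int:
        candidates: List[Predictor] = [majority_predictor, lambda h: 0, lambda h: 1]
        def past_errors(p: Predictor) -> int:
            return sum(p(history[:t]) != history[t] for t in range(len(history)))
        return min(candidates, key=past_errors)(history)

    seq = adversarial_sequence(cv_predictor, 16)
    errors = sum(cv_predictor(seq[:t]) != seq[t] for t in range(len(seq)))
    assert errors == len(seq)  # the "validated" choice is still always wrong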