For any discrete-state sequence prediction algorithm A, it is always possible, using an algorithm B no more complicated than A, to generate a sequence on which A's prediction is always wrong. For any prediction algorithm A and sequence x, there exists a sequence y no more complicated than x such that, if A performs better than random on x, it performs worse than random on y by the same margin. An example of a simple neural network predicting a bit sequence is used to illustrate this very general but not widely recognized phenomenon. This implies that any predictor with good performance must rely on some (usually implicit) assumed prior distribution over problems.
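A minimal sketch of the adversarial construction, assuming a hypothetical deterministic predictor (here a simple majority-of-history rule, not the paper's neural network example): the generator B simply runs A on the sequence built so far and emits the opposite bit, so A is wrong at every step.

```python
def predictor_a(history):
    """Hypothetical predictor A: predict the majority bit of the history
    (ties and the empty history default to 0)."""
    if not history:
        return 0
    return 1 if sum(history) * 2 > len(history) else 0

def adversarial_sequence(predict, length):
    """Generator B: feed A its own history and always emit the bit A did
    not predict, so A's error rate on the resulting sequence is exactly 1."""
    seq = []
    for _ in range(length):
        seq.append(1 - predict(seq))
    return seq

if __name__ == "__main__":
    y = adversarial_sequence(predictor_a, 20)
    errors = sum(predictor_a(y[:t]) != y[t] for t in range(len(y)))
    print("sequence:", y)
    print("A's error rate on y:", errors / len(y))  # always 1.0
```

Because B only calls A and negates its output, B is no more complicated than A, which is the point of the construction.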
