The Inconsistency Detection Learner (IDL) is an algorithm for language learning that addresses the problem of structural ambiguity. If an overt form is structurally ambiguous, the learner must be capable of inferring which interpretation of the overt form is correct by reference to other overt data of the language. The IDL does this by attempting to construct grammars for combinations of interpretations of the overt forms, discarding those combinations that are inconsistent. The potential of this algorithm for overcoming the combinatorial growth in combinations of interpretations is supported by computational results from an implementation of the IDL using an optimality-theoretic system of metrical stress grammars.
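The core loop of the IDL, as described above, can be sketched schematically. This is a minimal illustration, not the published algorithm: the function names `interpretations_of` and `consistent` are hypothetical placeholders for, respectively, the enumeration of candidate structural descriptions of an overt form and the attempt to construct a grammar for a combination (e.g., via a ranking-consistency check).

```python
from itertools import product

def idl(overt_forms, interpretations_of, consistent):
    """Toy sketch of inconsistency detection: enumerate one candidate
    interpretation per overt form, and keep only those combinations
    for which a grammar can be constructed.

    interpretations_of(form) -> list of candidate structural descriptions
    consistent(combo)        -> True if some grammar generates all of combo
    """
    surviving = []
    # Each combo assigns one interpretation to each overt form.
    for combo in product(*(interpretations_of(f) for f in overt_forms)):
        if consistent(combo):
            surviving.append(combo)
    return surviving
```

In practice the IDL's contribution lies in pruning this exponential space rather than enumerating it exhaustively; the sketch shows only the underlying consistency filter.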
In this article we show how Optimality Theory yields a highly general Constraint Demotion principle for grammar learning. The resulting learning procedure specifically exploits the grammatical structure of Optimality Theory, independent of the content of substantive constraints defining any given grammatical module. We decompose the learning problem and present formal results for a central subproblem, deducing the constraint ranking particular to a target language, given structural descriptions of positive examples. The structure imposed on the space of possible grammars by Optimality Theory allows efficient convergence to a correct grammar. We discuss implications for learning from overt data only, as well as other learning issues. We argue that Optimality Theory promotes confluence of the demands of more effective learnability and deeper linguistic explanation.
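The subproblem of deducing a constraint ranking from structural descriptions of positive examples can be illustrated with a small sketch of recursive constraint demotion, one formulation of the Constraint Demotion idea. This is an illustrative reconstruction under simplifying assumptions: each datum is encoded as a winner-loser pair marking, for each constraint, whether it prefers the winner ('W'), the loser ('L'), or neither ('e').

```python
def rcd(constraints, pairs):
    """Recursive constraint demotion (sketch).

    Returns a list of strata (sets of constraint names), highest-ranked
    first, such that in every winner-loser pair some winner-preferring
    constraint dominates all loser-preferring ones; returns None if the
    data are inconsistent (no ranking exists).
    """
    remaining_constraints = set(constraints)
    remaining_pairs = list(pairs)
    strata = []
    while remaining_constraints:
        # Constraints preferring no remaining loser can be ranked on top.
        stratum = {c for c in remaining_constraints
                   if all(p.get(c, 'e') != 'L' for p in remaining_pairs)}
        if not stratum:
            return None  # every constraint prefers some loser: inconsistent
        strata.append(stratum)
        remaining_constraints -= stratum
        # Pairs decided by a winner-preferring constraint in this stratum
        # are now accounted for and can be discarded.
        remaining_pairs = [p for p in remaining_pairs
                           if not any(p.get(c, 'e') == 'W' for c in stratum)]
    return strata
```

Each pass either places at least one constraint or detects inconsistency, so the procedure terminates after at most as many passes as there are constraints; this polynomial bound is one way the structure Optimality Theory imposes on the grammar space supports efficient convergence.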