Frank Keller
Linguistic Inquiry (2002) 33 (2): 225–244.
Published: 01 April 2002
Abstract
This article provides a critical assessment of the Gradual Learning Algorithm (GLA) for probabilistic optimality-theoretic (OT) grammars proposed by Boersma and Hayes (2001). We discuss the limitations of a standard algorithm for OT learning and outline how the GLA attempts to overcome them. We point out a number of serious shortcomings of the GLA: (a) A methodological problem is that the GLA has not been tested on unseen data, as is standard practice in computational language learning. (b) We provide counterexamples, that is, attested data sets that the GLA is not able to learn. (c) Essential algorithmic properties of the GLA (correctness and convergence) have not been proven formally. (d) By modeling frequency distributions in the grammar, the GLA conflates the notions of competence and performance. This leads to serious conceptual problems, as OT crucially relies on the competence/performance distinction.
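For readers unfamiliar with the algorithm under discussion, the following is a minimal sketch of the core GLA update step as described by Boersma and Hayes (2001): each constraint carries a numeric ranking value, evaluation adds Gaussian noise to each value before ranking, and learning promotes or demotes constraints by a small plasticity whenever the learner's output differs from the observed datum. The constraint names (Onset, Faith), the toy candidate set, and the parameter values below are hypothetical illustrations, not material from the article.

```python
import random

# Minimal sketch of a GLA-style learner for stochastic OT.
# Constraint names, candidates, and parameters are hypothetical.

NOISE_SD = 2.0     # evaluation noise (Boersma and Hayes use sd = 2.0)
PLASTICITY = 0.1   # step size for promotion/demotion

# Ranking values for two hypothetical constraints.
ranking = {"Onset": 100.0, "Faith": 100.0}

# Toy candidates: each maps constraint name -> number of violations.
candidates = {
    "ta": {"Onset": 0, "Faith": 1},
    "a":  {"Onset": 1, "Faith": 0},
}

def evaluate(ranking):
    """Stochastic OT evaluation: add Gaussian noise to each ranking value,
    order constraints by the resulting selection points, and return the
    candidate that wins under that total order."""
    points = {c: r + random.gauss(0.0, NOISE_SD) for c, r in ranking.items()}
    order = sorted(points, key=points.get, reverse=True)

    def violations(cand):
        # Violation profile, highest-ranked constraint first; lexicographic
        # comparison of these tuples implements standard OT evaluation.
        return tuple(candidates[cand][c] for c in order)

    return min(candidates, key=violations)

def gla_update(ranking, observed):
    """Error-driven GLA step: when the learner's output differs from the
    observed datum, demote constraints that prefer the learner's form and
    promote constraints that prefer the observed form."""
    produced = evaluate(ranking)
    if produced == observed:
        return
    for c in ranking:
        if candidates[observed][c] > candidates[produced][c]:
            ranking[c] -= PLASTICITY  # constraint penalizes the datum: demote
        elif candidates[observed][c] < candidates[produced][c]:
            ranking[c] += PLASTICITY  # constraint favors the datum: promote

# Train on data in which "a" is the attested form 70% of the time.
for _ in range(10000):
    datum = "a" if random.random() < 0.7 else "ta"
    gla_update(ranking, datum)

print(ranking)
```

After training, the two ranking values drift apart until the grammar reproduces the 70/30 output frequencies, which illustrates point (d) above: the frequency distribution of the data is encoded directly in the grammar itself.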