Giorgio Magri
1–4 of 4 results
Journal Articles
Publisher: Journals Gateway
Linguistic Inquiry (2020) 51 (1): 97–123.
Published: 01 January 2020
Abstract
The Calibrated Error-Driven Ranking Algorithm (CEDRA; Magri 2012) is shown to fail on two test cases of phonologically conditioned variation from Boersma and Hayes 2001. The failure of the CEDRA raises a serious unsolved challenge for learnability research in stochastic Optimality Theory, because the CEDRA itself was proposed to repair a learnability problem (Pater 2008) encountered by the original Gradual Learning Algorithm. This result is supported by both simulation results and a detailed analysis whereby a few constraints and a few candidates at a time are recursively “peeled off” until we are left with a “core” small enough that the behavior of the learner is easy to interpret.
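For orientation, here is a minimal Python sketch of the error-driven update shared by this family of learners, in the style of Boersma and Hayes’s (2001) Gradual Learning Algorithm; the CEDRA differs precisely in how it calibrates the promotion amount, and the constraint names, candidates, and 70/30 variation rate below are all hypothetical.

```python
import random

def sample_ranking(grammar, noise=2.0):
    """Stochastic OT evaluation: perturb each ranking value with
    Gaussian noise, then order constraints by the noisy values."""
    noisy = {c: v + random.gauss(0, noise) for c, v in grammar.items()}
    return sorted(noisy, key=noisy.get, reverse=True)

def optimal(candidates, violations, ranking):
    """OT optimization under a total ranking: compare violation
    vectors lexicographically, highest-ranked constraint first."""
    return min(candidates, key=lambda cand: [violations[cand][c] for c in ranking])

def gla_update(grammar, violations, winner, loser, plasticity=0.1):
    """On an error, promote every constraint that prefers the observed
    winner and demote every constraint that prefers the learner's
    loser, each by the plasticity amount."""
    for c in grammar:
        if violations[loser][c] > violations[winner][c]:
            grammar[c] += plasticity  # winner-preferring: promote
        elif violations[winner][c] > violations[loser][c]:
            grammar[c] -= plasticity  # loser-preferring: demote

# Hypothetical toy case: two candidates in 70/30 variation.
violations = {
    "ta": {"Markedness": 1, "Faithfulness": 0},
    "da": {"Markedness": 0, "Faithfulness": 1},
}
grammar = {"Markedness": 100.0, "Faithfulness": 100.0}

for _ in range(1000):
    observed = "ta" if random.random() < 0.7 else "da"
    ranking = sample_ranking(grammar)
    predicted = optimal(list(violations), violations, ranking)
    if predicted != observed:
        gla_update(grammar, violations, observed, predicted)
```

With variable data like this, the ranking values are not meant to settle on a total ranking; the learner is instead expected to approximate the observed frequencies, which is exactly the behavior the two test cases probe.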
Journal Articles
Publisher: Journals Gateway
Linguistic Inquiry (2018) 49 (3): 577–598.
Published: 01 July 2018
Abstract
Tesar (2014) develops the notion of output-drivenness, provides guarantees that Optimality Theory grammars satisfy it, and demonstrates its learnability implications. This article discusses the extension of Tesar’s theory to a representational framework with partial phonological features. It considers a hierarchy of increasingly strong notions of output-drivenness definable within this extended framework, determines the strongest notion that holds in the case of partial features, and shows that the learnability implications discussed by Tesar carry over to a framework with partial features only if feature undefinedness is effectively treated by identity faithfulness constraints as an additional feature value.
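The final point can be made concrete with a small sketch. The two functions below are hypothetical renderings of the choice the abstract describes, assuming a three-valued encoding in which None stands for an undefined feature value.

```python
# Partial features: a value is True, False, or None (undefined).
# Two hypothetical ways an identity faithfulness constraint can
# treat undefinedness.

def ident_third_value(in_val, out_val):
    """Undefinedness behaves as an additional feature value: mapping a
    defined value to undefined (or vice versa) counts as a violation."""
    return 0 if in_val == out_val else 1

def ident_defined_only(in_val, out_val):
    """Undefinedness is ignored: values are compared only when both
    input and output are defined."""
    if in_val is None or out_val is None:
        return 0
    return 0 if in_val == out_val else 1

# A [+voice] input mapped to an output with voicing undefined:
print(ident_third_value(True, None))   # 1 violation
print(ident_defined_only(True, None))  # 0 violations
```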
Journal Articles
Publisher: Journals Gateway
Linguistic Inquiry (2013) 44 (4): 569–609.
Published: 01 October 2013
Abstract
Various authors have recently endorsed Harmonic Grammar (HG) as a replacement for Optimality Theory (OT). One argument for this move is that OT seems not to have close correspondents within machine learning while HG allows methods and results from machine learning to be imported into computational phonology. Here, I prove that this argument in favor of HG and against OT is wrong. In fact, I show that any algorithm for HG can be turned into an algorithm for OT. Hence, HG has no computational advantages over OT. This result allows tools from machine learning to be systematically adapted to OT. As an illustration of this new toolkit for computational OT, I prove convergence for a slight variant of Boersma’s (1998) (nonstochastic) Gradual Learning Algorithm.
Includes: Supplementary data
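The direction from OT to HG is often illustrated with exponentially spaced weights: if the spacing between consecutive weights exceeds every violation count, weighted sums order candidates exactly as lexicographic (OT) comparison does. The sketch below shows that standard observation in Python; the article’s own reduction, which runs in the opposite direction (from HG algorithms to OT algorithms), is more involved, and the violation vectors here are hypothetical.

```python
def ot_better(a, b):
    """OT comparison: violation vectors listed from highest- to
    lowest-ranked constraint, compared lexicographically."""
    return a < b

def hg_weights(n_constraints, max_violations):
    """Exponentially spaced weights: the base exceeds every violation
    count, so violations of a higher-ranked constraint can never be
    outweighed by violations of lower-ranked ones."""
    base = max_violations + 1
    return [base ** (n_constraints - 1 - i) for i in range(n_constraints)]

def harmony(violations, weights):
    """HG comparison: the candidate with higher (less negative)
    weighted harmony is better."""
    return -sum(w * v for w, v in zip(weights, violations))

a = [0, 2, 1]  # hypothetical violation vectors, top-ranked constraint first
b = [1, 0, 0]
w = hg_weights(n_constraints=3, max_violations=2)
assert ot_better(a, b) == (harmony(a, w) > harmony(b, w))
```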
Journal Articles
Publisher: Journals Gateway
Linguistic Inquiry (2013) 44 (3): 434–468.
Published: 01 April 2013
Abstract
The consistency problem models language learning as the problem of finding a grammar consistent with finite linguistic data. The subset problem refines that formulation, asking for a consistent grammar that generates a smallest language. This article reviews results concerning the tractability of the consistency problem within Optimality Theory (OT) and shows that the OT subset problem is instead intractable. The subset problem thus needs to be restricted to plausible typologies, and solution algorithms need to take advantage of the additional structure brought about by these typological restrictions. These implications are illustrated with a discussion of the choice between batch and error-driven models of the child’s acquisition of phonotactics.
Includes: Supplementary data
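The tractability of the OT consistency problem is standardly shown with Tesar and Smolensky’s Recursive Constraint Demotion (RCD), sketched below on hypothetical winner-loser rows; this illustrates the batch approach mentioned at the end of the abstract, not the article’s own procedure.

```python
def rcd(constraints, rows):
    """Recursive Constraint Demotion (Tesar and Smolensky).
    rows: list of dicts mapping each constraint to 'W', 'L', or 'e'
    (prefers winner, prefers loser, indifferent). Returns a list of
    strata if the data are consistent, or None otherwise."""
    remaining_c = set(constraints)
    remaining_r = list(rows)
    strata = []
    while remaining_c:
        # Constraints that prefer no loser in any remaining row can
        # safely be ranked in the next stratum.
        stratum = {c for c in remaining_c
                   if all(r[c] != 'L' for r in remaining_r)}
        if not stratum:
            return None  # no constraint is rankable: inconsistent data
        strata.append(stratum)
        remaining_c -= stratum
        # Rows with a W in the new stratum are now accounted for.
        remaining_r = [r for r in remaining_r
                       if not any(r[c] == 'W' for c in stratum)]
    return strata if not remaining_r else None

# Hypothetical winner-loser rows over three constraints.
rows = [
    {"Onset": 'W', "NoCoda": 'e', "Max": 'L'},
    {"Onset": 'e', "NoCoda": 'W', "Max": 'L'},
]
print(rcd(["Onset", "NoCoda", "Max"], rows))
# e.g., [{'Onset', 'NoCoda'}, {'Max'}]
```

Because each pass either ranks at least one constraint or halts, RCD runs in polynomial time, which is the sense in which OT consistency is tractable; the intractability result concerns the subset refinement, not this problem.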