Bruce Hayes
Journal Articles
Linguistic Inquiry 1–14.
Published: 14 November 2023
Abstract
In Noisy Harmonic Grammar (NHG; Boersma and Pater 2016), a stochastic version of Optimality Theory (Prince and Smolensky 1993), the constraints are weighted and the outcomes are probability distributions over GEN, computed by adding a noise factor to the constraint weights at each evaluation. Intuitively, one might expect that constraints bearing zero weights would have zero empirical effect, but this turns out not to be so. First, we show that a constraint with zero weight in NHG continues to affect the probability of candidates that violate it; the effect is either upward or downward, depending on other factors. Second, under certain arrangements intended to maintain the principle of harmonic bounding, zero-weighted constraints can force zero probability for candidates that violate them. We suggest what sort of cases linguists should seek in order to test the truth of these predictions, and also point out alternatives we might appeal to if these predictions emerge as false.
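The evaluation procedure the abstract refers to can be made concrete with a short sketch. The Python below is illustrative only, with an invented tableau (the constraint names, weights, and violation counts are not from the article): at each evaluation, Gaussian noise is added to every weight, harmony is the negative weighted sum of violations, and the highest-harmony candidate wins; repeating the evaluation many times estimates the predicted probability distribution. Note that the zero-weighted constraint *C still enters the computation through its noisy weight.

```python
import random

def nhg_sample_winner(candidates, weights, sigma=2.0):
    """One NHG evaluation: perturb each weight with Gaussian noise,
    score candidates by harmony (negative weighted violation sum),
    and return the highest-harmony candidate."""
    noisy = {c: w + random.gauss(0.0, sigma) for c, w in weights.items()}
    def harmony(violations):
        return -sum(noisy[c] * v for c, v in violations.items())
    return max(candidates, key=lambda name: harmony(candidates[name]))

def nhg_distribution(candidates, weights, sigma=2.0, trials=50_000):
    """Estimate the predicted probability distribution over candidates
    by repeated sampling of evaluations."""
    counts = {name: 0 for name in candidates}
    for _ in range(trials):
        counts[nhg_sample_winner(candidates, weights, sigma)] += 1
    return {name: n / trials for name, n in counts.items()}

# Invented tableau: the constraint *C has weight zero, but its noisy
# weight still contributes to each candidate's harmony.
weights = {"Max": 3.0, "Dep": 2.0, "*C": 0.0}
candidates = {
    "cand1": {"Max": 1, "Dep": 0, "*C": 0},
    "cand2": {"Max": 0, "Dep": 1, "*C": 2},
}
print(nhg_distribution(candidates, weights))
```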
Journal Articles
Linguistic Inquiry (2013) 44 (1): 45–75.
Published: 01 January 2013
Abstract
We investigate whether the patterns of phonotactic well-formedness internalized by language learners are direct reflections of the phonological patterns they encounter, or reflect in addition principles of phonological naturalness. We employed the phonotactic learning system of Hayes and Wilson (2008) to search the English lexicon for phonotactic generalizations and found that it learned many constraints that are evidently unnatural, having no typological or phonetic basis. We tested 10 such constraints by obtaining native-speaker ratings of 40 nonce words: 10 violated our unnatural constraints, 10 violated natural constraints assigned comparable weights by the learner, and 20 were control forms. Violations of the natural constraints had a powerful effect on ratings; violations of the unnatural constraints had at best a weak one. We assess various hypotheses intended to explain this disparity, and conclude in favor of a learning bias account.
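The logic of the test amounts to comparing mean ratings across the three conditions. The sketch below uses hypothetical numbers on an assumed 1-7 rating scale, not the article's data; the point is only what the comparison looks like, with a learning-bias account predicting a large penalty for violating natural constraints and little or none for violating unnatural constraints of comparable weight.

```python
# Hypothetical per-item mean ratings on an assumed 1-7 scale;
# the numbers are invented for illustration, not the article's data.
ratings = {
    "control":             [5.8, 6.1, 5.9, 6.0],
    "natural_violation":   [3.1, 2.8, 3.4, 3.0],
    "unnatural_violation": [5.5, 5.2, 5.7, 5.4],
}

def mean(xs):
    return sum(xs) / len(xs)

baseline = mean(ratings["control"])
for condition in ("natural_violation", "unnatural_violation"):
    drop = baseline - mean(ratings[condition])
    print(f"{condition}: mean drop from controls = {drop:.2f}")
```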
Journal Articles
Linguistic Inquiry (2008) 39 (3): 379–440.
Published: 01 July 2008
Abstract
The study of phonotactics is a central topic in phonology. We propose a theory of phonotactic grammars and a learning algorithm that constructs such grammars from positive evidence. Our grammars consist of constraints that are assigned numerical weights according to the principle of maximum entropy. The grammars assess possible words on the basis of the weighted sum of their constraint violations. The learning algorithm yields grammars that can capture both categorical and gradient phonotactic patterns. The algorithm is not provided with constraints in advance, but uses its own resources to form constraints and weight them. A baseline model, in which Universal Grammar is reduced to a feature set and an SPE-style constraint format, suffices to learn many phonotactic phenomena. In order for the model to learn nonlocal phenomena such as stress and vowel harmony, it must be augmented with autosegmental tiers and metrical grids. Our results thus offer novel, learning-theoretic support for such representations. We apply the model in a variety of learning simulations, showing that the learned grammars capture the distributional generalizations of the languages simulated and accurately predict the findings of a phonotactic experiment.
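The scoring side of such a grammar is easy to state. The sketch below is not the authors' code; the constraints, weights, and candidate set are invented. It computes each form's score as the weighted sum of its constraint violations and, over a small closed candidate set standing in for the space of all possible words, a maxent probability proportional to exp(-score). The constraint-induction and weight-fitting (maximum entropy learning) steps of the model are not shown.

```python
import math

def maxent_score(form, constraints, weights):
    """Weighted sum of constraint violations; lower means better-formed."""
    return sum(weights[name] * count(form) for name, count in constraints.items())

def maxent_probabilities(forms, constraints, weights):
    """Probabilities proportional to exp(-score). In the full model the
    normalization runs over all possible words; here a small closed
    candidate set stands in for that space."""
    scores = {f: maxent_score(f, constraints, weights) for f in forms}
    z = sum(math.exp(-s) for s in scores.values())
    return {f: math.exp(-scores[f]) / z for f in forms}

# Invented toy constraints over strings of C (consonant) and V (vowel).
constraints = {
    "*#CC": lambda f: 1 if f.startswith("CC") else 0,  # no initial cluster
    "*V#":  lambda f: 1 if f.endswith("V") else 0,     # no final vowel
}
weights = {"*#CC": 3.0, "*V#": 0.5}
for form, p in maxent_probabilities(["CVC", "CCVC", "CVCV", "CCV"],
                                    constraints, weights).items():
    print(form, round(p, 3))
```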
Journal Articles
Linguistic Inquiry (2001) 32 (1): 45–86.
Published: 01 January 2001
Abstract
The Gradual Learning Algorithm (Boersma 1997) is a constraint-ranking algorithm for learning optimality-theoretic grammars. The purpose of this article is to assess the capabilities of the Gradual Learning Algorithm, particularly in comparison with the Constraint Demotion algorithm of Tesar and Smolensky (1993, 1996, 1998, 2000), which initiated the learnability research program for Optimality Theory. We argue that the Gradual Learning Algorithm has a number of special advantages: it can learn free variation, deal effectively with noisy learning data, and account for gradient well-formedness judgments. The case studies we examine involve Ilokano reduplication and metathesis, Finnish genitive plurals, and the distribution of English light and dark /l/.
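A compact sketch of the update rule usually associated with the Gradual Learning Algorithm follows, with an invented tableau and arbitrarily chosen noise and plasticity settings. Each constraint carries a ranking value; evaluation adds Gaussian noise to those values and applies ordinary OT evaluation to the resulting ranking; on a mismatch with the observed datum, constraints preferring the learner's (wrong) output are demoted and constraints preferring the observed output are promoted by a small step. Because the updates are gradual and evaluation is noisy, repeated exposure can settle on rankings that reproduce free variation at roughly the observed rates.

```python
import random

def ot_winner(candidates, ranking):
    """Standard OT evaluation: compare violation profiles
    lexicographically, highest-ranked constraint first."""
    return min(candidates,
               key=lambda name: tuple(candidates[name][c] for c in ranking))

def gla_update(candidates, observed, values, plasticity=0.1, sigma=2.0):
    """One GLA learning step (a sketch): evaluate with noisy ranking
    values; on error, demote constraints preferring the learner's form
    and promote constraints preferring the observed form."""
    noisy = {c: v + random.gauss(0.0, sigma) for c, v in values.items()}
    ranking = sorted(noisy, key=noisy.get, reverse=True)
    learner = ot_winner(candidates, ranking)
    if learner != observed:
        for c in values:
            if candidates[learner][c] < candidates[observed][c]:
                values[c] -= plasticity   # prefers the wrong output: demote
            elif candidates[learner][c] > candidates[observed][c]:
                values[c] += plasticity   # prefers the observed output: promote
    return values

# Invented tableau and data: the observed form varies freely (70/30),
# and the ranking values drift toward reproducing that variation.
values = {"Faith": 100.0, "Markedness": 100.0}
candidates = {
    "faithful":   {"Faith": 0, "Markedness": 1},
    "unfaithful": {"Faith": 1, "Markedness": 0},
}
for _ in range(1000):
    observed = "faithful" if random.random() < 0.7 else "unfaithful"
    values = gla_update(candidates, observed, values)
print(values)
```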