Speakers learn detailed generalizations about the morphophonology of their language and extend them to nonce words. We propose a theory of this morphophonological knowledge that partitions the lexicon into uniform and productive sublexicons. Each sublexicon has its own phonotactic grammar, which the speaker uses as an inference mechanism to determine the relative productivity of each sublexicon. We report the results of an experiment on the generalization of mid vowel deletion ("yer" deletion) in Russian, showing that speakers encode source-oriented generalizations about the shapes of words that can undergo vowel deletion, as well as product-oriented generalizations about words that result from vowel deletion. An implementation of our model learns the patterns of deletion and captures both source-oriented and product-oriented generalizations.
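To make the abstract's architecture concrete, the sketch below illustrates the general idea of sublexicon-specific phonotactic grammars used for inference; it is a minimal toy, not the authors' implementation. The lexical items, the nonce word, and the use of an add-alpha-smoothed segmental bigram model as the phonotactic grammar are all assumptions made for illustration.

```python
from collections import Counter
from math import log

# Toy partition of the lexicon into two sublexicons (hypothetical items):
# stems whose final-syllable vowel deletes vs. stems that retain it.
deleting = ["lob", "son", "kusok"]
retaining = ["zakon", "vagon", "uzor"]

def bigram_grammar(words, alpha=0.1):
    """Train an add-alpha-smoothed segmental bigram phonotactic grammar
    (a stand-in for whatever phonotactic model each sublexicon carries)."""
    counts, context = Counter(), Counter()
    for w in words:
        padded = "#" + w + "#"              # '#' marks word boundaries
        for a, b in zip(padded, padded[1:]):
            counts[(a, b)] += 1
            context[a] += 1
    segments = {s for w in words for s in w} | {"#"}
    V = len(segments)
    def logprob(word):
        padded = "#" + word + "#"
        return sum(log((counts[(a, b)] + alpha) / (context[a] + alpha * V))
                   for a, b in zip(padded, padded[1:]))
    return logprob

score_deleting = bigram_grammar(deleting)
score_retaining = bigram_grammar(retaining)

def relative_productivity(nonce):
    """Score a nonce stem with each sublexicon's grammar; the better-fitting
    sublexicon is inferred to be the more productive one for this item."""
    d, r = score_deleting(nonce), score_retaining(nonce)
    return {"deleting": d, "retaining": r, "prefers_deletion": d > r}

print(relative_productivity("pomok"))       # hypothetical nonce word
```

In this toy setup, the comparison of the two grammars' scores plays the role of the inference mechanism described in the abstract: the nonce word is attracted to whichever sublexicon's phonotactics it fits better.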
