Abstract
We explore a novel approach to learning underlying representations (URs) that incorporates a number of current proposals in phonological theory and computational modeling. We seek to bring our results to bear on the long-standing issue of abstractness in phonology. Our strategy is to run the same learning model on a variety of languages while systematically varying the degree of abstractness permitted, following the abstractness hierarchy set forth by Kenstowicz and Kisseberth (1977). We find that when the criterion of abstractness is permissive, the resulting large set of candidate URs can cause the learning system to fail by becoming stuck in a local maximum. We invoke research suggesting that abstract systems are often mislearned by children, and identify the level of Kenstowicz and Kisseberth’s abstractness hierarchy that best predicts such restructurings.