Abstract letter identities (ALIs) are an early representation in visual word recognition that is specific to written language. They do not reflect visual or phonological features, but rather encode the identities of letters independent of case, font, sound, and so forth. How could the visual system come to develop such a representation? We propose that because many letters look similar regardless of case, font, and other characteristics, these letters provide common contexts for visually dissimilar uppercase and lowercase forms of other letters (e.g., e between k and y in key and E in the visually similar context K-Y). Assuming that the distribution of words' relative frequencies is comparable in upper- and lowercase (that just as key is more frequent than pew, KEY is more frequent than PEW), these common contexts will also be similarly distributed in the two cases. We show how this statistical regularity could lead Hebbian learning to produce ALIs in a competitive architecture. We present a self-organizing artificial neural network that illustrates this idea and produces ALIs when presented with the most frequent words from a beginning reading corpus, as well as with artificial input.
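To make the proposed mechanism concrete, the following is a minimal sketch of soft competitive Hebbian learning on a toy two-word corpus. It is not the authors' network: the shape coding, word set, frequencies, learning rate, and annealing schedule are all illustrative assumptions. The setup captures only the regularity the abstract describes: k/K, y/Y, p/P, and w/W share visual shape codes across case, e/E do not, and the words' relative frequencies are distributed identically in both cases.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy visual coding (an assumption for illustration, not the paper's input
# scheme): letters that look alike across case share one shape code
# (k/K, y/Y, p/P, w/W); visually dissimilar case pairs do not (e vs. E).
SHAPE = {"k": 0, "K": 0, "y": 1, "Y": 1, "p": 2, "P": 2,
         "w": 3, "W": 3, "e": 4, "E": 5}
N = 6  # number of distinct shape codes

def encode(word):
    """Concatenate a one-hot shape code for each letter position."""
    v = np.zeros(3 * N)
    for pos, ch in enumerate(word):
        v[pos * N + SHAPE[ch]] = 1.0
    return v

# The regularity the abstract describes: word frequencies are distributed
# the same way in both cases (key/KEY more frequent than pew/PEW).
words = ["key", "KEY", "pew", "PEW"]
freq = np.array([0.3, 0.3, 0.2, 0.2])
X = np.stack([encode(w) for w in words])

# Two competitive output units, initialized near the mean input; the
# competition is annealed from soft to hard so that the units come to
# differentiate along the dominant statistical structure of the input.
n_units, lr = 2, 0.5
W = X.T @ freq + 0.01 * rng.standard_normal((n_units, X.shape[1]))

for epoch in range(300):
    beta = 0.5 * 1.03 ** epoch              # gradually sharpen competition
    s = beta * (W @ X.T)                    # unit-by-word similarities
    p = np.exp(s - s.max(axis=0))
    p /= p.sum(axis=0)                      # soft winner-take-all
    for j in range(len(words)):
        # Hebbian-style update: each unit moves toward the inputs it wins,
        # in proportion to how often those inputs occur
        W += lr * freq[j] * p[:, j, None] * (X[j] - W)

# If learning has abstracted over case, the same unit now wins for key and
# KEY even though e and E share no input features: an abstract identity for
# e/E induced by the shared, case-invariant K-Y context.
for w in words:
    print(w, "-> unit", int(np.argmax(W @ encode(w))))
```

In this toy setting the winning unit for key is typically also the winner for KEY, because the case-invariant context features (k/K and y/Y) carry more of the input variance than the case-specific e/E distinction, so the competitive split falls along word identity rather than case. That is the sense in which a shared, similarly distributed context can push a Hebbian competitive learner toward case-abstract letter representations.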
