Newborn humans preferentially orient to facelike patterns at birth, but months of experience with faces are required for full face processing abilities to develop. Several models have been proposed for how the interaction of genetic and environmental influences can explain these data. These models generally assume that the brain areas responsible for newborn orienting responses are not capable of learning and are physically separate from those that later learn from real faces. However, it has been difficult to reconcile these models with recent discoveries of face learning in newborns and young infants. We propose a general mechanism by which genetically specified and environment-driven preferences can coexist in the same visual areas. In particular, newborn face orienting may be the result of prenatal exposure of a learning system to internally generated input patterns, such as those found in PGO waves during REM sleep. Simulating this process with the HLISSOM biological model of the visual system, we demonstrate that the combination of learning and internal patterns is an efficient way to specify and develop circuitry for face perception. This prenatal learning can account for the newborn preferences for schematic and photographic images of faces, providing a computational explanation for how genetic influences interact with experience to construct a complex adaptive system.
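To illustrate the core idea (not the authors' HLISSOM model itself), the following minimal sketch trains a single Hebbian unit on internally generated three-blob patterns standing in for PGO-wave activity, then tests whether the "newborn" unit responds more strongly to facelike arrangements than to scrambled controls. The three-blob geometry, the Hebbian rule with divisive normalization, and all parameters are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
SIZE = 24  # model "retina" is SIZE x SIZE

def blob(cx, cy, sigma=1.5):
    """Gaussian blob of activity centered at (cx, cy)."""
    y, x = np.mgrid[0:SIZE, 0:SIZE]
    return np.exp(-((x - cx) ** 2 + (y - cy) ** 2) / (2 * sigma ** 2))

def three_blob_pattern(facelike=True):
    """Two 'eye' blobs above one 'mouth' blob (facelike), or the same
    three blobs at unrelated random positions (scrambled control)."""
    if facelike:
        # central facelike configuration with slight positional jitter
        cx, cy = SIZE / 2 + rng.uniform(-2, 2, size=2)
        centers = [(cx - 4, cy - 3), (cx + 4, cy - 3), (cx, cy + 4)]
    else:
        centers = [tuple(rng.uniform(3, SIZE - 3, size=2)) for _ in range(3)]
    img = sum(blob(cx, cy) for cx, cy in centers)
    return img / img.max()

# One Hebbian unit with normalized weights, a stand-in for a whole
# self-organizing face-selective map.
w = rng.uniform(size=(SIZE, SIZE))
w /= np.linalg.norm(w)

# "Prenatal" phase: training input is internally generated three-blob
# activity only; no real faces are ever seen.
for _ in range(2000):
    x = three_blob_pattern(facelike=True)
    y = np.sum(w * x)          # unit response
    w += 0.01 * y * x          # Hebbian weight update
    w /= np.linalg.norm(w)     # divisive normalization keeps weights bounded

# "Newborn" test: compare responses to facelike vs. scrambled patterns.
face = np.mean([np.sum(w * three_blob_pattern(True)) for _ in range(200)])
scram = np.mean([np.sum(w * three_blob_pattern(False)) for _ in range(200)])
print(f"facelike response {face:.3f} vs scrambled {scram:.3f}")
```

Under these assumptions, the trained unit responds more strongly to facelike arrangements despite never having seen a face, which is the qualitative pattern the abstract attributes to prenatal learning from internal activity.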
