Representing object position is one of the most critical functions of the visual system, but this task is not as simple as reading off an object's retinal coordinates. A rich body of literature has demonstrated that the position in which we perceive an object depends not only on retinotopy but also on factors such as attention, eye movements, object and scene motion, and frames of reference, to name a few. Despite the distinction between perceived and retinal position, strikingly little is known about how or where perceived position is represented in the brain. In the present study, we dissociated retinal and perceived object position to test the relative precision of retina-centered versus percept-centered position coding in a number of independently defined visual areas. In an fMRI experiment, subjects performed a five-alternative forced-choice position discrimination task; our analysis focused on the trials in which subjects misperceived the positions of the stimuli. Using a multivariate pattern analysis to track the coupling of the BOLD response with incremental changes in physical and perceived position, we found that activity in higher level areas—middle temporal complex, fusiform face area, parahippocampal place area, lateral occipital cortex, and posterior fusiform gyrus—more precisely reflected the reported positions than the physical positions of the stimuli. In early visual areas, this preferential coding of perceived position was absent or reversed. Our results demonstrate a new kind of spatial topography present in higher level visual areas in which an object's position is encoded according to its perceived rather than retinal location. We term such percept-centered encoding “perceptotopy”.
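The core analytic idea — asking whether activity patterns track the physical or the reported position of a stimulus — can be illustrated with a small simulation. The sketch below is not the study's actual pipeline; the voxel counts, noise level, misperception rate, and the leave-one-out correlation classifier are all illustrative assumptions. It simulates voxel patterns that follow the *reported* position on misperceived trials, then compares decoding accuracy under physical versus reported labels.

```python
import numpy as np

rng = np.random.default_rng(0)

n_voxels = 50
n_positions = 5   # five-alternative position task, as in the experiment
n_trials = 40     # hypothetical trials per position

# Hypothetical voxel tuning: each position evokes a distinct mean pattern.
templates = rng.normal(size=(n_positions, n_voxels))

def simulate(labels, noise=1.0):
    """Simulate single-trial voxel patterns for the given position labels."""
    return templates[labels] + noise * rng.normal(size=(len(labels), n_voxels))

# Physical position of each trial, and the (sometimes misperceived) report.
physical = np.repeat(np.arange(n_positions), n_trials)
misperceived = rng.random(physical.size) < 0.2          # assumed error rate
reported = physical.copy()
reported[misperceived] = (physical[misperceived] + 1) % n_positions

# Assume this region's activity follows the *reported* position.
patterns = simulate(reported)

def decode(patterns, labels):
    """Leave-one-out correlation classifier; returns decoding accuracy."""
    n = len(labels)
    correct = 0
    for i in range(n):
        mask = np.arange(n) != i
        # Mean training pattern for each candidate position.
        means = np.array([patterns[mask & (labels == c)].mean(axis=0)
                          for c in range(n_positions)])
        r = [np.corrcoef(patterns[i], m)[0, 1] for m in means]
        correct += int(np.argmax(r) == labels[i])
    return correct / n

acc_physical = decode(patterns, physical)
acc_reported = decode(patterns, reported)
print(f"physical-label accuracy: {acc_physical:.2f}")
print(f"reported-label accuracy: {acc_reported:.2f}")
```

Because the simulated patterns are generated from the reported positions, decoding accuracy is higher under reported labels than physical labels — the signature the study attributes to higher level visual areas; an early-visual-area simulation would instead generate patterns from the physical labels.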
