Damage to the medial temporal lobe (MTL) has long been known to impair declarative memory, and recent evidence suggests that it also impairs visual perception. A theory termed the representational-hierarchical account explains such impairments by assuming that the MTL stores conjunctive representations of items and events, and that individuals with MTL damage must rely on representations of simple visual features in posterior visual cortex, which are inadequate to support memory and perception under certain circumstances. One recent study of visual discrimination behavior revealed a surprising antiperceptual learning effect in MTL-damaged individuals: with exposure to a set of visual stimuli, discrimination performance worsened rather than improved [Barense, M. D., Groen, I. I. A., Lee, A. C. H., Yeung, L. K., Brady, S. M., Gregori, M., et al. Intact memory for irrelevant information impairs perception in amnesia. Neuron, 75, 157–167, 2012]. We extend the representational-hierarchical account to explain this paradox by assuming that difficult visual discriminations are performed by comparing the relative "representational tunedness," or familiarity, of the to-be-discriminated items. Exposure to a set of highly similar stimuli entails repeated presentation of simple visual features, eventually rendering all feature representations maximally and, thus, equally familiar; hence, they become useless for solving the task. Discrimination performance in patients with MTL lesions is therefore impaired by stimulus exposure. Because the unique conjunctions represented in the MTL do not recur, healthy individuals are shielded from this perceptual interference. We simulate this mechanism with a neural network previously used to explain recognition memory, thereby providing a model that accounts for both the mnemonic and perceptual deficits caused by MTL damage with a unified architecture and mechanism.
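The saturation mechanism described above can be illustrated with a minimal toy simulation. This is a sketch under assumed parameters (a single learning rate `RATE` driving familiarity toward a common ceiling), not the authors' actual neural network: trial-unique stimuli are built from a small shared feature pool, so conjunctions never repeat while their component features do. A "lesioned" model sees only feature-level familiarity; an "intact" model also stores each unique conjunction.

```python
import itertools

RATE = 0.3  # assumed learning rate; each exposure moves familiarity toward a ceiling of 1.0

def expose(fam, keys):
    """One presentation: each representation's familiarity ("tunedness")
    moves a fixed fraction of the remaining distance to the ceiling,
    so frequently repeated elements saturate."""
    for k in keys:
        fam[k] = fam.get(k, 0.0) + RATE * (1.0 - fam.get(k, 0.0))

def mean_fam(fam, keys):
    return sum(fam.get(k, 0.0) for k in keys) / len(keys)

# Trial-unique stimuli from a shared feature pool: conjunctions never
# repeat across trials, but their component features recur constantly.
pool = "ABCDEF"
combos = list(itertools.combinations(pool, 3))
studied, foils = combos[:15], combos[15:]   # foil conjunctions are never exposed

feat_fam = {}   # feature-level store only (MTL-lesioned model)
conj_fam = {}   # conjunction-level store (intact MTL)
margins_feat, margins_conj = [], []
for t, item in enumerate(studied):
    expose(feat_fam, item)      # posterior cortex: the item's simple features
    expose(conj_fam, [item])    # MTL: the item's unique conjunction
    foil = foils[t % len(foils)]
    # Decision signal: relative familiarity of the studied item vs. a novel
    # recombination of already-seen features.
    margins_feat.append(mean_fam(feat_fam, item) - mean_fam(feat_fam, foil))
    margins_conj.append(conj_fam.get(item, 0.0) - conj_fam.get(foil, 0.0))
```

Over trials, the feature-based margin shrinks toward zero as all repeated features approach the same ceiling, reproducing the antiperceptual learning effect in the lesioned model, while the conjunction-based margin stays constant because each conjunction is seen only once and each foil conjunction never.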