The human capacity for visual categorization is core to how we make sense of the visible world. Although a substantial body of research in cognitive neuroscience has localized this capacity to regions of human visual cortex, relatively few studies have investigated the role of abstraction in how representations for novel object categories are constructed from the neural representation of stimulus dimensions. Using human fMRI coupled with formal modeling of observer behavior, we assess a wide range of categorization models that vary in their level of abstraction, from collections of subprototypes to representations of individual exemplars. The category learning tasks range from simple linear and unidimensional category rules to complex crisscross rules that require a nonlinear combination of multiple dimensions. We show that models based on neural responses in primary visual cortex favor a variable, but often limited, degree of abstraction in the construction of representations for novel categories, a degree that differs across tasks and individuals.
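The exemplar end of this model family can be illustrated with a minimal sketch in the style of a generalized context model. This is a generic formulation for illustration only, not the paper's actual model; the city-block distance metric, the sensitivity parameter `c`, and the Luce choice rule are all assumptions. A prototype model would instead collapse each category to its mean before computing similarity, and a subprototype model would cluster exemplars into a few centroids, trading memory for abstraction.

```python
import numpy as np

def exemplar_similarity(probe, exemplars, c=1.0):
    """Summed exponential similarity of a probe stimulus to all
    stored exemplars of one category (exemplar-based, no abstraction)."""
    d = np.abs(exemplars - probe).sum(axis=1)  # city-block distance per exemplar
    return np.exp(-c * d).sum()

def p_category_a(probe, cat_a, cat_b, c=1.0):
    """Probability of choosing category A via the Luce choice rule."""
    sa = exemplar_similarity(probe, cat_a, c)
    sb = exemplar_similarity(probe, cat_b, c)
    return sa / (sa + sb)

def prototype_similarity(probe, exemplars, c=1.0):
    """Maximal abstraction: similarity to the category mean only."""
    return exemplar_similarity(probe, exemplars.mean(axis=0, keepdims=True), c)
```

A probe lying near one category's exemplars yields a choice probability well above 0.5 for that category; intermediate levels of abstraction interpolate between the two similarity functions above.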