To accurately categorize items, humans learn to selectively attend to stimulus dimensions that are most relevant to the task. Models of category learning describe the interconnected cognitive processes that contribute to attentional tuning as labeled stimuli are progressively observed. The Adaptive Attention Representation Model (AARM), for example, provides an account whereby categorization decisions are based on the perceptual similarity of a new stimulus to stored exemplars, and dimension-wise attention is updated on every trial in the direction of a feedback-based error gradient. As such, attention modulation as described by AARM requires interactions among orienting, visual perception, memory retrieval, prediction error, and goal maintenance to facilitate learning across trials. The current study explored the neural bases of attention mechanisms using quantitative predictions from AARM to analyze behavioral and fMRI data collected while participants learned novel categories. Generalized linear model analyses revealed patterns of BOLD activation in the parietal cortex (orienting), visual cortex (perception), medial temporal lobe (memory retrieval), basal ganglia (prediction error), and prefrontal cortex (goal maintenance) that covaried with the magnitude of model-predicted attentional tuning. Results are consistent with AARM's specification of attention modulation as a dynamic property of distributed cognitive systems.
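The trial-by-trial mechanism described above (exemplar similarity drives the decision; feedback drives a gradient-based update of dimension-wise attention) can be sketched computationally. The snippet below is a minimal illustration, not the authors' implementation: it uses a GCM-style attention-weighted similarity rule and a numerical gradient of the Luce-choice probability of the correct category. All function names, parameter values (`c`, `lr`), and the numerical-gradient shortcut are assumptions for illustration only.

```python
import numpy as np

def similarity(stim, exemplar, attention, c=1.0):
    # Attention-weighted exponential similarity between a stimulus and a
    # stored exemplar (GCM-style; c is an assumed specificity parameter).
    return np.exp(-c * np.sum(attention * np.abs(stim - exemplar)))

def category_evidence(stim, exemplars, labels, attention, n_cats):
    # Evidence for each category = summed similarity to its exemplars.
    ev = np.zeros(n_cats)
    for ex, lab in zip(exemplars, labels):
        ev[lab] += similarity(stim, ex, attention)
    return ev

def update_attention(stim, exemplars, labels, attention, true_cat,
                     lr=0.1, eps=1e-4):
    # After feedback, nudge each attention weight in the direction of the
    # gradient of the Luce-choice probability of the correct category.
    # (Numerical central differences stand in for an analytic gradient.)
    n_cats = int(max(labels)) + 1

    def p_correct(att):
        ev = category_evidence(stim, exemplars, labels, att, n_cats)
        return ev[true_cat] / ev.sum()

    grad = np.zeros_like(attention)
    for i in range(len(attention)):
        a_plus = attention.copy();  a_plus[i] += eps
        a_minus = attention.copy(); a_minus[i] -= eps
        grad[i] = (p_correct(a_plus) - p_correct(a_minus)) / (2 * eps)
    # Keep attention weights non-negative.
    return np.clip(attention + lr * grad, 0.0, None)
```

On a toy task where only one stimulus dimension is diagnostic of category membership, repeated updates of this kind shift attention toward that dimension, which is the qualitative behavior the model uses to explain selective attention in learners.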