It has long been known that macaque inferior temporal (IT) neurons tend to fire more strongly to some shapes than to others, and that different IT neurons can show markedly different shape preferences. Beyond the discovery that these preferences can be elicited by features of moderate complexity, no general principle of (nonface) object recognition had emerged by which this enormous variation in selectivity could be understood. Psychophysical as well as computational work suggests that one such principle is the distinction between viewpoint-invariant, nonaccidental properties (NAPs) and view-dependent, metric properties (MPs) of shape. We measured the responses of single IT neurons to objects differing in either a NAP (namely, a change in a geon) or an MP of a single part, shown at two orientations in depth. The cells were more sensitive to changes in NAPs than in MPs, even though the image variation (as assessed by wavelet-like measures) produced by the former was smaller than that produced by the latter. The magnitude of the response modulation from the rotation itself was, on average, similar to that produced by the NAP differences, although the image changes from the rotation were much greater than those produced by the NAP differences. Multidimensional scaling of the neural responses indicated a NAP/MP dimension, independent of an orientation dimension. The present results thus demonstrate that a significant portion of the neural code of IT cells represents differences in NAPs rather than MPs. This code may enable immediate recognition of novel objects at new views.