The space around our body, the so-called “peripersonal space,” is where interactions with nearby objects may occur. “Defensive space” and “Reaching space” refer to two opposite poles of interaction between our body and the external environment: protecting the body and performing a goal-directed action, respectively. Here, we hypothesized that the mechanisms underlying these two action spaces are differentially modulated by the valence of visual stimuli, as stimuli with negative valence are more likely to activate protective actions, whereas stimuli with positive valence may activate approaching actions. To test whether such a distinction in cognitive/evaluative processing exists between Reaching and Defensive spaces, we measured behavioral responses as well as neural activation over sensorimotor cortex using EEG while participants performed several tasks designed to tap into the mechanisms underlying either Defensive space (e.g., respond to touch) or Reaching space (e.g., estimate whether an object is within reaching distance). During each task, pictures of objects with either positive or negative valence were presented at different distances from the participants' body. We found that Defensive space was smaller for positively than for negatively valenced visual stimuli. Furthermore, sensorimotor cortex activation during tactile processing in the Defensive space task (reflected in modulation of beta power) was enhanced when touch was coupled with negatively rather than positively valenced visual stimuli. In contrast, neither the EEG nor the behavioral measures capturing the mechanisms underlying Reaching space revealed any modulation by valence. Thus, valence encoding had differential effects on Reaching and Defensive spaces; nevertheless, the distance of the visual stimulus modulated behavioral measures as well as activity over sensorimotor cortex (reflected in modulations of mu power) in a similar way for both types of spaces.
Our results are compatible with the idea that Reaching and Defensive spaces involve the same distance-dependent neural representations of sensory input, whereas task goals and stimulus valence (i.e., contextual information) are implemented at a later processing stage and exert an influence on motor output rather than sensory/space encoding.