Anatomical evidence shows that our visual field is initially split along the vertical midline, with each half projected contralaterally to a different hemisphere. It remains unclear at which processing stage the split information converges. In the current study, we applied the Double Filtering by Frequency (DFF) theory (Ivry & Robertson, 1998), which assumes a right-hemisphere/low-frequency bias, to model the visual field split. We compared three cognitive architectures that differ in the timing of convergence and examined their cognitive plausibility in accounting for the left-side bias effect in face perception observed in human data. The early-convergence model failed to exhibit the left-side bias effect. The modeling hence suggests that convergence may take place at an intermediate or late stage, at least after information has been extracted and encoded separately in the two hemispheres, a consideration that is often overlooked in computational modeling of cognitive processes. Comparative anatomical data suggest that this separate encoding process, which results in differential frequency biases in the two hemispheres, may be engaged from V1 up to the level of areas V3a and V4v, with convergence occurring no earlier than the lateral occipital region. The left-side bias effect in our model was also observed in Greeble recognition; the modeling hence provides a testable prediction that the left-side bias effect may extend to (expertise-level) object recognition.
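The split-then-filter architecture described above can be illustrated with a minimal sketch. The code below is not the authors' implementation: the function names (`lowpass`, `split_and_encode`), the FFT-based filtering, the cutoff value, and the use of concatenation as a stand-in for late convergence are all illustrative assumptions; it shows only the structural idea that each hemifield is encoded with a hemisphere-specific spatial-frequency bias before any convergence.

```python
import numpy as np

def lowpass(img, cutoff):
    # FFT-based low-pass filter: keep spatial frequencies whose radial
    # distance from the DC component is at most `cutoff` (in cycles/image).
    f = np.fft.fftshift(np.fft.fft2(img))
    h, w = img.shape
    yy, xx = np.ogrid[:h, :w]
    dist = np.sqrt((yy - h / 2) ** 2 + (xx - w / 2) ** 2)
    return np.real(np.fft.ifft2(np.fft.ifftshift(f * (dist <= cutoff))))

def highpass(img, cutoff):
    # Complement of the low-pass filter.
    return img - lowpass(img, cutoff)

def split_and_encode(image, cutoff=8.0):
    # Split along the vertical midline: the left hemifield projects to the
    # right hemisphere (low-frequency bias under DFF), the right hemifield
    # to the left hemisphere (high-frequency bias).
    mid = image.shape[1] // 2
    left_field, right_field = image[:, :mid], image[:, mid:]
    rh_code = lowpass(left_field, cutoff)    # right hemisphere encoding
    lh_code = highpass(right_field, cutoff)  # left hemisphere encoding
    # Late convergence: hemisphere-specific codes are combined only after
    # separate encoding (here, naive concatenation of feature vectors).
    return np.concatenate([rh_code.ravel(), lh_code.ravel()])

rng = np.random.default_rng(0)
face = rng.random((64, 64))  # placeholder for a face image
code = split_and_encode(face)
assert code.shape == (64 * 64,)
```

An early-convergence variant would instead recombine `left_field` and `right_field` into a whole image before any filtering, which is the architecture the abstract reports as failing to produce the left-side bias.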