A method is presented for learning the reciprocal feedforward and feedback connections required by the predictive coding model of cortical function. With this method, feedforward and feedback connections are learned simultaneously and independently in a biologically plausible manner. The performance of the proposed algorithm is evaluated by applying it to learning the elementary components of artificial and natural images. For artificial images, the bars problem is employed, and the proposed algorithm is shown to produce state-of-the-art performance on this task. For natural images, components resembling Gabor functions are learned in the first processing stage, and neurons responsive to corners are learned in the second processing stage. The properties of these learned representations are in good agreement with neurophysiological data from V1 and V2. The proposed algorithm demonstrates for the first time that a single computational theory can explain the formation of cortical receptive fields (RFs) and also the response properties of cortical neurons once those RFs have been learned.
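To make the described scheme concrete, the following is a minimal illustrative sketch (not the paper's exact algorithm) of a predictive-coding layer in which separate feedforward and feedback weight matrices are maintained and adapted independently by Hebbian-style rules. The divisive error computation, the multiplicative activation update, the learning rates, and all parameter values here are assumptions chosen for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n_inputs, n_neurons = 16, 8

# Feedforward (W) and feedback (V) weights: two distinct matrices,
# each updated by its own local rule (illustrative assumption).
W = rng.uniform(0.1, 1.0, (n_neurons, n_inputs))
V = rng.uniform(0.1, 1.0, (n_inputs, n_neurons))
eps1, eps2, lr = 1e-6, 1e-2, 0.01

def settle(x, W, V, n_iters=20):
    """Iterate the error/activation dynamics to a (near) steady state."""
    y = np.zeros(W.shape[0])
    for _ in range(n_iters):
        e = x / (eps2 + V @ y)      # residual: input vs. feedback prediction
        y = (eps1 + y) * (W @ e)    # multiplicative update of neuron activity
    return e, y

def learn(x, W, V):
    """One training step; W and V are adapted simultaneously but independently,
    each from locally available pre- and post-synaptic quantities."""
    e, y = settle(x, W, V)
    W = np.clip(W + lr * np.outer(y, e - 1.0), 0.0, None)  # feedforward rule
    V = np.clip(V + lr * np.outer(e - 1.0, y), 0.0, None)  # feedback rule
    return W, V

for _ in range(100):
    x = rng.uniform(0.0, 1.0, n_inputs)
    W, V = learn(x, W, V)
```

Each update uses only quantities local to the connection (presynaptic error or activity and postsynaptic activity), which is what makes this style of rule a candidate for biological plausibility; the specific rule shown is a placeholder, not the one derived in the text.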