Simple probabilistic reinforcement learning is recognized as a striatum-based learning process but has, in recent years, also been associated with hippocampal involvement. This study examined whether such involvement may be attributed to observation-based learning (OL) processes running in parallel to striatum-based reinforcement learning. A computational model of OL, mirroring classic models of reinforcement-based learning (RL), was constructed and applied to the neuroimaging data set of Palombo, Hayes, Reid, and Verfaellie [2019. Hippocampal contributions to value-based learning: Converging evidence from fMRI and amnesia. Cognitive, Affective & Behavioral Neuroscience, 19(3), 523–536]. Results suggested that OL processes may indeed take place concomitantly with reinforcement learning and involve activation of the hippocampus and central orbitofrontal cortex. However, rather than indicating independent mechanisms running in parallel, the neural correlates of the OL and RL prediction errors pointed to collaboration between the two systems, directly implicating the hippocampus in computing the discrepancy between the expected and actual reinforcing values of actions. These findings are consistent with previous accounts of a role for the hippocampus in encoding the strength of observed stimulus–outcome associations, with updating of such associations through striatal reinforcement-based computations. In addition, enhanced negative RL prediction error signaling was found in the anterior insula with greater reliance on OL than on RL processes. This result may suggest an additional mode of collaboration between the OL and RL systems, implicating the error monitoring network.
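To make the parallel between the two systems concrete, the following is a minimal sketch of the kind of prediction-error computation the abstract refers to. It is not the paper's actual model: both update rules, the learning rate, and the demo trial sequence are illustrative assumptions. The RL rule is the classic delta rule; the OL rule mirrors it, tracking a stimulus–outcome association rather than an action value.

```python
# Minimal sketch of delta-rule updating (illustrative; not the published model).

def rl_update(value, reward, alpha=0.1):
    """Reinforcement-learning step: move the action value toward the
    received reward. Returns the new value and the RL prediction error."""
    delta = reward - value            # discrepancy between actual and expected reward
    return value + alpha * delta, delta

def ol_update(assoc, observed_outcome, alpha=0.1):
    """Hypothetical observational-learning analogue: strengthen a
    stimulus-outcome association toward the observed outcome, using
    the same functional form as the RL rule (a simplifying assumption)."""
    delta = observed_outcome - assoc  # OL "prediction error"
    return assoc + alpha * delta, delta

# Demo: a stimulus rewarded on 8 of 10 trials (reward rate 0.8).
outcomes = [1, 1, 0, 1, 1, 1, 0, 1, 1, 1]
q = a = 0.5                           # initial value / association strength
for r in outcomes:
    q, rl_delta = rl_update(q, r)
    a, ol_delta = ol_update(a, r)
# Both estimates drift upward from 0.5 toward the 0.8 reward rate.
```

With identical inputs the two rules produce identical trajectories here; in the study, the two systems diverge because OL updates can occur from observation alone, without a reinforced action.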