The abilities to predict, explain, and control might arise from operations on a common underlying representation or, conversely, from independent cognitive processes. We developed a novel experimental paradigm to explore how individuals might use probabilistic mental models in these three tasks under varying levels of complexity and uncertainty. Participants interacted with a simple chatbot defined by a finite-state machine and were then tested on their ability to predict, explain, and control the chatbot’s responses. When full information was available, performance varied significantly across the tasks, with control proving most robust to increased complexity and explanation being the most challenging. In the presence of hidden information, however, performance across tasks equalized, and participants demonstrated an alternative-neglect bias, i.e., a tendency to ignore less likely possibilities. A second, within-subject experimental design then looked for correlations between these abilities. We did not find strong correlations, but the difficulty of the tasks for participants limited our statistical power. To understand these effects better, a final experiment investigated the possibility of cross-training, skill transfer, or “zero-shot” performance: how well a participant explicitly trained on one of the three tasks could perform on the others without additional training. Here we found strong asymmetries: participants trained to control gained generalizable abilities to both predict and explain, while training on either prediction or explanation did not lead to transfer. This cross-training experiment also revealed correlations in performance, most notably between control and prediction.
Our findings highlight the complex role of mental models, in contrast to task-specific heuristics, when information is partially hidden, and suggest new avenues for research into situations where the acquisition of general-purpose mental models may provide a unifying explanation for a variety of cognitive abilities.