Analogical reasoning, e.g., inferring that teacher is to chalk as mechanic is to wrench, plays a fundamental role in human cognition. However, whether brain activity patterns of individual words are encoded in a way that could facilitate analogical reasoning is unclear. Recent advances in computational linguistics have shown that information about analogical problems can be accessed by simple addition and subtraction of word embeddings (e.g., wrench = mechanic + chalk − teacher). Critically, this property emerges in artificial neural networks that were not trained to produce analogies but instead were trained to produce general-purpose semantic representations. Here, we test whether such an emergent property can be observed in representations in human brains, as well as in artificial neural networks. fMRI activation patterns were recorded while participants viewed isolated words but did not perform analogical reasoning tasks. Analogy relations were constructed from word pairs that were categorically or thematically related, and we tested whether the predicted fMRI pattern, calculated with simple arithmetic, was more strongly correlated with the pattern of the target word than with the patterns of other words. We observed that the predicted fMRI patterns contain information not only about the identity of the target word but also about its category and theme (e.g., teaching-related). In summary, this study demonstrates that information about analogy questions can be reliably accessed through the addition and subtraction of fMRI patterns and that, similar to word embeddings, this property holds for task-general patterns elicited when participants were not explicitly instructed to perform analogical reasoning.
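The analogy-by-arithmetic procedure described above can be illustrated with a minimal NumPy sketch. This is not the authors' analysis pipeline; the vectors below are toy stand-ins for word embeddings (or fMRI activation patterns), and the "wrench" vector is deliberately constructed to satisfy the analogy. The sketch shows only the core idea: form the predicted vector b − a + c and rank candidate words by Pearson correlation with it.

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 50

# Toy vectors standing in for word embeddings or voxel patterns;
# real data would come from a trained model or recorded fMRI responses.
teacher = rng.standard_normal(dim)
chalk = rng.standard_normal(dim)
mechanic = rng.standard_normal(dim)
# Construct "wrench" so that it approximately satisfies the analogy.
wrench = mechanic + chalk - teacher + 0.1 * rng.standard_normal(dim)

vocab = {
    "teacher": teacher,
    "chalk": chalk,
    "mechanic": mechanic,
    "wrench": wrench,
    "apple": rng.standard_normal(dim),
    "river": rng.standard_normal(dim),
}

def solve_analogy(a, b, c, vocab):
    """a : b :: c : ?  Rank candidates by correlation with b - a + c."""
    predicted = vocab[b] - vocab[a] + vocab[c]
    scores = {
        word: np.corrcoef(predicted, vec)[0, 1]
        for word, vec in vocab.items()
        if word not in (a, b, c)  # exclude the three cue words
    }
    return sorted(scores, key=scores.get, reverse=True)

# teacher : chalk :: mechanic : ?
ranking = solve_analogy("teacher", "chalk", "mechanic", vocab)
print(ranking[0])
```

With these constructed toy vectors, "wrench" correlates most strongly with the predicted pattern; with real embeddings or fMRI data, the question tested in the paper is whether the target word wins this ranking above chance.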


Author notes

Competing Interests: The authors have declared that no competing interests exist.

This is an open-access article distributed under the terms of the Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. For a full description of the license, please visit https://creativecommons.org/licenses/by/4.0/legalcode.