Body representations have multimodal receptive fields in the peripersonal space, the region within reach where individuals interact with the environment; they show plasticity through tool use and are necessary for the adaptive and skillful use of external tools. In this study, we propose a neural network model that develops a plastic body representation on a multimodal, body-centered peripersonal space representation through tool use, whereas previous developmental models could explain the plastic body representation only as a non-body-centered one. The proposed model integrates visual and tactile sensations through a Transformer based on a self-attention mechanism and then reconstructs the visual and tactile sensations corresponding to proprioceptive sensations. By learning from a simulated robot's camera vision, arm touch, and proprioception of camera and arm postures, the model developed a body representation that localizes tactile sensations on a simultaneously developed peripersonal space representation. In particular, learning during tool use gives the body representation tool-induced plasticity, and the peripersonal space representation is shared across modalities because part of the visual and tactile decoding modules is shared. As a result, the model obtains a plastic body representation on a body-centered, multimodal peripersonal space representation.
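The pipeline described above (encode each modality, integrate with self-attention, decode through a partially shared module) can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: every dimension, encoder, and weight matrix here is a hypothetical placeholder, and no training loop is shown.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(tokens, Wq, Wk, Wv):
    """Single-head self-attention over a short sequence of modality tokens."""
    Q, K, V = tokens @ Wq, tokens @ Wk, tokens @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])
    return softmax(scores) @ V

d = 16  # shared embedding dimension (hypothetical)

# Hypothetical linear encoders projecting each modality into the shared space.
W_vis = rng.normal(size=(8, d))   # camera-vision features -> embedding
W_tac = rng.normal(size=(4, d))   # arm-touch features -> embedding
W_pro = rng.normal(size=(6, d))   # camera/arm posture -> embedding

vision = rng.normal(size=8)       # stand-in visual input
touch = rng.normal(size=4)        # stand-in tactile input
proprio = rng.normal(size=6)      # stand-in proprioceptive input

# One token per modality; self-attention integrates the three sensations.
tokens = np.stack([vision @ W_vis, touch @ W_tac, proprio @ W_pro])
Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))
fused = self_attention(tokens, Wq, Wk, Wv)

# Shared decoder trunk (standing in for the shared part of the visual and
# tactile decoding modules), then modality-specific reconstruction heads.
W_shared = rng.normal(size=(d, d))
shared_code = np.tanh(fused.mean(axis=0) @ W_shared)
vis_recon = shared_code @ rng.normal(size=(d, 8))  # reconstructed vision
tac_recon = shared_code @ rng.normal(size=(d, 4))  # reconstructed touch
```

The shared trunk is the structural point: because both reconstruction heads read from the same `shared_code`, visual and tactile decoding are forced to use a common intermediate representation, analogous to the shared peripersonal space representation in the model.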

This is an open-access article distributed under the terms of the Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. For a full description of the license, please visit https://creativecommons.org/licenses/by/4.0/legalcode.