Abstract
Several studies have addressed the bottom-up acquisition of concepts from experiences in the physical world, but few have dealt with the bidirectional interaction between symbolic operations and physical experiences. Prior work showed that a shared-module neural network can generate a bottom-up spatial representation of the external world without learning signals describing the spatial structure. Furthermore, the module can interpret an external map as a symbol grounded in this spatial representation, and top-down navigation can be performed using the map. In this study, we extend this model and propose a simulation model that unifies the emergence of a number representation, the learning of symbol manipulation over that representation, and the top-down application of symbol manipulation to the physical world. Our results show that the learned symbol manipulation can be applied to prediction in the physical world, demonstrating that the proposed model succeeds in grounding symbol manipulation in physical experience.