We present a novel artificial cognitive map system based on a generative deep neural network, the Variational Autoencoder / Generative Adversarial Network (VAE/GAN), which encodes input images into a latent space whose structure self-organizes through learning. Our results show that, after training, the distance between predicted images is reflected in the distance between the corresponding latent vectors, indicating that the latent space is organized to mirror the proximity structure of the dataset. The system can also internally generate temporal sequences analogous to hippocampal replay/pre-play, and we found that these sequences are not mere exact replays of past experience; this property could be the origin of creating novel sequences from past experiences. Such a generative nature of cognition is thought to be a prerequisite for artificial cognitive systems.
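As a toy illustration of the abstract's central claim — that after training, proximity between inputs is mirrored by proximity between the corresponding latent vectors — the sketch below trains a tiny tied-weight linear autoencoder in pure Python and measures the correlation between pairwise input distances and pairwise latent distances. This is a hypothetical, minimal stand-in, not the paper's VAE/GAN; all names, dimensions, and hyperparameters here are illustrative assumptions.

```python
import math
import random

random.seed(0)

D, K, N = 4, 2, 40          # input dim, latent dim, number of samples (illustrative)
LR, EPOCHS = 0.01, 300      # gradient-descent settings (illustrative)

# Synthetic "images": points on a smooth trajectory in R^D, so samples
# with nearby indices are also nearby in input space.
data = [[math.sin(0.3 * t + d) for d in range(D)] for t in range(N)]

# Tied-weight linear autoencoder: encoder W (K x D), decoder W^T.
W = [[random.uniform(-0.1, 0.1) for _ in range(D)] for _ in range(K)]

def encode(x):
    return [sum(W[k][d] * x[d] for d in range(D)) for k in range(K)]

def decode(z):
    return [sum(W[k][d] * z[k] for k in range(K)) for d in range(D)]

# Train by plain per-sample gradient descent on squared reconstruction error.
for _ in range(EPOCHS):
    for x in data:
        z = encode(x)
        err = [xh - xd for xh, xd in zip(decode(z), x)]
        for k in range(K):
            row_err = sum(err[e] * W[k][e] for e in range(D))
            for d in range(D):
                # Gradient has two paths: through the decoder (err * z)
                # and through the tied encoder (x * row_err).
                W[k][d] -= LR * (err[d] * z[k] + x[d] * row_err)

def dist(a, b):
    return math.sqrt(sum((ai - bi) ** 2 for ai, bi in zip(a, b)))

# Pearson correlation between pairwise input distances and pairwise
# latent distances over all sample pairs.
lat = [encode(x) for x in data]
din = [dist(data[i], data[j]) for i in range(N) for j in range(i + 1, N)]
dla = [dist(lat[i], lat[j]) for i in range(N) for j in range(i + 1, N)]
mi, ml = sum(din) / len(din), sum(dla) / len(dla)
cov = sum((a - mi) * (b - ml) for a, b in zip(din, dla))
corr = cov / math.sqrt(sum((a - mi) ** 2 for a in din)
                       * sum((b - ml) ** 2 for b in dla))
print(f"distance correlation after training: {corr:.3f}")
```

After training, the correlation is close to 1, i.e. latent distances track input distances — the toy analogue of the latent space reflecting the proximity structure of the dataset. The paper's actual result concerns a learned VAE/GAN on images, which this linear sketch does not reproduce.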

This is an open-access article distributed under the terms of the Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. For a full description of the license, please visit https://creativecommons.org/licenses/by/4.0/legalcode.