In the growing fervor around artificial intelligence (A.I.), old questions have resurfaced regarding its potential to achieve human-like intelligence and consciousness. One proposed path toward human-level cognition involves the development of representations in A.I. systems. This paper focuses on establishing the methods and metrics needed to develop and study an A.I. that can “impute the mental states of others” (Theory of Mind). We examine existing psychological and robotic research on this subject, then propose an information-theoretic metric to quantify the extent to which agents possess a Theory of Mind. The metric is applied to agents trained using a genetic algorithm, demonstrating that an agent-specific Theory of Mind can be achieved without a general Theory of Mind. This framework lays the operational groundwork for developing a more general Theory of Mind in artificial intelligence.
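The abstract does not state the paper's actual metric. As a purely hypothetical sketch, one common information-theoretic way to score how well an observer imputes another agent's mental states is the mutual information I(S; P) between the target agent's hidden states S and the observer's predictions P, estimated from co-occurrence counts over discrete states:

```python
# Hypothetical illustration only: the paper's metric is not given in this
# abstract. This estimates I(S; P) in bits from paired discrete samples.
from collections import Counter
from math import log2

def mutual_information(states, predictions):
    """Plug-in estimate of I(S; P) in bits from paired observations."""
    n = len(states)
    joint = Counter(zip(states, predictions))   # joint counts of (s, p)
    p_s = Counter(states)                       # marginal counts of s
    p_p = Counter(predictions)                  # marginal counts of p
    mi = 0.0
    for (s, p), c in joint.items():
        p_sp = c / n
        # p(s,p) * log2( p(s,p) / (p(s) * p(p)) )
        mi += p_sp * log2(p_sp * n * n / (p_s[s] * p_p[p]))
    return mi

# A perfect predictor recovers all the entropy of the target's states;
# an uninformed (constant) predictor scores zero.
perfect = mutual_information([0, 1, 0, 1], [0, 1, 0, 1])      # 1.0 bit
uninformed = mutual_information([0, 1, 0, 1], [0, 0, 0, 0])   # 0.0 bits
```

Under such a formulation, an agent-specific Theory of Mind would correspond to high mutual information with one particular target agent, without requiring that the score generalize across arbitrary agents.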

This is an open-access article distributed under the terms of the Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.