This article argues that the application of an embodied cognitive science perspective does not require us to distinguish between systems that have a physically tangible body and systems that do not. I consider the specific case of ChatGPT, a large language model specialised for interactive dialogue, and argue that ChatGPT can potentially be seen as embodied, albeit with a very unfamiliar type of embodiment.

I propose that we should explicitly distinguish between two notions of physicality: on the one hand, whether a system’s body is tangible or not (roughly, whether we imagine it as providing us tactile-kinesthetic affordances); on the other hand, whether a system is physically situated or not (i.e. whether or not it interacts physically with the rest of the Universe).

I discuss whether tangibility should be accorded any major theoretical weight within cognitive science by considering ten theoretical issues relating to embodiment: six from the previous literature and four that I raise myself. My conclusion is that (at least in regard to these aspects of embodied cognition) there is no good theoretical reason to treat tangible bodies as a prerequisite for embodied cognition.

Hence, I argue that an interactive language model like ChatGPT can, in principle, perceive and interact with the world just as physically as a squid or a robot does, albeit less tangibly, through its text channels, which serve as its physical sensors and actuators. Whether or not we should understand it as doing so depends on its behaviour, not on its substrate.

This is an open-access article distributed under the terms of the Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.