Object manipulation is essential to how we construct the surrounding reality, and affordances—the action possibilities offered by the environment—play a crucial role in human--tool interaction. Given the rapid growth of the metaverse, a research question arises: Does the theoretical model behind human--tool interaction also hold in artificial reality? The present study investigated differences in the sense of embodiment during human--tool interaction with usual versus unusual objects in an immersive 360-degree video. The environment was a recording of a human arm interacting with various tools on a table. Forty-four participants took part in the study; they were randomized into two between-participants groups (usual or unusual objects), and each completed two within-participants conditions (reach to move or reach to use). Results showed no significant difference in embodiment between usual and unusual objects, suggesting that the ventral and dorsal streams may integrate information in the artificial environment as effectively as in the real world. Participants felt present in the virtual environment, as indicated by the location factor of embodiment, so they believed they could interact with any tool, independently of its affordances. The study contributes to understanding the mechanisms behind human--tool interaction in artificial environments.