
At this moment in technological history, it seems that AI-powered technology has the potential to evolve into almost anything within the next 20 years. While we expect machines to take on various forms of intelligence, we also expect to integrate them into our daily lives in ways we haven’t yet imagined. How will their presence and capabilities affect our everyday human experience? While we’re often (rightfully) thinking about how our day-to-day lives will change, we rarely pause to consider the experience of the machines themselves. But there’s a good reason for this: what a machine “experiences” is difficult to define, much less measure. We have difficulty understanding the concept of experience in general; we don’t fully understand the experiences of the many other living creatures who’ve shared our world for millennia. So while we cannot yet measure how models like ChatGPT[1] or Stable Diffusion[2] experience a written conversation, we may be able to experiment with different ways of translating a machine “experience” into a human one. How do current algorithms translate their inputs into an output, and what happens along the way? In this art installation, we introduce wearable technology meant to translate aspects of what a trained model allocates attention to into something a human can experience.
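The abstract does not specify which model or mechanism the installation taps into. As one concrete reading of “what a trained model allocates attention to,” the sketch below computes scaled dot-product attention weights over a handful of token embeddings; the function names and the random toy data are illustrative assumptions, not the installation’s actual pipeline. Each row of the resulting matrix is a probability distribution describing how strongly one token “looks at” every other token, which is the kind of signal a wearable could map to haptic or visual output.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax along the given axis."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention_weights(queries, keys):
    """Scaled dot-product attention weights (Vaswani et al. style).

    Returns a (n_queries, n_keys) matrix; row i is a distribution over
    which key tokens query token i attends to.
    """
    d = queries.shape[-1]
    scores = queries @ keys.T / np.sqrt(d)  # similarity, scaled by sqrt(dim)
    return softmax(scores, axis=-1)

# Toy example: 3 "tokens" with 4-dimensional embeddings (random, for illustration).
rng = np.random.default_rng(0)
emb = rng.normal(size=(3, 4))
w = attention_weights(emb, emb)  # self-attention: tokens attend to each other
# Each row of w sums to 1 -- a per-token attention distribution that could be
# rendered as intensity on a wearable device.
```

In a real installation one would read these matrices out of a trained network (many transformer libraries expose per-layer attention tensors) rather than compute them from random embeddings; the mapping from weights to sensation is the artistic choice the abstract gestures at.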

This is an open-access article distributed under the terms of the Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. For a full description of the license, please visit https://creativecommons.org/licenses/by/4.0/legalcode.