September 2023 marks the 50th anniversary of the Electronic Visualization Laboratory (EVL) at the University of Illinois Chicago (UIC). EVL's introduction of the CAVE Automatic Virtual Environment in 1992, the first widely replicated, projection-based, walk-in, virtual-reality (VR) system in the world, put EVL at the forefront of collaborative, immersive data exploration and analytics. However, the journey did not begin then. Since its founding in 1973, EVL has been developing tools and techniques for real-time, interactive visualizations—pillars of VR. But EVL's culture is also relevant to its successes, as it has always been an interdisciplinary lab that fosters teamwork, where each person's expertise contributes to the development of the necessary tools, hardware, system software, applications, and human interface models to solve problems. Over the years, as multidisciplinary collaborations evolved and advanced scientific instruments and data resources were distributed globally, the need to access and share data and visualizations while working with colleagues, local and remote, synchronous and asynchronous, also became important fields of study. This paper is a retrospective of EVL's past 50 years that surveys the many networked, immersive, collaborative visualization and VR systems and applications it developed and deployed, as well as lessons learned and future plans.

1.1 The Vision

Journalist Stuart Luman (2011), in an article for UIC Alumni Magazine titled “Future World,” wrote: “A scene in Star Wars. Life-like avatars. Cave Automatic Virtual Environment. They all germinated in UIC's Electronic Visualization Lab. A place where the clock is always 15 years fast. . . . Although not as well known or well-funded as institutions such as MIT's Media Lab, EVL has a long history of radically changing the way people visualize and interact with data and technology.”

Created in 1973 by computer scientist Tom DeFanti and artist and physicist Dan Sandin as a joint program between the Computer Science (CS) department in UIC's College of Engineering and the School of Art & Design, EVL was the first Science + Art program in the country, leading to an MFA in Electronic Visualization or an MS or PhD in CS with a specialization in visualization. The joint degree program ended in the mid-2000s when the College of Architecture, Design and the Arts reorganized. However, EVL continued its strong working relationship with the School of Design, and within the past year, UIC became the first public university in the United States to offer a Computer Science + Design (CS + DES) undergraduate degree program leading to a joint BS; the interdisciplinary program is based at EVL.

EVL's fundamental philosophy, still true today, comes from its founders. DeFanti believed in developing advanced visualization hardware and/or software systems for non-computer people, whether artists or astrophysicists. Sandin believed artists should use (or build) the most advanced technologies of their time to create their art and would often point out that Renaissance painters made their own paints. Further, DeFanti and Sandin both believed systems should be user-oriented (easy to use, easy to learn), low-cost, interactive, and real time (to provide immediate feedback). While they recognized the benefits and stunning visual quality of state-of-the-art, high-performance computer-graphics systems coming to market in the 1970s, those systems were expensive and not easily accessible to most people (DeFanti et al., 1975).

Also, given EVL's interdisciplinary collaboration focus, their goal was not to have people in each discipline learn everything about another discipline, but to learn the vocabulary of other disciplines, thereby improving communications so people could work together on joint projects. Non-computer people acquired basic programming skills. Technical people learned aesthetics. And CS and Art students learned how domain scientists approached problem solving. In the 2000s, Jason Leigh and Andy Johnson coined the term immersive empathic design, whereby EVL students got summer internships at domain scientists’ facilities to immerse themselves in scientists’ environments for on-site, hands-on experiences, observe how scientists solved problems, learn to be empathic, and determine how EVL's tools and techniques could design better systems to fit scientists’ needs and expectations (Chen, 2011; EVL, 2011b). For an example, see CoreWall (Section 4.1.2).

Today, EVL continues its focus on real-time interaction and interdisciplinary collaboration, though we use the term visual data science, which we define as real-time, interactive, multi-modal (visual, audio, haptic), networked, collaborative data visualization and immersive environments to support analytical reasoning. EVL continues to use the most advanced tools of its time, form interdisciplinary teams, and create human interface models to tackle today's grand challenges. Because EVL takes a systems approach to problem solving, it is also a technology enabler and disruptor, and has long been advancing high-performance computing and networking cyberinfrastructure.

EVL works with computer scientists, artists, designers, and domain scientists in academia, industry, government labs, and nonprofits (e.g., museums). Its inventions are open source, and some have been commercialized (Section 7.2). EVL educates industry partners on how their products can benefit the academic research market. EVL builds and supports user communities on a global scale. Most importantly, EVL amplifies its core research mission with its educational mission, applied to all workforce dimensions—education, training, advancement, and mentoring—for undergraduate and graduate students, collaborators, and the public.

1.2 Culture and Facilities

EVL is housed in the CS department. EVL's core family consists of CS visualization faculty, long-term faculty collaborators from other departments, and technical and administrative staff; they write grants to support the Lab and teach classes using the Lab's advanced technologies and inventions. However, as an interdisciplinary lab, EVL's extended family consists of faculty and student collaborators from a variety of campus units who all get keys to the Lab. All participate in weekly Tech Meetings where students give updates on their research projects. Newcomers are welcome, and those who get involved quickly become family members.

Its current location, where EVL has lived since 1991, is a multimillion-dollar facility consisting of 7,000 sq. ft. of contiguous space with large and small rooms plus an additional 3,000 sq. ft. of several large rooms located nearby. Each space has an open floor plan in which faculty, staff, students, VR/visualization systems, computers, display walls, whiteboards, desks, chairs, and tables all coexist, though an internal calendar lets people reserve a machine or a room as needed. The main lab is where new technologies, such as the CAVE and now CAVE2, live, with nearby tables and chairs for developers to sit and test their codes. Continuum, a large room with new and mature technologies, is used for meetings, classes, and art/design performances. A machine room houses high-performance computers and networks, keeping work areas quiet and cool but close by in case a machine needs rebooting. A kitchenette, reception area, office spaces, meeting rooms, rooms with desks for 12–15 graduate students, and rotating spaces for undergraduate students complete the layout.

This free-form layout is deliberate and integral to EVL's culture of fostering interdisciplinary teams, open communications, meaningful collaborations, and immediate access to technology, all while preparing students for real-world experiences. This culture defines EVL as much as its pioneering work and inventions.

1.3 Leadership

September 2023 marked the 50th anniversary of EVL. Co-founder Sandin retired from UIC in 2001 and then began working part time at EVL as a Senior Research Scientist. Co-founder DeFanti retired from UIC in 2004 and became a Research Scientist at UC San Diego's California Institute for Telecommunications and Information Technology (Calit2)-Qualcomm Institute and continues to collaborate with EVL to this day. Jason Leigh became EVL director in 2004 but left UIC in 2014 to become a faculty member at the University of Hawaii at Manoa, where he founded and directs the Laboratory for Advanced Visualization and Applications (LAVA) and also continues to collaborate with EVL. Maxine Brown was hired as an EVL associate director in 1986 and served as EVL director from 2014 until she retired from UIC in 2020; she is now an EVL Senior Research Scientist hired back part time on grant funding. Andy Johnson is currently EVL interim director.

2.1 Visualization Systems and Applications

Founded in September 1973 at the University of Illinois Circle Campus (subsequently renamed the University of Illinois Chicago, UIC) and initially named the Circle Graphics Habitat, the Lab was soon rebranded EVL (see Figure 1). In 1973, Tom DeFanti, recently graduated from The Ohio State University (OSU) with a PhD in CS, was first hired by the UIC Chemistry department to develop computer-generated instructional techniques (DeFanti et al., 1975), and then joined Information Engineering (later CS).
Figure 1.

EVL faculty, students, and collaborators 1975.


The Chemistry department had already developed a color video introductory course for freshmen that students could watch at their own pace before taking exams. These videos used broadcast-quality TV equipment to film live, close-up laboratory experiments, and incorporated many computer-generated animated sequences. Chemistry wanted (1) to develop new courses using a similar format, and (2) to train Math and Chemistry graduate students in the Doctor of Arts program, whose research was on the communication of knowledge within a domain, to use Chemistry's computer and video media to produce its educational materials as part of their degree requirements.

Chemistry owned a Vector General display with analog input devices and a PDP 11/45 computer, similar to what DeFanti had used as a graduate student at OSU to develop his Graphics Symbiosis System (GRASS) computer graphics language. GRASS had been developed for OSU art professor Charles Csuri, so it had already been shown to be a user-oriented, interactive, real-time system for non-technical people, though the hardware was expensive ($150,000). Just before arriving in Chicago, DeFanti met Dan Sandin, who had been hired four years earlier by the UIC School of Art & Design. They soon connected Chemistry's PDP-11/Vector General display running GRASS to the Sandin Image Processor (IP) (see Figure 2), which Chemistry then replicated, to create real-time imagery on a monitor, manipulate it with an array of knobs and sliders, and then colorize it in real time to create artistic effects that were captured on video (DeFanti, 1976; EVL, 1973, 1974).
Figure 2.

Dan Sandin at IP.

In 1976, EVL developed the first data glove, an inexpensive, lightweight user-interaction device to monitor hand movements as input to their analog system, mimicking a set of sliders. Known as the Sayre Glove (see Figure 3), based on an idea from Rich Sayre, it used flexible tubes (not fiber optics) with a light source at one end and a photocell at the other (Sturman & Zeltzer, 1994).
Figure 3.

Sayre Glove.
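
The glove's sensing principle is easy to sketch: light passing through each flexible tube dims as the finger bends, so a photocell voltage maps directly to a slider-like value. Below is a minimal illustration in Python, assuming a hypothetical read_photocell function and per-finger calibration voltages; the 1976 device itself was analog hardware, not software.

```python
# Sketch of the Sayre Glove principle: each sensor is a flexible tube with a
# light source at one end and a photocell at the other. Bending the finger
# attenuates the light, so the photocell reading tracks flexion and each
# finger behaves like one slider on a control panel.
# read_photocell() and the calibration voltages are hypothetical.

def read_photocell(channel: int) -> float:
    """Placeholder for reading one photocell's voltage (hypothetical ADC)."""
    raise NotImplementedError

def finger_as_slider(channel: int, v_straight: float, v_bent: float) -> float:
    """Map a photocell voltage to a normalized 0..1 'slider' value."""
    v = read_photocell(channel)
    t = (v - v_bent) / (v_straight - v_bent)   # light drops as the tube bends
    return max(0.0, min(1.0, t))               # clamp, like a physical slider
```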

In 1977, Tom DeFanti began consulting with Dave Nutting Associates/Midway, the video game division of Bally, to develop what evolved into the Z-GRASS language (GRASS recoded to run on the Z-80-powered Z-Box, a raster graphics machine with a custom GPU), distributed with what became the Datamax UV-1, an early personal computer that output 320 × 204 color graphics to NTSC video (DeFanti et al., 1978). The UV-1 was marketed as a titling and Computer Graphics Imaging (CGI) system for cable television, but was mostly purchased and cherished by artists at UIC and the School of the Art Institute of Chicago (SAIC), who used it to develop interactive applications through the 1980s. Z-GRASS later became RT/1 (Z-GRASS recoded in C to be machine independent), which ran on IBM PCs with 640 × 480 video boards.

In 1984, Chicago-based (art)n Laboratory, which pioneered a photographic method to produce PHSColograms—high-resolution, backlit, immersive 3D, barrier-strip autostereographic images displayed in lightboxes—began collaborating with EVL to create them digitally (art(n) & Sandor, 2022a). They believed that computer-generated autostereograms would increase quality and lower production costs, which they did (Sandin et al., 1989; Meyers et al., 1990). Sandin and DeFanti well understood how transformative user-centered perspective, enabled by autostereograms (and VR), was compared to the linear perspective perfected by Renaissance painters that is still used today. In fact, off-axis perspective was the primary contribution to the subsequent development of the CAVEs. (art)n and EVL expanded this research for many years, had their works shown in many art and science shows (art(n) & Sandor, 2022b), and received several patents (see Section 7.2). The backlit barrier-strip autostereography technology used for PHSColograms became the foundation for EVL's future research in autostereoscopic VR displays (Section 4.1.1).

In 1987, EVL produced The Interactive Image exhibition at Chicago's Museum of Science and Industry using Datamax UV-1 systems (EVL, 2013a; Bravo et al., 2017). It took a year for EVL faculty, staff, students, alumni, and friends to develop approximately 12 real-time, interactive applications and user input controls that enabled museum visitors to explore mathematics and fractals, create animations, and produce personal artworks by image processing their video-captured faces. The National Center for Supercomputing Applications (NCSA) at the University of Illinois Urbana-Champaign developed an interactive installation that let visitors work with selected scientific images, such as an astrophysical jet stream or colliding neutron stars. Many installations from The Interactive Image were next exhibited at the SIGGRAPH 1988 Art Show and, subsequently, at The Computer Museum in Boston.

2.2 Community Engagement

GRASS and the Sandin IP were used in a series of live, interactive Electronic Visualization Events (EVE) held in 1975, 1976, and 1979 (with a later event in 1995). EVE real-time visual and musical jam sessions brought together computer scientists, artists, musicians, and video experts from UIC and SAIC. Spiral PTL (Perhaps the Last), a 1980 video recording of such a performance created by Sandin, DeFanti, and Mimi Shevitz, was included in New York's Museum of Modern Art Inaugural Collection of Video Art in 1983 (EVL, 2019a).

In 1976, EVL hosted artist Larry Cuba, who had asked DeFanti if he could use EVL's computer graphics hardware and software to create the briefing room scene for the 1977 Star Wars movie. Cuba wanted access to the GRASS system because (1) the cost of commercial computer graphics systems was prohibitive for many, and (2) he knew GRASS was originally developed for artists and easy to use. May 25, 2017, marked the 40th anniversary of the theatrical release of the movie and, to highlight Chicago's unique connection to the film, Chicago Tribune journalist Christopher Borrelli (2017) published an article and video titled “Blueprints for ‘Star Wars’ Death Star were created at UIC.”

EVL founders and their students were very active in the ACM SIGGRAPH professional society and its conferences. At SIGGRAPH's first conference, held in 1974, DeFanti, Sandin, and several other attendees informally showed their films and videos. At SIGGRAPH 1975, they took over a lunchroom in a dorm at Bowling Green State University in Ohio and, though still informal, enabled more people to view attendee videos on a TV and 16mm films projected on a beige wall. By SIGGRAPH 1977, DeFanti received a proper budget for audio/visual staff and equipment, and the Film & Video Show became an official conference event, held in the hotel ballroom. By 1980, DeFanti had started the SIGGRAPH Video Review (SVR), a video publication documenting the computer-graphics innovations taking place. The SVR remains among the world's largest collections of computer graphics videos.

DeFanti co-chaired the SIGGRAPH 1979 conference, held in the Chicago suburb of Rosemont, with Bruce McCormick, UIC CS professor and Head of the Electrical Engineering and Computer Science department. DeFanti also served on the organization's Executive Committee as Secretary (1978–1981), Chair (1981–1985), and Past Chair (1985–1989).

Maxine Brown was also active in the SIGGRAPH organization and conference, which is how she met DeFanti and Sandin before joining EVL in 1986. She served on the SIGGRAPH Executive Committee as Secretary (1981–1985) and Vice Chair for Operations (1985–1987), and chaired the SIGGRAPH 1992 conference, held in Chicago.

In 1986, the National Science Foundation (NSF) invited DeFanti to serve on its Panel on Graphics, Image Processing and Workstations, created to advise NSF on whether to fund computer graphics and image processing hardware and software acquisitions by research institutions doing advanced scientific computing, with particular attention to NSF-funded supercomputer centers. The Panel's recommendations resulted in the landmark 1987 Visualization in Scientific Computing report and an accompanying videotape of state-of-the-art scientific visualizations, which SIGGRAPH published (McCormick et al., 1987).

3.1 VR/Visualization Systems and Applications

3.1.1 Hardware

EVL introduced the CAVE (CAVE Automatic Virtual Environment), a multi-person, room-sized, walk-in, projection-based, high-resolution video and audio VR system, at the SIGGRAPH 1992 conference (see Figures 4 and 5) (Cruz-Neira et al., 1992; Cruz-Neira, Sandin, et al., 1993; Cruz-Neira, Leigh, et al., 1993; DeFanti et al., 1993, 2011). The CAVE was a 10-foot cube, made up of three rear-projection screens for walls and a down-projection screen for the floor; 10 people could stand comfortably inside. For SIGGRAPH 1992, a 30 × 20 × 13-foot light-tight enclosure was built, using mirrors to fold the projectors' optics (see Figure 6); at EVL, the main lab where the CAVE resided was large enough to accommodate the display and mirrors, and to control the lighting.
Figure 4.

CAVE at SIGGRAPH 1992.

Figure 5.

Inside of the CAVE at SIGGRAPH 1992.

Figure 6.

Illustration of CAVE Design.

To accommodate long lines of SIGGRAPH attendees, EVL students who had worked with domain scientists during the year to develop CAVE applications served as expert navigators while the domain scientists explained what viewers were seeing. The navigator and viewers wore active-stereo glasses, and the navigator also wore a six-degrees-of-freedom location and orientation sensor called a tracker, so that when they moved within the CAVE, correct viewer-centered perspective and surround stereo projections were produced quickly enough to give a strong sense of 3D visual immersion (see Figure 7). The viewers were like passengers in a bus, along for the ride! Later, at EVL, viewers were able to take turns navigating, too.
Figure 7.

An expert navigator in the CAVE.
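
The technique that makes this work, viewer-centered (off-axis) perspective, computes an asymmetric viewing frustum for each wall from the tracked eye position and the wall's corner positions, every frame and for each eye. The sketch below follows the widely used generalized perspective projection formulation; it illustrates the idea and is not EVL's CAVELib code.

```python
import numpy as np

def offaxis_frustum(pa, pb, pc, pe, near=0.1):
    """Off-axis frustum extents for one CAVE wall.

    pa, pb, pc: the wall's lower-left, lower-right, and upper-left corners
    (world coordinates, as numpy arrays); pe: the tracked eye position.
    Returns (left, right, bottom, top) at the near plane, suitable for a
    glFrustum-style projection call.
    """
    vr = pb - pa; vr = vr / np.linalg.norm(vr)           # screen right axis
    vu = pc - pa; vu = vu / np.linalg.norm(vu)           # screen up axis
    vn = np.cross(vr, vu); vn = vn / np.linalg.norm(vn)  # screen normal
    va, vb, vc = pa - pe, pb - pe, pc - pe               # eye -> corner vectors
    d = -np.dot(va, vn)                                  # eye-to-wall distance
    left   = np.dot(vr, va) * near / d
    right  = np.dot(vr, vb) * near / d
    bottom = np.dot(vu, va) * near / d
    top    = np.dot(vu, vc) * near / d
    return left, right, bottom, top
```

Rendering each wall twice per frame, with the eye offset slightly left and right of the tracked head position, yields the surround stereo pair that the navigator's tracker makes possible.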


The CAVE achieved national recognition as an excellent VR environment for computational science and engineering data. News media covering SIGGRAPH interviewed attendees as they exited the CAVE, and the response was overwhelmingly positive (EVL, 2008a; 2008b). By 1993, Larry Smarr at NCSA and Rick Stevens at Argonne National Laboratory (ANL) had built CAVEs on their campuses, as these institutions had researchers with pent-up demand to view and interact with large-scale 3D data simulations and visualizations. Also in 1993, EVL helped the US Department of Defense's Advanced Research Projects Agency (ARPA) build CAVE #4 to fit in their Star Trek-themed Enterprise room that showcased advanced technological innovations for VIP visitors.

General Motors Research immediately recognized that the CAVE would be a useful tool for vehicle design review, rapid prototyping, die and engineering analysis, and manufacturing validation, thereby eliminating the need to build life-sized models so products could come to market faster. They worked with EVL to build a CAVE while developing their own proprietary software, VisualEyes (Smith et al., 2000). The company Pyramid Systems, subsequently acquired by Fakespace and then Mechdyne Corporation, immediately licensed the CAVE for commercialization.

EVL staff member Greg Dawe designed and built EVL's CAVE (and the portable CAVE for SIGGRAPH 1992), as well as its derivatives, notably the ImmersaDesk and PARIS (described below) and Varrier (Section 4.1.1). In the 2000s, Dawe joined DeFanti at Calit2-Qualcomm Institute, where he built room-sized CAVE-like displays: StarCAVE, a third-generation CAVE derivative with 360-degree screens surrounding the viewer (3 walls + door + floor), each surface being super-high definition (4K × 4K); its successor SunCAVE, with a new shape, new cluster computers, and high-resolution LCD screens; and WAVE, a curved wall array of 35 LCD monitors that ends in a crest above the viewer's head and a trough at their feet (DeFanti et al., 2009, 2011; Calit2 & Ramsey, 2008; Calit2, 2013; Calit2 & Ismael, 2019).

Each CAVE generation yielded more real-time polygons and textures, more resolution per screen with improved GPUs, more screens and higher-resolution projectors, brighter screens with passive-stereo TV monitors, more speakers for audio, better tracking, and vastly improved software libraries. The CAVEcam stereo camera system (Section 4.1.3), developed in 2009, provided far better input imagery, capturing 360 × 180-degree composite images of approximately half a gigapixel per eye. The SunCAVE was built to be the viewing device for CAVEcam images, since the older CAVEs did not have high enough resolution.

In 1994, EVL developed a smaller, single-screen, portable, and more affordable VR display, the ImmersaDesk (Czernuszenko et al., 1997). The ImmersaDesk was shaped like an upright piano, making it somewhat fragile to ship as freight to conferences. In 1996, EVL developed the ImmersaDesk2 (see Figure 8), which fit within a large road case, making it easy to ship by air or truck and quick to deploy (EVL, 1996). The UIC VR in Medicine group used an ImmersaDesk to create 3D volume visualizations of the Virtual Temporal Bone and the Virtual Pelvic Floor, which their doctors used for training (Rasmussen et al., 1998; EVL, 2007a).
Figure 8.

ImmersaDesk2.

One issue with projection-based displays was that a user's hand or interaction device could occlude the stereo image when the user tried to interact with nearby virtual objects. PARIS (Personal Augmented Reality Immersive System) (see Figure 9), designed in 1998, was similar in size to the ImmersaDesk but used a translucent rear-projection screen and a half-silvered mirror, creating a projection-based, augmented-reality desktop display where a user's hands were integrated with the virtual space and provided the primary means of interaction (Scharver et al., 2004). A user, sitting at the desk, worked with their hands below the mirror, enabling use of a keyboard and mouse on the desktop as well as the ability to manipulate virtual objects using a Phantom haptic device, glove, or wand (a specialized, 3D-printed, VR user-interaction device developed by Greg Dawe at EVL). PARIS was used by a UIC medical modeler and the Department of Neurosurgery to make custom-fitting cranial implants prior to surgery for patients with deformed skulls (EVL, 2007b).

3.1.2 Software

Initially, EVL wrote its own software libraries, so developers had a common platform on which to develop applications. These libraries included TrackD, to access tracking and user interaction (buttons, joystick) information, CAVELib (CAVE Library) to deal with the graphics and interaction, and various libraries to produce and control audio (EVL, 1991). In 1993, the CAVE Simulator greatly simplified application development (EVL, 1993; Pape, 1996). It ran on personal workstations, like the Silicon Graphics Inc. (SGI) Indy, enabling users to first develop applications remotely and then physically access the CAVE for final testing and demonstration.

Early VR application development was done in C and GL before moving to OpenGL, higher-level toolkits such as SGI Inventor and SGI Performer, and then leveraging external libraries such as the Visualization Tool Kit (VTK) in the latter half of the decade. The introduction of consumer graphics cards started the movement away from specialized graphics hardware to high-end PCs and PC clusters that would drive EVL's displays in the 2000s.

While high-level libraries like Open Inventor and Performer made application development easier, the barrier for artists to develop CAVE applications remained high. Higher-level libraries built on Performer, such as EVL's XP (EVL, 1998b; Pape et al., 1998) and Ygdrasil (EVL, 2006), enabled artists to create VR experiences, a development that would continue with libraries built on top of Unity in later decades.

In 1993, NCSA introduced Mosaic, the world's first visual/video web browser. To introduce Mosaic to US federal agencies, EVL packaged up a 300-MB file of SIGGRAPH 1992 Showcase content (Section 3.3) for a series of road shows in which NCSA director Larry Smarr, accompanied by EVL graduate student Joanna Mason, demonstrated Mosaic, contributing to its rapid adoption. EVL developed CAVEviewer, a version of the CAVE Simulator embedded in web pages, to provide full-motion video and to enhance documentation and dissemination of research projects via the Internet (Mason et al., 1994). An early use of Mosaic and CAVEviewer was for informational kiosks documenting the SIGGRAPH 1994 VROOM event, organized by EVL (Section 3.3).

As CAVEs proliferated worldwide, users wanted to (1) network CAVEs to remote supercomputers running simulations in order to access and visualize data, (2) remotely steer their simulations in real time, and (3) do CAVE-to-CAVE collaborations with remote participants working in the same virtual space. In the mid-1990s, network speeds were at best megabits per second (Mbps), so real-time interactive collaboration over distance was challenging.

This motivated CAVERNsoft, a networking library (Leigh et al., 1997), as well as avatar representations (polygonal, video, etc.) to enhance remote user communication (verbal and gestural). The ability to communicate through avatars gave users at one location a strong sense of the location of remote collaborators in the shared virtual space. EVL's interest in synchronous and asynchronous collaboration also motivated the Lab's efforts in global network infrastructure (Section 3.2). Locating an ImmersaDesk near the CAVE helped EVL test networked collaborative worlds between these two systems before releasing code to others.
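
At its simplest, tele-immersive avatar sharing means shipping small, frequent pose updates (head and wand transforms) to remote sites and tolerating loss rather than retransmitting stale poses. The sketch below illustrates that pattern with a hypothetical UDP packet layout and a placeholder host name; it is not CAVERNsoft's actual wire protocol.

```python
import socket
import struct
import time

# Hypothetical packet: timestamp plus head and wand poses, each a position
# (x, y, z) and orientation quaternion (qx, qy, qz, qw). Illustrative only;
# not CAVERNsoft's actual format.
AVATAR_FMT = "!d7f7f"   # network byte order: 1 double + 14 floats

def send_avatar_state(sock, addr, head, wand):
    """Send one avatar update; head and wand are 7-tuples of floats."""
    pkt = struct.pack(AVATAR_FMT, time.time(), *head, *wand)
    sock.sendto(pkt, addr)  # UDP: a lost pose is simply superseded by the next

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# Example (placeholder address):
# send_avatar_state(sock, ("collab.example.org", 7000),
#                   head=(0.0, 1.7, 0.0, 0.0, 0.0, 0.0, 1.0),
#                   wand=(0.3, 1.2, 0.2, 0.0, 0.0, 0.0, 1.0))
```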

The V-Mail (Virtual Mail) application (see Figure 10) was developed for collaborations where time-zone differences prevented routine synchronous communication (Imai et al., 1999). V-Mail recorded a person's voice and gestures as they interacted with imagery in their CAVE. When the message played back in the receiver's CAVE, the sender's avatar materialized and re-enacted the message.
Figure 10.

V-Mail sender's avatar interacts with collaborator.
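
Conceptually, V-Mail is a record-and-replay loop over the same tracker state used for live avatars (the real system also recorded synchronized audio). A minimal sketch, with illustrative function names:

```python
import time

def record_message(read_tracker, duration_s, rate_hz=30):
    """Sample the sender's tracked poses at a fixed rate, with timestamps."""
    samples, t0 = [], time.time()
    while time.time() - t0 < duration_s:
        samples.append((time.time() - t0, read_tracker()))  # (offset, pose)
        time.sleep(1.0 / rate_hz)
    return samples

def replay_message(samples, drive_avatar):
    """Re-enact the message: drive the sender's avatar on original timing."""
    t0 = time.time()
    for offset, pose in samples:
        while time.time() - t0 < offset:
            time.sleep(0.001)
        drive_avatar(pose)
```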

Caterpillar (see Figure 11) and General Motors Research (see Figure 12) each partnered with EVL to use the CAVE for vehicle design review and modification, which provided excellent use cases and validation for networked, connected systems (Lehner & DeFanti, 1997; Smith et al., 2000). These and other shared experiences enhanced collaborations and enriched projects’ workflows (Johnson & Leigh, 2001).
Figure 11.

Caterpillar.

Figure 12.

General Motors Research.


In 1997, EVL became part of the National Computational Science Alliance, an NCSA partnership program funded by NSF. EVL focused on collaboration, specifically high-modality tele-immersive applications; that is, distributed VR sessions among VR devices, such as the CAVE and ImmersaDesk. Mechdyne had already licensed and commercialized both the CAVE and the ImmersaDesk2, and several Alliance partners purchased the latter.

During this time, Donna Cox, Bob Patterson, and Stuart Levy of NCSA's Advanced Visualization Laboratory (AVL), who were major CAVE users, worked with EVL graduate student Marcus Thiebaux to create Virtual Director, a tool that enabled artists and scientists to use VR collaboratively, either locally or remotely, to navigate a virtual camera through volumetric datasets and choreograph and script camera movements for playback on film or video (EVL, 1992; Thiebaux, 1997). They used Virtual Director to facilitate research with several domain scientists (Lascara et al., 1999), and to create animations for several IMAX movies, the first being Cosmic Voyage, and NOVA television programs.

UIC campus researchers started complaining that access to EVL's CAVE was difficult; if it wasn't being used for campus demos, then it was being used for software development and therefore unstable. In response, the UIC College of Engineering funded a second CAVE in an adjacent building. The original CAVE was then devoted to development, and the new CAVE became a production-ready system for demos and contract work. Keeping devices demo ready while doing ongoing development remains a challenge to this day.

3.2 Advanced Cyberinfrastructure: Computers and Networking

In 1992, EVL played a major role in the SIGGRAPH 1992 conference held at McCormick Place in Chicago. Maxine Brown was conference chair. DeFanti was responsible for Showcase, a demonstration of leading-edge visualization research and applications that relied on advanced computing and networking—the first time SIGGRAPH and McCormick Place were networked! It was as part of Showcase that the CAVE (see Section 3.1.1) was introduced. DeFanti partnered with ANL, NCSA, and Ameritech, which ran the Network Access Point (NAP) in Chicago, to organize the event (see Section 3.3).

In 1993, Joe Mambretti, who then worked for the University of Chicago (UC) but subsequently moved to Northwestern University (NU), worked with UC, UIC/EVL, NU, NCSA, ANL, Fermi National Accelerator Laboratory (FNAL), and Ameritech to plan MREN (Metropolitan Research & Education Network), a persistent regional network interconnecting key research institutions in the greater Chicagoland area. It was implemented in 1994 with support from an NSF grant. MREN grew quickly and today links research institutions from seven upper-Midwest states (MREN, 2015). MREN, then located at the Ameritech NAP, interconnected with other regional networks via NSFnet, the NSF-funded National Research & Education Network (NREN), which was replaced in 1995 by NSF's very High-Speed Backbone Network Service (vBNS), the precursor to Internet2.

The vBNS, as well as other nascent federal NRENs from the US Department of Energy and the US Department of Defense, were used for the I-WAY (Information Wide Area Year) event at the IEEE/ACM Supercomputing (SC) 1995 conference, organized by ANL, NCSA, and EVL (see Section 3.3). I-WAY participants could leave their supercomputers, instrumentation, and data stores at home and remotely access them to move the results of large-scale simulations to the conference for display on visualization systems.

NSF took note of I-WAY, as their funding agency counterparts in foreign countries were also working with their NRENs to build advanced networks. These agencies wanted to interconnect their NRENs for the benefit of their respective researchers, providing access to computers, data stores, instrumentation, and collaborators globally. NSF encouraged EVL's involvement, liking the idea of visualization people managing and using networks for interactive visualization and/or streaming. If the networks weren't working properly, everyone would have immediately known, as the resulting imagery would have had jitter, lag, or dropout.

In 1997, to enable network traffic among US and international research institutions, NSF funded EVL to work with NU, ANL, NCSA, and Ameritech to establish STAR TAP (Science, Technology And Research Transit Access Point) in Chicago, a global Network Access Point (NAP) for long-term interconnection and interoperability of international NRENs (STAR TAP, 1997; NSF, 2003). STAR TAP leveraged the Ameritech NAP infrastructure where MREN and vBNS connected.

Next, to facilitate the interconnection and interoperability of NSF's vBNS with international NRENs for the benefit of US researchers, NSF funded international circuits. NSF awarded grants to several US researchers to pay for these links. In 1999, EVL received funding for Euro-Link and worked with a consortium of European and Middle East countries to co-fund circuits between the United States and network hubs in the Nordic countries (Denmark, Finland, Iceland, Norway and Sweden), France, the Netherlands, and Israel (STAR TAP, 1999). At that time, other universities (Florida International University and Indiana University) also received NSF awards to provide circuits to South America and Asia, respectively.

By 2001, as network speeds grew from 155–622 Mbps on SONET (Synchronous Optical Network) to 1–10 Gbps over optical channels, EVL worked with Joe Mambretti of NU and Linda Winkler of ANL to create the StarLight International/National Communications Exchange Facility to prototype a 10-Gbps Ethernet-over-optical-fiber path infrastructure (STAR TAP, 2001). They used advanced technologies, including optical transport systems from the Canadian telecommunications equipment company Nortel, replacing ATM (Asynchronous Transfer Mode) switching with GigaMAN (Gigabit Metro Area Network) and GigaMON (Gigabit Metro Optical Network) Ethernet services, with state-wide, regional, national, and international extensions. Given the research community's demand for faster networks to move big data, StarLight became a production facility within a few months!

EVL's interest in advanced networking technologies and techniques led to a long-term collaboration with NTT (Nippon Telegraph and Telephone) Networking Innovations Laboratories in Japan. The N*VECTOR and then ON*VECTOR (Optical Networked Virtual Environments for Collaborative Trans-Oceanic Research) research projects (1997–2019) focused on the utilization, scheduling, scalability, speeds, and reliability of emerging high-performance networks; EVL's SAGE software (see Section 4.2) was eventually used to stream ultra-high-definition visualizations and animations (4K, 8K, and more). In addition to NTT and EVL, partners included Calit2-Qualcomm Institute, Keio University, and the University of Tokyo; the collaboration was managed by Pacific Interface, Inc. (See Section 4.4 for how this collaboration led to the creation of the CineGrid organization.)

3.3 Community Engagement

The CAVE premiered at SIGGRAPH 1992 as part of the Showcase event, organized by DeFanti, EVL, and partners NCSA, ANL, and Mesa Graphics (Section 3.2) (EVL, 2022a). Showcase uniquely demonstrated science in action with over 45 projects using either the CAVE or workstations networked to local and remote supercomputers, instrumentation, and data stores. Given the CAVE's impact, organizers of the SC 1992 and 1993 conferences invited EVL to introduce the CAVE to their attendees as well.

EVL has continued to both organize events and participate in events at SIGGRAPH and SC as well as at other conferences and GLIF workshops (see Sections 4.3 and 4.4). These venues presented opportunities to promote EVL's new technologies and achievements to the research communities with whom EVL was involved (VR, scientific visualization, advanced networking), and to attract both future collaborators and future users of its inventions.

In 1994, EVL organized the SIGGRAPH 1994 VROOM (Virtual Reality Room) event (EVL, 2004a). VROOM showcased over 40 VR projects using three CAVEs and several BOOMs (Binocular Omni-Orientation Monitors) (Wikipedia, 2019), involving more than 200 researchers. Unfortunately, given that Orlando's convention center was under construction, it was not possible to network offsite, so EVL had to arrange transport to bring data stores, workstations, small supercomputers, and scientific instrumentation onsite.

The goal of the SC 1995 I-WAY event, mentioned in Section 3.2, was to leave the supercomputers at home, access data over advanced networks, and visualize results at the conference, in CAVEs, on ImmersaDesks, and on a large-screen, projection-based PowerWall (DeFanti et al., 1996; EVL, 2022b). I-WAY demonstrations interconnected 12 network testbeds, 17 supercomputer centers, five VR research sites, and over 60 application groups to create an extremely diverse, wide-area environment in which to explore advanced applications and the future of advanced scientific and engineering computing.

In 1998, after NSF funded STAR TAP and international network circuits, NSF asked EVL if we could identify who was using the networks and for what purposes. For EVL, the best way to find out was to organize an event at a major conference and invite real-time demonstrations. In collaboration with Indiana University, EVL organized the first iGrid (International Grid) event, then a small research booth at the SC 1998 conference that showcased global community networking and visualization applications (EVL, 1998a; Brown, 1998; Brown et al., 1999). One EVL demonstration used CAVERNsoft (Section 3.1.2) to connect VR displays in Singapore, Australia, Japan, and the United States (Leigh, 1998).

In 1998, EVL began working with Bryan Carter of Central Missouri State University, the University of Missouri, University of Arizona, and UIC's English and History departments, to create Virtual Harlem, a VR collaborative learning environment representing Harlem, New York, as it existed in the 1920s and 1930s. Carter conceived and initiated the project to complement undergraduate Harlem Renaissance courses, enabling researchers and students to study this important period in African American history (Park et al., 2001). EVL included lessons learned from its V-Mail application (see Section 3.1.2) and recorded experts, like Carter, talking and gesturing in Virtual Harlem, to engage viewers in the experience even when the experts were not physically present. Carter received an award from the MOBE (Marketing Opportunities for Black Entrepreneurs) group at their annual conference in 2000 for Virtual Harlem.

4.1 VR and Visualization Systems and Applications

4.1.1 Autostereo Hardware, Software, and Applications

While the CAVE gave users a unique immersive experience, there were clear technological limitations. Users were encumbered by wearing active stereo glasses. Room lighting was dimmed to better see the projected imagery, but this made it more difficult for collaborators to interact with one another and take notes. And, while users could be immersed in their data, the intensity and resolution of the early projectors and environmental lighting constraints impacted the ability to do quality visual science. It was like working by moonlight, although the color got much better with brighter stereo-capable projectors over the next 15 years.

With flat panel displays becoming higher resolution, larger, brighter, and more affordable, EVL began developing Varrier, a barrier-strip autostereoscopic display system that produced VR immersive experiences; users did not have to wear stereo glasses, though they wore a headband so their head movements could be tracked. Varrier (see Figures 13 and 14), developed in 2003, was a 35-panel (7 × 5) cylindrical display; Personal Varrier (see Figure 15), developed in 2005, was a single LCD display featuring a camera-based face recognition and tracking system so that users wore no encumbrances at all (Sandin et al., 2005; Peterka et al., 2007a). A vertical parallax barrier, or line screen, consisting of alternating clear and opaque lines, was affixed to the front of each display, and head tracking adjusted the imagery as the user moved, enabling viewers to see computer-generated, autostereoscopic images. Varrier research, which resulted in a patent (see Section 7.2), became critical to CAVE2 development, which started in 2009.
Figure 13.

Varrier.

Figure 14.

Varrier (close-up).

Figure 15.

Personal Varrier.
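
The core computation in a tracked barrier-strip display is deciding, for each pixel column, which eye can see that column through the line screen, then drawing that eye's image there. A simplified one-axis sketch follows; the real Varrier algorithm modeled the line screen virtually and worked at subpixel precision, and all dimensions below are illustrative.

```python
def eye_for_column(x_mm, pitch_mm, gap_mm, left_eye_x, right_eye_x, eye_z_mm):
    """Which eye sees the pixel column at x_mm through the line screen?

    The panel lies at z=0; the line screen (alternating clear and opaque
    stripes of period pitch_mm) sits gap_mm in front; the eyes are eye_z_mm
    away. Illustrative geometry with a 50% clear stripe, not Varrier's code.
    """
    def visible(eye_x):
        # Where the column-to-eye ray crosses the barrier plane:
        s = x_mm + (eye_x - x_mm) * gap_mm / eye_z_mm
        return (s % pitch_mm) < pitch_mm / 2   # lands in a clear stripe?
    if visible(left_eye_x):
        return "L"      # draw the left-eye image in this column
    if visible(right_eye_x):
        return "R"      # draw the right-eye image in this column
    return "dark"       # blocked for both eyes; leave black
```

Head tracking feeds the eye positions into this test every frame, which is why the interleaving pattern shifts as the viewer moves.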


4.1.2 Visualization and VR Hardware, Software, and Applications

EVL participated in ANL's Access Grid project—distributed, group-to-group, room-to-room collaborations over advanced networks, similar in concept to today's Zoom collaborations (Taylor, 2007). An outgrowth of working on the Access Grid was EVL's development of the AccessBot (see Figure 16), an integrated teleconferencing system with life-sized people's faces on large, vertically oriented display screens, robotics that enabled users to turn a camera mounted on top of the screen so that they could see people in the remote room location, and high-speed networking (EVL, 2000). Initially designed to support people with disabilities participating in remote meetings, it provided high-fidelity video and audio that ensured equal presence, which is important for any video-teleconferencing experience.
Figure 16.

AccessBot.

EVL started working with the geoscience community, in particular the University of Minnesota and the University of Michigan, to build a low-cost, non-tracked, passive-stereo system so audiences anywhere could easily view stereo content. While such a low-cost system was applicable to many applications, the geoscience community immediately saw its potential to support Earth Science curricula and geological fieldwork (to document 3D rock formations), hence the name GeoWall (Johnson et al., 2006). By 2001, a $10,000 system (two "PowerPoint" projectors, passive stereo filters, and a PC) was used in classrooms by 33% of US Earth Science undergraduate students. Based on this success, the geoscience community next asked EVL for help viewing high-resolution images.
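
Rendering for a non-tracked passive-stereo display like the GeoWall reduces to drawing the scene twice from two eye positions separated by the interocular distance, sending one image to each polarized projector. A minimal sketch, with illustrative values:

```python
import numpy as np

IOD = 0.064  # interocular distance in meters (a typical adult value)

def eye_positions(head_pos, right_axis):
    """Return (left_eye, right_eye) offset half the IOD from a nominal head."""
    head = np.asarray(head_pos, dtype=float)
    r = np.asarray(right_axis, dtype=float)
    r = r / np.linalg.norm(r)
    return head - r * IOD / 2, head + r * IOD / 2

# left, right = eye_positions(head_pos=(0.0, 1.6, 2.0), right_axis=(1, 0, 0))
# render(view_from=left)   -> projector with left-polarized filter
# render(view_from=right)  -> projector with right-polarized filter
```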

Leveraging increasingly powerful PCs, flat-panel LCD (Liquid Crystal Display) displays, and graphics cards, in 2002 EVL built the cluster-driven, 3 × 5, 15-Megapixel GeoWall2 (see Figure 17), which used a mouse for input (EVL, 2002). EVL worked with the US Geological Survey (USGS) on the design of GeoWall2, given that USGS was tasked with creating high-resolution aerial photography of major US cities in the aftermath of September 11, 2001, but didn't have any large-scale, high-resolution displays on which to view it!
Figure 17.

GeoWall2.

In talking with geoscientists, EVL discovered that most spent their days studying maps while sitting, not standing, which inspired LambdaTable (see Figure 18), a 24-megapixel table-format display, built in 2004, with input pucks tracked by overhead cameras (Krumbholz et al., 2005). EVL wrote the application RainTable for LambdaTable, which was used in the traveling museum show Water: H2O = Life, organized by the Science Museum of Minnesota (SMM), University of Minnesota, and American Museum of Natural History (Svistula, 2008). RainTable enabled multiple users to generate virtual rainfall on top of topographical and aerial maps and learn about water flow over different terrains.
Figure 18.

LambdaTable running RainTable.

EVL's involvement with the geoscience community is what sparked the term Immersive Empathic Design (see Section 1.1), as EVL PhD student Yu-Chung (Julian) Chen spent a summer in Minnesota observing how geoscientists studied lake and ocean sediment cores to better understand past periods of global warming and cooling (Chen, 2011; EVL, 2011b). In collaboration with the US National Lacustrine Core Repository (LacCore) at the University of Minnesota and the Integrated Ocean Drilling Program (IODP) through the Joint Oceanographic Institutions, EVL developed the CoreWall desktop environment using a Personal GeoWall2, a system with a single computer driving up to four LCD panels. CoreWall could display section images from one or more cores along with discrete data streams and nested images, and could interface to databases of core images and sensor logs to fetch data from remote CoreWall repositories (see Figure 19) (Davis, 2004; EVL, 2004d). CoreWall was a reaction to how geoscientists then worked—with paper printouts, handwritten notes, and core photographs—as shown in a photo of collaborator Franco Talarico (see Figure 20) of the international ANDRILL (ANtarctic DRILLing) project, who studied cores to identify and classify clasts (rock sediment, silt, sand, and gravel fragments) and charted them graphically on paper logs. CoreWall was widely adopted by geological core drilling expeditions around the world, including the National Lacustrine Core Facility, the British Ocean Sediment Core Research Facility, the Antarctica geological drilling project, and the Integrated Ocean Drilling Program's JOIDES Resolution scientific drilling vessel.
Figure 19.

Chen in front of EVL's CoreWall system.

Figure 20.

Talarico and his clast drawing sheets.

LambdaVision, built in 2004, was a 17-foot-wide, 105-megapixel display composed of 55 LCD panels and driven by a 30-node PC cluster, built to help researchers effectively use a room-sized digital wall to explore ultra-high-resolution imagery. USGS was an early user, as previously mentioned, using it to display aerial photography of major cities (see Figure 21) (EVL, 2004b). As EVL's SAGE software began to mature (see Section 4.2), LambdaVision was used to display multiple windows with high-resolution information (see Figure 22).
Figure 21.

LambdaVision displaying USGS's high-resolution map of Chicago.

Figure 22.

LambdaVision and SAGE.


To make space for LambdaVision in 2004, EVL dismantled the original CAVE in its main lab. UIC's second CAVE continued to be used until decommissioned in 2006 to make way for a motion capture studio.

In late 2008, EVL integrated the latest technologies—larger, higher-resolution commodity flat panels and IR tracking—to create the multi-user, multi-touch systems TacTile (Kahler et al., 2009) and OmegaDesk (Febretti et al., 2011). TacTile was like a big smartphone screen, but multi-touch. OmegaDesk used motion-sensor technology and multi-touch to enable users to seamlessly interact with and view hybrid 2D and 3D content.

Both were used with a broad range of applications. For example, RainTable was ported to TacTile and initially shown at several major conferences; it was a big hit among children during Family Days at the 2009 American Association for the Advancement of Science (AAAS) conference (see Figure 23)! SMM adopted it for its traveling museum shows, and several research institutions built and used their own. EVL faculty used TacTile to teach the Video Game Design class, and the students found it challenging to design new user interfaces for multi-player games around a small table.
Figure 23.

TacTile.

Starting in 2009, OmegaDesk was used by other UIC departments working on a project funded by the US Air Force to study the histology, or microscopic structure, of skin tissue to determine the extent of damage resulting from the use of foreign-made, high-powered, directed-energy medical devices. EVL developed a Histology Viewer for OmegaDesk to enable users to see both 2D cross-sections and 3D volumetric reconstructions at high resolution, and to use multi-touch interaction and hand gestures to manipulate the data (see Figure 24) (Bassiony et al., 2014).
Figure 24.

OmegaDesk.

OmegaLib, a C++/Python middleware library, was initially developed to support OmegaDesk (Febretti et al., 2014). It was expanded to run all EVL's visualization systems, replacing CAVELib as EVL's standard development library, and was interfaced to SAGE.

Initially, EVL considered building OmegaDesk using Dynallax (see Figure 25), an autostereo project EVL had started investigating almost 10 years prior and decided to continue (Peterka et al., 2007b; Peterka, 2007; Chau et al., 2012). Dynallax could simultaneously display autostereoscopic 3D images and 2D images, enabling users to see 2D and 3D information without having to wear special glasses, which nascent commercial systems could not do. A drawback, however, was that the brightness was significantly diminished, so EVL did not pursue it.
Figure 25.

Dynallax.

Around this same time, EVL was inspired by the University of Michigan's work on Project Rooms (Covi et al., 1998; Olson et al., 1998; Teasley et al., 2000) to create a digital version of a Project Room with modern assets that would be as easy to use as physical flip charts and Post-It notes. This led to the development of the Cyber-Commons room, EVL's term for a technology-enhanced room that could support local and distance collaboration and group-oriented problem solving (Leigh & Brown, 2008). The Cyber-Commons room contained LambdaVision2 (see Figure 26), a second-generation LambdaVision: a 20 × 6-foot, 6 × 3 tiled display wall running SAGE, built from NEC displays whose inter-tile borders were only 7 mm thick when tiled edge-to-edge within the framing, virtually eliminating the windowpane effect of LambdaVision's noticeable 35-mm tiled borders. In 2010, EVL installed the largest touch screen built to date on the wall. It was connected at 20 Gbps to the campus research network to support high-resolution simulations and visualizations. Cyber-Commons was regularly used for meetings and classes, making advanced technology accessible to large student populations and enabling EVL to better understand the role of high-performance and ubiquitous computing in future classrooms.
Figure 26.

Cyber-Commons and LambdaVision2.

Receiving an NSF award in 2009 to build CAVE2, EVL started working with Planar Systems, a company that sold 2D LCD displays, on the design of an ultra-thin-border, flat-panel 3D display. This collaboration proved mutually beneficial, as EVL eventually had technology that surpassed what was commercially available, and Planar brought a new product, the Clarity Matrix, to market (Planar, 2012). By 2012, EVL reconfigured Cyber-Commons to install Planar's prototype 3D displays (see Figure 27) to test how they would look in CAVE2. EVL also modified SAGE to handle 2D as well as 3D windows.
Figure 27.

Cyber-Commons with Planar displays.


In 2006, EVL received NSF funding for Project Lifelike, a joint project between EVL and the University of Central Florida (UCF) to develop and evaluate lifelike, natural computer interfaces as portals to intelligent systems (Gonzalez et al., 2013). An NSF program officer was retiring and was asked to document the details of a program he had created decades earlier; however, rather than have him write a manual, NSF wanted a lifelike avatar of him to interact with users and answer questions on specific topics in spoken natural language. Using lessons learned from AccessBot, described earlier, avatars were designed for large screens, so they were life-size as well as lifelike.

UCF focused on natural language processing, and EVL focused on creating a realistic, interactive avatar and on system evaluation. Project Lifelike was featured on the Discovery Science channel in "Future of: Immortal Avatars" in 2009 and on NOVA's scienceNOW (NOVA, 2011). In 2014, Project Lifelike was used to create an avatar of Alan Turing for a museum show (see Section 5.4).

4.1.3 3D Camera

In 2009, DeFanti, who was then at Calit2-Qualcomm Institute, and Sandin, at EVL, collaborated with Richard Ainsworth to develop CAVEcam (see Figure 44), an automated, spherical, still photography camera system that created panoramic 3D immersive experiences for viewing in CAVE-like displays (Ainsworth et al., 2011). Researchers wanted to bring high-resolution 3D images of the real world into VR, not just synthetic worlds, just as geoscientists needed the GeoWall stereo projection system to best teach students about rock formations. CAVEcam took 360 × 180-degree panoramas, 72 pictures per eye, with each image 10 megapixels in resolution, which were then assembled into 500 megapixel-per-eye stereo images using a stitching program called PTGui Pro.
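
A quick back-of-envelope check of those figures (the overlap fraction is inferred from the stated numbers, not given explicitly in the source):

```python
shots_per_eye = 72
mp_per_shot = 10                       # megapixels per photo
raw_mp = shots_per_eye * mp_per_shot   # 720 MP of raw pixels per eye
stitched_mp = 500                      # reported stitched size per eye
overlap = 1 - stitched_mp / raw_mp     # ~31% consumed by stitching overlap
print(f"raw {raw_mp} MP -> stitched {stitched_mp} MP ({overlap:.0%} overlap)")
```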

4.2 SAGE Collaboration and Visualization Software

From 2002–2009, NSF funded Calit2-Qualcomm Institute and EVL to lead the OptIPuter project, so named for its use of OPTical networking, Internet Protocol, and compUTER technologies (Smarr et al., 2003, 2009; Pieper et al., 2009). Larry Smarr, principal investigator, was previously NCSA director and an EVL partner, and when he moved to UC San Diego in 2000, he founded and became director of Calit2 (now Calit2-Qualcomm Institute) and continued EVL collaborations.

OptIPuter envisioned how researchers would do computing in a world with unlimited network bandwidth. The OptIPuter proposed multi-Gbps networks as the backplane of a planetary-scale computer whose distributed components—compute clusters, data storage systems, scientific instruments, visualization systems—would enable scientists to compute, access, and view ultra-high-resolution visualizations on large-scale display walls. Such data superhighways were needed because the shared Internet and World Wide Web were engineered to handle megabyte-sized objects, whereas scientific instruments generated gigabyte- to terabyte-sized datasets. OptIPuter aimed to make interactive access to remote gigabyte visualization data objects as easy as the Web made access to remote lower-resolution images.
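
A rough calculation shows why dedicated multi-Gbps lambdas mattered for gigabyte-sized visualization objects; the protocol-efficiency factor below is an assumed figure:

```python
def transfer_seconds(size_gb, link_gbps, efficiency=0.7):
    """Back-of-envelope transfer time for size_gb gigabytes over a link;
    'efficiency' (protocol overhead, pacing) is an assumed factor."""
    return (size_gb * 8e9) / (link_gbps * 1e9 * efficiency)

for gbps in (0.01, 1, 10):   # shared-Internet-era vs OptIPuter-era links
    print(f"1 GB over {gbps:>5} Gbps: {transfer_seconds(1, gbps):8.1f} s")
# ~1143 s at 10 Mbps, ~11.4 s at 1 Gbps, ~1.1 s at 10 Gbps
```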

By 2004, after observing how OptIPuter's geoscience researchers from the UCSD Scripps Institution of Oceanography (SIO) and neuroscience researchers from the UCSD National Center for Microscopy and Imaging Research (NCMIR) worked, EVL began developing SAGE (Scalable Adaptive Graphics Environment) as the OptIPuter's visualization middleware (Renambot et al., 2009, 2016).

The OptIPuter's distributed network testbed was part of an evolving 10-Gbps regional, national, and international Research & Education Network infrastructure, which EVL helped architect in the 1990s with STAR TAP and then with StarLight (see Section 3.2). LambdaVision2 had a 10-Gbps connection from EVL to StarLight, where it interconnected with this evolving global infrastructure, and since SAGE was being developed on LambdaVision2, SAGE was designed to scale up rendering capacity and was a prime candidate for aggressively evaluating high-performance networks.

The SAGE Visualcasting service enabled multi-point distance collaboration, whereby high-resolution visualizations and high-definition video streams (visual and audio) were replicated and sent to multiple sites, each with variable-sized tiled displays (Jeong et al., 2010). Visualcasting enabled researchers to simultaneously communicate with each other and share visualizations and videos, and was an important advancement over traditional network multicasting that required network engineers to implement on a case-by-case basis.
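
At its core, Visualcasting is application-level replication: a relay service receives one stream from the sender and fans each frame out to every subscribed site, sparing the sender N separate uploads and avoiding router-level multicast configuration. A minimal sketch of that pattern (ports and host names are placeholders, and this is not SAGE's actual implementation):

```python
import socket

def relay(listen_port, subscribers):
    """Receive a frame stream on listen_port; duplicate it to each site.

    subscribers: list of (host, port) pairs, one per display site.
    """
    src = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    src.bind(("", listen_port))
    out = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    while True:
        chunk, _ = src.recvfrom(65535)      # one frame chunk from the sender
        for site in subscribers:            # fan out to every subscriber
            out.sendto(chunk, site)

# relay(9000, [("siteA.example.org", 9001), ("siteB.example.org", 9001)])
```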

SAGE Visualcasting was used by the EVL/NTT Network Innovations Laboratories collaboration to stream ultra-high-definition visualizations and animations (4K, 8K, and more) to scalable tiled displays (see Section 3.2).

Sharp Laboratories of America, Inc., partnered with EVL to learn more about large display innovations and how scientists wanted to use them. To demonstrate Visualcasting, EVL and Sharp entered the SC 2008 Bandwidth Challenge competition (EVL, 2008c). Their demo, “Global Visualcasting—Collaboration in Ultra-Resolution Work Environments,” streamed 4K and full-high-definition video, audio, and visualizations among three booths on the exhibition floor, two Midwestern universities, and several research institutes in Korea, Japan, Australia, Russia, and the Czech Republic, to create a sustained global teleconference. Participating partners already had persistent collaboration spaces, consisting of network-connected tiled display walls running SAGE. Sharp provided a prototype 4K (4096 × 2160), 64-inch, LCD display, which EVL used to run SAGE. The Bandwidth Challenge, a major annual forum showcasing leading-edge, international, networked applications, selected “Global Visualcasting” as one of the year's finalists (see Figure 28).
Figure 28. Sharp Labs’ Sachin Deshpande and EVL's Luc Renambot holding the SC 2008 Bandwidth Challenge award in front of Sharp's 4K display.
The more EVL interacted with OptIPuter's geoscience and neuroscience researchers, the more capabilities were added to SAGE. While they needed large displays to view ultra-high-resolution images, such as earthquake images (see Figure 29) and brain electron microscope images (see Figure 30), some images were too big to fit on a tiled display, so they needed to be able to dynamically pan and zoom through them. They also wanted to open windows with related movies, images and/or text for context; to have multiple users simultaneously interact with those windows using their laptops; to share content to the display; and to move pointers on the screen to focus attention.
Figure 29. OptIPuter: SIO researchers using SAGE.
Figure 30. OptIPuter: NCMIR researchers using SAGE.
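Panning and zooming through an image larger than the display is typically handled with a tiled image pyramid: only the tiles that intersect the current viewport, at the pyramid level matching the current zoom, need to be fetched and drawn. A minimal sketch of that visibility computation (our illustration of the technique, not SAGE's actual code):

```python
import math

def visible_tiles(vx, vy, vw, vh, zoom, tile=512):
    """Return (pyramid level, x tile range, y tile range) for a viewport.

    Level 0 is full resolution and each level halves the image, so we pick
    the level whose scale best matches the requested zoom, then intersect
    the viewport (given in full-resolution image coordinates) with the
    tile grid at that level.
    """
    level = max(0, int(math.floor(math.log2(1.0 / zoom))))  # coarser when zoomed out
    scale = 2 ** level
    x0, x1 = int(vx / scale) // tile, int((vx + vw) / scale) // tile
    y0, y1 = int(vy / scale) // tile, int((vy + vh) / scale) // tile
    return level, range(x0, x1 + 1), range(y0, y1 + 1)

# Example: an 8192x4096 viewport into a gigapixel image at 25% zoom
level, xs, ys = visible_tiles(100_000, 50_000, 8192, 4096, zoom=0.25)
print(level, list(xs), list(ys))  # level 2; only a handful of tiles are needed
```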

Since its inception, SAGE has continued to evolve, and it now controls a variety of high-resolution displays at hundreds of sites around the world. SAGE developers and users regularly use it for meetings and classes, and their feedback helps harden the software and create new features.

4.3 Advanced Cyberinfrastructure: Computers and Networking

By 2004, Euro-Link (Section 3.2) had grown from a 45-Mbps circuit to multiple 10-Gbps links. To continue this effort, EVL received NSF funding for TransLight in 2004 (DeFanti et al., 2003; STAR TAP, 2004), in which StarLight and NetherLight (an open network exchange, like StarLight, hosted by SURFnet, the Dutch Research & Education Network) would groom aggregate traffic in North America and Europe, respectively, and then provide a network across the Atlantic Ocean connecting the two exchanges. As before, other universities received NSF funding to connect the United States to South America and Asia.

In 2010, EVL received the NSF TransLight/StarLight award to focus on communication services and advanced applications, and in 2015, partnered with NU on StarLight Software Defined Networking Exchange (SDX) to enhance StarLight with advanced, diverse, reliable, persistent, and secure networking services (STAR TAP, 2015).

From 2004–2009, EVL paid for CAVEwave (see Figure 31), a 3,200-mile, 10-Gbps wavelength inside National LambdaRail (NLR), an advanced US research and education network (EVL, 2004c). CAVEwave was initially used to support OptIPuter application development and networking experiments. It supplemented the existing best-effort shared academic network by providing researchers with a deterministic network with guaranteed bandwidth, schedulable times, and known latency characteristics. CAVEwave initially connected Chicago to Seattle to southern California and was subsequently extended to the East Coast.
Figure 31. CAVEwave on the NLR Footprint.
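One reason a dedicated wavelength has "known latency characteristics" is that its delay is dominated by fixed propagation time rather than variable queuing in shared routers. A quick estimate for the 3,200-mile path (our arithmetic, not a measured CAVEwave figure):

```python
# Propagation delay over a 3,200-mile fiber path; light in fiber travels at
# roughly two-thirds of c, about 2e8 m/s (a standard rule of thumb).
miles = 3200
meters = miles * 1609.34
one_way_ms = meters / 2e8 * 1000
print(f"~{one_way_ms:.0f} ms one-way, ~{2 * one_way_ms:.0f} ms round trip")
# ~26 ms one-way, ~51 ms round trip, essentially constant on a dedicated lambda.
```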
Also important, GLIF (Global Lambda Integrated Facility) (see Figure 32) was founded in 2001, with EVL as a founding participant (GLIF, 2019). GLIF was a virtual international organization that did not own any resources itself. Its participants, consisting of international network professionals, network engineers, computer scientists, and computational scientists, agreed to share their resources, thereby building a worldwide facility of interconnected, interoperable optical wavelengths (also known as lambdas) connected to computers, data storage, visualization systems, and instruments at participating sites around the globe, like the OptIPuter, but on a global scale (GLIF, 2011). After almost 20 years, GLIF disbanded, but NU, along with Calit2-Qualcomm Institute and EVL, absorbed its user-facing activities and renamed the effort the Global Research Platform (GRP, 2019).
Figure 32. GLIF map 2017.

4.4 Community Engagement

iGrid events (see Section 3.3) started in 1998 and continued in 2000, 2002, and 2005, as community-driven biennial events coordinated with the GLIF organization (see Section 4.3) to accelerate the use of multiple 10-Gbps international and national networks, to advance scientific research, and to educate decision makers, academics, and industry researchers on the benefits of hybrid networks (Brown, 1998, 2000, 2002, 2005). iGrid events all featured real-time application demonstrations from numerous international participants, as well as symposiums, panels, and master classes on applications, middleware, and underlying cyberinfrastructure.

One iGrid 2005 demo, “International Real-time Streaming of 4K Digital Cinema” (Shimizu et al., 2006), was so successful that the organizers wanted to continue and expand their collaboration with other interested parties, including networking and visualization researchers as well as movie production companies. This led to the creation of the CineGrid organization, a global effort to support research and education for 21st-century filmmaking by applying advanced networks, middleware, and application tools to the production, distribution, archiving, and display of new digital media (Smarr et al., 2007). CineGrid, a nonprofit organization, was created by Pacific Interface, Inc., in collaboration with Calit2-Qualcomm Institute, EVL, NTT Network Innovations Laboratories, Keio University, and the Digital Cinema Technology Forum and Digital Cinema Consortium of Japan.

In 2007, astronomers at Chicago's Adler Planetarium worked with EVL to create interactive exhibits of flyovers of Earth's moon and Mars, which led to the inception of Adler's Space Visualization Laboratory. This working laboratory, staffed with astronomers, technologists, artists, and educators, gave museum visitors a firsthand look at new interactive and immersive visualizations and an opportunity to hear presentations featuring these visualizations. With EVL's help, a similar space was built at the ‘Imiloa Astronomy Center in Hawaii in 2016.

In 2009, Adler Planetarium worked with EVL to build Moon Wall, a 4 × 4 tiled-display wall, to display recent images from NASA's Lunar Reconnaissance Orbiter mission to the moon. Visitors could virtually fly over the surface of the moon using a joystick. It also showed EVL's 3D rendering of Mars, developed with data from NASA and the European Space Agency.

5.1 VR and Visualization Systems and Applications

5.1.1 CAVE2 and Applications

The CAVE2 Hybrid Reality Environment was introduced in October 2012. Lessons learned from developing the CAVE, LambdaVision, Cyber-Commons, and Varrier technologies were incorporated in the development of CAVE2 (Febretti et al., 2013; Leigh et al., 2013; Reda et al., 2013), a room-scale system that integrated 3D immersive simulation and 2D information analysis. To make room for CAVE2 in EVL's main lab, LambdaVision was dismantled and given to the UIC ACM Computer Science student chapter for use in their space.

CAVE2, with its 22-foot diameter and circular (320-degree) design, initially used 72 passive stereo displays (18 columns × 4 rows) driven by a 36-node computer cluster. It could be run as a 37-megapixel 3D display or a 74-megapixel 2D display, making it a hybrid device that could easily adapt based on usage. In 3D, CAVE2 used passive stereo with camera tracking, not the expensive, bulkier, active-stereo glasses and tethered magnetic tracking used in the CAVE system. The familiar passive stereo glasses used by movie theaters for 3D theatrical releases were a welcome shift.
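The 2D figure being exactly twice the 3D figure reflects the row-interleaved passive stereo: every panel contributes its full resolution in 2D, while in 3D alternating rows carry left- and right-eye imagery, halving the pixels per eye. A quick check (the 1366 × 768 per-panel resolution is our assumption; the layout and megapixel totals come from the text):

```python
# CAVE2 aggregate resolution from the 18-column x 4-row panel layout.
columns, rows = 18, 4
panel_w, panel_h = 1366, 768                     # assumed per-panel resolution

pixels_2d = columns * rows * panel_w * panel_h   # every row visible to both eyes
pixels_3d = pixels_2d // 2                       # row-interleaved stereo halves per-eye pixels

print(f"2D: {pixels_2d / 1e6:.1f} megapixels")   # ~75.5 MP, close to the 74 MP reported
print(f"3D: {pixels_3d / 1e6:.1f} megapixels")   # ~37.8 MP, close to the 37 MP reported
```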

The two middle rows consisted of Planar's Clarity Matrix ultra-thin-border 3D LCD displays, which EVL partnered with Planar to design and develop in 2009 (see Section 4.1.2). The top and bottom rows were custom-designed displays, also co-designed by EVL and Planar, in which micropolarizer array filters were offset to accommodate off-axis viewing, as illustrated in Figures 33 and 34, ensuring a finely tuned CAVE2 stereo experience.
Figure 33. Normal polarizer registration.
Figure 34. Shifted (−/+) polarizer registration for top and bottom rows.

A recessed back wall was later added, increasing the number of displays to 86, to give the illusion of a complete 360-degree surround environment or to act as a second display surface for ancillary content. As GPU hardware acceleration improved, the number of computers driving the system was reduced to eight.

New software was written to drive the CAVE2 in 3D and 2D modes. For 2D work, EVL used SAGE2 (see Section 5.2). For 3D work, new libraries were created: Electro, written in Lua (EVL, 2004e), and OmegaLib, written in C++/Python (Febretti et al., 2014). These scriptable, multi-platform tools enabled users to quickly get their data and applications into CAVE2. Additional CAVE2 libraries were developed to control graphics, sound, user inputs, and tracking.

By the middle of the decade, as game engines such as Unity and Unreal Engine became more powerful and more ubiquitous, EVL moved to the Unity platform. The CAVE2 Unity Library and the Omicron plug-in (Nishimoto, 2020) let students write applications and deploy them to CAVE2, head-mounted displays (HMDs), laptops, and/or smart phones. A Unity CAVE2 Simulator, along with a set of simple demo examples, let users create and test CAVE2 applications without needing access to the CAVE2 system. These libraries became extensively used by CS and Design students enrolled in the Creative Coding class to create fully functional CAVE2 experiences within a single semester without any prior Unity experience.

CAVE2 was designed to support research groups (faculty, students, collaborators) working for long periods throughout the day within the space. A great deal of care was taken to make the space comfortable with controlled lighting and sound, and to make the space reconfigurable with portable tables and chairs. CAVE2 continues to be regularly used in 3D VR mode, 2D project room mode, or 3D/2D hybrid mode, for research and class work.

Examples of CAVE2 scientific applications include ENDURANCE, Dark Sky, and Connectome (Marai et al., 2016). The NASA-funded ENDURANCE (Environmentally Non-Disturbing Under-ice Robotic ANtarctiC Explorer) (see Figure 35) project used VR to further biological research in terrestrial environments that are similar to those found on other planets, with the goal of applying lessons learned to planetary science. Specifically, NASA was interested in capturing and analyzing information about the water ice surface of Jupiter's moon, Europa, by first gathering information about West Lake Bonney, a perennially ice-covered lake in the McMurdo Dry Valleys of Antarctica. CAVE2 was used to clean up and view sonar data of the bathymetry of the lake, and to identify locations where 3D biogeochemical information was captured by an autonomous underwater vehicle. ENDURANCE was a collaboration of EVL, UIC Earth and Environmental Sciences, Stone Aerospace, NASA Ames Research Center, and Montana State University.
Figure 35. ENDURANCE data in CAVE2.
Dark Sky (see Figure 36) used VR to study large-scale cosmological simulations of dark matter formations. The data and required tasks were provided by the Dark Sky project hosted by Stanford University. UIC CS undergraduates and graduate students developed a CAVE2 immersive visual mining and analysis tool to help astronomers interactively navigate and mine large-scale cosmological simulation data, which was tested by astronomers from Adler Planetarium (Hanula et al., 2015).
Figure 36. Dark Sky in CAVE2.
A Connectome (see Figure 37) is a map of neural connections in the brain. CAVE2 participants could walk inside streamlines depicting the white matter fiber tracts in the brain, derived from diffusion tensor images (DTI) obtained in an MRI scan. The fiber tracts were color-coded by their primary direction (green: front-back, red: left-right, blue: up-down). EVL collaborated with the UIC Department of Psychiatry, as they needed large stereo displays to see the details of their 3D data. Their goal was to compare patients’ connections over time as they underwent treatment.
Figure 37. Connectome in CAVE2.
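This color scheme is the standard diffusion-tensor convention: the tract's principal direction is treated as a unit vector whose absolute x, y, and z components become red, green, and blue. A minimal sketch of the mapping (our illustration of the convention, not the application's code):

```python
import math

def tract_color(dx: float, dy: float, dz: float) -> tuple:
    """Map a fiber tract's principal direction to RGB.

    Red = left-right (x), green = front-back (y), blue = up-down (z),
    matching the text. Absolute values make opposite directions the
    same color, since a tract has no preferred sign.
    """
    norm = math.sqrt(dx * dx + dy * dy + dz * dz) or 1.0
    return (abs(dx) / norm, abs(dy) / norm, abs(dz) / norm)

print(tract_color(0.0, 1.0, 0.0))  # front-back tract -> pure green (0.0, 1.0, 0.0)
print(tract_color(1.0, 0.0, 1.0))  # oblique tract -> equal parts red and blue
```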
In 2010, Sandin and collaborators created the artwork Particle Dreams in Spherical Harmonics (see Figure 38), his latest in a series of VR art installations that let the viewer/participant interactively create immersive, visual, and sonic experiences. It is based on the physical simulation of over one million particles with momentum and elastic reflection in an environment with gravity. This work premiered in January 2011 in Calit2-Qualcomm Institute's StarCAVE as part of the gallery@calit2 exhibition and was subsequently shown at EVL (EVL, 2011a).
Figure 38. Particle Dreams in the CAVE2.

5.1.2 Continuum

In 2016, the success of CAVE2 as a highly mediated meeting room led to the reinvention and renovation of Cyber-Commons, renamed Continuum (see Figure 39), with similar, upgraded technologies. In 2019, Continuum was upgraded to be a highly instrumented, sensor-enriched smart space, consisting of a large, touch-enabled, 24-foot wide, 37-megapixel 2D screen and a separate 16-foot wide, 17-megapixel passive stereo tracked 3D screen (33 megapixels in 2D). Continuum has ceiling-mounted microphones, cameras, lighting control, and optical tracking throughout the room's footprint in support of research and user studies (see Figure 40). As with CAVE2, SAGE2 (and now SAGE3) is the underpinning middleware driving the displays and user interactions. Continuum is in continual use for CS and Design courses and weekly meetings. Other campus units book the room for meetings and to showcase EVL's visualization and collaboration technologies to campus visitors.
Figure 39. Class being held in Continuum, 2016.
Figure 40. Class being held in Continuum, 2019.

5.1.3 VR and AR Headsets

In the late 2010s, a wave of less expensive VR and Augmented Reality (AR) headsets brought VR back into the spotlight and reignited interest in the field. EVL purchased several different HMDs and provided a dedicated space in the main lab for use in coursework and research. The current generation of headsets is still too bulky and heavy to wear for long periods, but inevitably, as with most technologies, HMDs will evolve into lighter and less constraining devices in support of longer use.

In 2016, for the NASA-funded project SIMPLE (Sub-ice Investigation of Marine and Planetary-analog Ecosystems), the successor of ENDURANCE described above, EVL used HMDs so collaborators could view processed data visualizations without making a special trip to see them in the CAVE2 (see Figure 41) (EVL, 2013b). Users could swim through visualizations of the underside of the Ross Ice Shelf in Antarctica, captured with sonar by an autonomous underwater vehicle (AUV), follow the AUV's path (yellow line), and view visualizations of salinity, pressure, conductivity, and oxygen concentrations.
Figure 41. SIMPLE, viewed using an HMD.
For over 20 years, EVL has collaborated with physical and occupational therapists at the Shirley Ryan AbilityLab (formerly the Rehabilitation Institute of Chicago) on immersive rehabilitation (Tsoupikova et al., 2015) and serious game applications (Naik, 2019) for patients with physical impairments. These applications typically have a collaborative component; for example, the patient uses a VR game to exercise while the therapist observes what the patient sees and how they respond, and then modifies, in real time, the effort required by the patient to compensate if the game is too hard or too easy. Today, these applications increasingly rely upon HMDs (see Figure 42).
Figure 42. Using HMDs for rehabilitation.
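The collaborative loop described above, in which the patient exercises while the therapist tunes the required effort against observed performance, can be expressed as a shared difficulty parameter adjusted in real time. A minimal sketch (our illustration; the AbilityLab applications' actual logic and parameter names will differ):

```python
class RehabGame:
    """Adaptive-difficulty loop: nudge effort toward a target success rate."""

    def __init__(self, effort: float = 0.5):
        self.effort = effort  # 0..1, e.g., required reach distance or speed

    def update(self, success_rate: float, target: float = 0.7, gain: float = 0.1):
        # Raise effort when the game is too easy, lower it when too hard;
        # a therapist override would simply set self.effort directly.
        self.effort = min(1.0, max(0.05, self.effort + gain * (success_rate - target)))

game = RehabGame()
for rate in [0.9, 0.9, 0.4]:              # observed per-round success rates
    game.update(rate)
    print(f"effort -> {game.effort:.2f}")  # 0.52, 0.54, 0.51
```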
AR headsets can augment outdoor and indoor locations with personalized information or augment 2D displays with 3D. In 2016, EVL worked with the Chicago History Museum and the School of the Art Institute on the mobile AR application Eastland Disaster (see Figure 43), letting people walking along Chicago's Riverwalk use their smart phones or tablets to point at specific locations and see actual photos superimposed, showing the Eastland ship and where it capsized in 1915, killing 844 passengers on board (Cavallo et al., 2016).
Figure 43. Eastland Disaster photos superimposed on Chicago's Riverwalk.
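Under the hood, anchoring a photo to a place reduces to comparing the device's compass heading with the bearing from the user's GPS fix to the photo's anchor point; the overlay is drawn when the anchor falls inside the camera's field of view. A minimal sketch of that test (our illustration; the coordinates are hypothetical, and the real application also handles elevation, distance, and rendering):

```python
import math

def bearing_deg(lat1, lon1, lat2, lon2):
    """Initial great-circle bearing from point 1 to point 2, in degrees from north."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    y = math.sin(dlon) * math.cos(phi2)
    x = math.cos(phi1) * math.sin(phi2) - math.sin(phi1) * math.cos(phi2) * math.cos(dlon)
    return math.degrees(math.atan2(y, x)) % 360

def overlay_visible(user_lat, user_lon, heading, anchor_lat, anchor_lon, fov_deg=60):
    """True if the anchor falls inside the camera's horizontal field of view."""
    b = bearing_deg(user_lat, user_lon, anchor_lat, anchor_lon)
    diff = (b - heading + 180) % 360 - 180   # signed smallest angular difference
    return abs(diff) <= fov_deg / 2

# Hypothetical anchor near the Chicago River; user looking roughly north.
print(overlay_visible(41.8868, -87.6188, heading=350,
                      anchor_lat=41.8875, anchor_lon=-87.6190))  # True
```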

5.1.4 3D Cameras

In 2014, EVL and Calit2-Qualcomm Institute received NSF funding to develop SENSEI (Sensor Environment Imaging), the next-generation CAVEcam (see Figure 44) (see Sections 3.1.1 and 5.4). SENSEI was conceived as a reconfigurable, ultra-high-resolution, spherical, real-time, data-acquisition, sensor-based camera system whose images could be viewed in large-scale VR systems. By 2019, the team successfully developed StarCAM, a hardware camera prototype (see Figures 45 and 46), and made inroads developing stitching software to handle the volume of video imagery generated (Wang et al., 2018; Wang, 2019; Meyer et al., 2019).
Figure 44. CAVEcam3.
Figure 45. SENSEI: StarCAM closeup without cover.
Figure 46. SENSEI: StarCAM v2 setup.
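At the heart of the stitching problem is mapping each camera ray's direction into a spherical (equirectangular) panorama, where longitude and latitude map linearly to image x and y. A minimal sketch of that projection step (our illustration; the real pipeline adds lens calibration, exposure blending, seam finding, and stereo handling):

```python
import math

def direction_to_equirect(dx, dy, dz, width, height):
    """Map a unit view direction to (x, y) in an equirectangular panorama.

    Longitude (atan2 of x over z) maps linearly to image x; latitude
    (asin of y) maps linearly to image y, with y = 0 at the top.
    """
    lon = math.atan2(dx, dz)                   # -pi..pi
    lat = math.asin(max(-1.0, min(1.0, dy)))   # -pi/2..pi/2
    x = (lon / math.pi + 1.0) * 0.5 * (width - 1)
    y = (0.5 - lat / math.pi) * (height - 1)
    return x, y

# A ray pointing straight ahead (+z) lands at the panorama's center.
print(direction_to_equirect(0.0, 0.0, 1.0, 8192, 4096))  # -> (4095.5, 2047.5)
```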

5.2 SAGE Collaboration and Visualization Software

In 2013, EVL received NSF funding to develop SAGE2 (Scalable Amplified Group Environment) (Marrinan et al., 2014; Renambot et al., 2016). SAGE2, the second-generation SAGE, was re-architected to take advantage of advances in web graphics (doing GPU accelerated graphics rendering within web browsers) and more robust PCs (where a large display could be driven by a single PC rather than a compute cluster, making it easier for scientists to acquire, use, and maintain display walls). Jason Leigh, inventor of SAGE, left UIC in 2014 for University of Hawaii at Manoa/LAVA and continues to collaborate with EVL on SAGE2, and now SAGE3 (see Section 6.1), development.

Whereas SAGE was written in C++, SAGE2 moved to a fully JavaScript code base, with all graphics rendered in the browser. By 2018, there were over 4,000 SAGE2 users at over 800 institutions in 18 countries worldwide, in a broad range of disciplines, such as archaeology, architecture, art, atmospheric science, biology, chemistry, civil engineering, communications, computer science, design, education, geoscience, health, library science, mathematics, medicine, meteorology, network engineering, neuroscience, physics, psychology, and statistics (Leigh et al., 2019; Belcaid et al., 2023).

5.3 Advanced Cyberinfrastructure: Computers and Networking

EVL continually investigates new computer technologies and has a strong working knowledge of GPUs, given all the computers it has purchased over the years to drive its visualization and VR displays (everything from mini-supercomputers to clusters to PCs). EVL's computers interface with both UIC's academic network and its research network.

In 2017, EVL was introduced to a next-generation computer architecture called composable infrastructure (Lowe, 2016) and immediately saw how it was applicable to UIC's Engineering faculty, who had GPU-hungry applications in visualization, image processing, artificial intelligence (AI)/machine learning, and the integration of AI with visualization (i.e., extracting data of interest for visualization when the data is too large to be output directly). In 2018, EVL received NSF funding for COMPaaS DLV (COMposable Platform as a Service Instrument for Deep Learning & Visualization), a computer system in which the computer's components—CPUs, several types of GPUs, storage, and networking, all interconnected with a high-bandwidth configurable fabric—formed a fluid pool of resources, such that different applications with different workflows could be run simultaneously, with each configuring the resources it required almost instantaneously, at any time. By the end of the NSF award in 2022, COMPaaS was being used by over 40 UIC College of Engineering faculty, students, and collaborators, and continues to attract new faculty and students as well as domain scientists across campus (Brown et al., 2019; Chen et al., 2022; Long et al., 2022).

COMPaaS facilitates new breakthroughs in theoretical research and challenges long-standing paradigms in visual computing. For example, several EVL projects helped show that, in interdisciplinary collaborations, Activity Centered Design leads to higher success rates than Human Centered Design (Marai, 2018). As another example, the Continuum large display helped document that domain scientists prefer a Details-first design approach to an Overview-first approach; for instance, computational fluid dynamics collaborators physically walked up to the Details view on EVL's large display at the start of each analysis session (Luciani et al., 2019). Other EVL works (Wentzel et al., 2020; Floricel et al., 2022) have since been included in graduate-level curricula and textbooks (Preim et al., 2023; BioMedVis, 2022).

High-speed networking continued to be of significant benefit to researchers, enabling them to collaborate over distance, share data, and access remote computational resources in real time. With NSF funding, UIC upgraded its campus research network to 100 Gbps, a 10-fold increase over the previous decade, and deployed 100 Gbps into EVL, where it connects to the Lab's many devices, including CAVE2.

5.4 Community Engagement

CAVEcam, the spherical panoramic, 3D-immersive still-photography camera, was used in 2011 by Tom DeFanti and colleagues Greg Wickham and Adel Saad to capture spectacular 3D surround images of Egypt's temples at Luxor, a major world heritage site, and bring these splendors back to the United States for viewing in Calit2-Qualcomm Institute's SunCAVE and EVL's CAVE2 (Ainsworth et al., 2016). Ainsworth did the stitching for these images and dozens more CAVEcam images.

In 2014, the Orlando Science Center approached EVL and University of Central Florida about using Project Lifelike (see Section 4.1.2) to create an interactive exhibit featuring an avatar of Alan Turing, considered the father of modern computer science, to motivate kids to study CS or engineering (EVL, 2014). Turing's avatar interacted with attendees, teaching them bits of CS history and helping them make their own avatars using a touch screen. EVL also worked with Adler Planetarium on an interactive avatar of former astronaut Jim Lovell.

In 2014, Daria Tsoupikova, EVL/design faculty, collaborated with University of Bergen (Norway) and Temple University on Hearts and Minds: The Interrogations Project, an interactive VR narrative performance in EVL's CAVE2 (Tsoupikova et al., 2017). It visualizes stories of violence and post-traumatic stress experienced by ordinary American soldiers during the American-led counterinsurgency and counter-terrorism campaigns in Iraq in the years after September 11, 2001. This artwork was subsequently moved to portable displays, domes, projection surfaces, and HMDs to reach new audiences across educational, arts, and public arenas.

In 2019, Tsoupikova collaborated with the Chicago Design Archive (CDA) and Columbia College Chicago on the Chicago Design Milestones 1920–2019, an installation visualizing the evolution of design in the CDA collection over the last 10 decades (see Figure 47) (EVL, 2019b). It was shown at 150 Media Stream, a gallery in the lobby of a downtown Chicago high-rise with a unique display wall made up of 89 LED blades stretching 150 feet long and rising 22 feet high.
Figure 47. Chicago Design Milestones 1920–2019 installation.

6.1 SAGE Collaboration and Visualization Software

In April 2020, LAVA, EVL, and Virginia Tech received NSF funding to develop SAGE3 (Smart Amplified Group Environment). Shortly thereafter, COVID struck. The development team recognized the sociological changes brought about by the pandemic, notably shifting the workplace to a hybrid office/home environment that required not just video-teleconferencing but data-driven tools for large, distributed groups. The team also knew that there had been several major technological changes since SAGE2 was designed in 2013; for example, web browsers and languages were maturing; scientific communities were relying on Jupyter Notebooks; and science workloads were incorporating AI algorithms and programmable infrastructure. Therefore, it was decided to re-architect SAGE3 from the ground up.

SAGE3 leverages robust JavaScript frameworks and cloud services to reimplement and improve upon SAGE2’s capabilities, and it adds provisions to run and manage data science and AI services. Furthermore, SAGE3 enables users to work both at home on their desktop computers and laptops and in their research laboratories on tiled display walls, with content seamlessly shared across arbitrarily large workspaces and an unlimited number of workspaces (Kirshenbaum et al., 2020, 2021; Harden et al., 2022; Belcaid et al., 2023).

External monitors were gaining resolution and 4K TVs had become ubiquitous; users had high-resolution displays on their desktops and in their living rooms, so an infinite canvas served from their laptops via SAGE3 let collaborators bring the wall-scale workspaces they were familiar with (see Figures 48 and 49) into their home offices. When EVL returned to campus in mid-2022, its Continuum room, small meeting rooms, and office displays were upgraded to SAGE3, making it easier to conduct hybrid classes and meetings.
Figure 48. LAVA's SAGE Wall and laptops each access the SAGE3 infinite canvas.
Figure 49. EVL's SAGE Wall running SAGE3.

6.2 Advanced Cyberinfrastructure: Computers and Networking

By 2021, the COMPaaS computer system was enhanced with a public-facing JupyterHub server, as Jupyter had become a common software frontend for most machine learning work. COMPaaS was also enhanced with a current-generation PCI-Express fabric and expansion chassis, enabling researchers to add their own accelerators or compute nodes to the pool of available resources. COMPaaS is available to users as bare metal for systems and network research applications, but primarily supports containerized GPU applications using Kubernetes orchestration (Kubernetes, 2023). Users can deploy domain-specific or customized Docker containers on COMPaaS GPUs. Kubernetes orchestration manages the resource requisitions, network ingress, data storage, and container access for COMPaaS users. Jupyter Notebook implementations provide users with workflows that are reusable and easy to share among researchers. The skills learned developing code on COMPaaS enable its users to build scalable applications faster, with knowledge that transfers directly to other research problems.
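In practice, requesting a GPU on a Kubernetes-orchestrated system like COMPaaS comes down to declaring the GPU in a container's resource limits; the orchestrator then schedules the container onto a node with a free device. A minimal sketch using the official Kubernetes Python client (the image name and namespace are hypothetical, not COMPaaS's actual configuration):

```python
from kubernetes import client, config

# Load credentials from the local kubeconfig, as a cluster user would have.
config.load_kube_config()

# Declare a single-container pod that requests one NVIDIA GPU; the
# scheduler places it on a node with a free device.
pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="train-job"),
    spec=client.V1PodSpec(
        restart_policy="Never",
        containers=[
            client.V1Container(
                name="trainer",
                image="registry.example.org/lab/pytorch-train:latest",  # hypothetical image
                command=["python", "train.py"],
                resources=client.V1ResourceRequirements(
                    limits={"nvidia.com/gpu": "1"}  # one GPU via the NVIDIA device plugin
                ),
            )
        ],
    ),
)

client.CoreV1Api().create_namespaced_pod(namespace="research", body=pod)  # hypothetical namespace
```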

6.3 Community Engagement

Among EVL's early post-pandemic experiences was welcoming visitors to an experimental, multi-user immersive theatrical performance developed by EVL/Design's Daria Tsoupikova in collaboration with Chicago's Goodman Theatre and director Jo Cattell, called Hummingbird (see Figure 50). Hummingbird showed how VR can be an artistic, storytelling medium integrated into live performance, expanding upon traditional live theater. It used EVL's CAVE2, Continuum, and HMDs: several participants and a principal actor wore HMDs to take part in a collaborative, narrative performance that took place partly in the real world and partly in VR. Additional audience members watched the story unfold in the virtual world on large displays with shared audio. Hummingbird premiered as part of the Goodman Theatre's New Stages Festival in December 2021, was shown at the SIGGRAPH 2022 conference in August 2022, and was presented at the Chicago Children's Theatre in March 2023 (Hummingbird, 2022; Tsoupikova et al., 2022).
Figure 50. Hummingbird in EVL's Continuum room.
Tsoupikova and EVL created Chicago Design Through the Decades for ART on THE MART (see Figure 51), shown November–December 2022. ART on THE MART is the world's largest digital projection system, covering 2.5 acres (roughly two football fields) of the river-facing façade of Chicago's Merchandise Mart and adding to Chicago's vibrant public art scene. The event was a collaboration among EVL, the Chicago Design Archive, Columbia College Chicago, NASA, Sky White Sound, vocalist/activist Nnelolo Karen Wilson-Ama'Echefu, and rapper Elijah Robb. The installation visualizes the history of Chicago design and highlights Chicago's contributions (Poet, 2022).
Figure 51. ART on THE MART.

7.1 Student Successes

EVL undergraduate and graduate students have won multiple Best Paper and Honorable Research Awards, NSF Graduate Research Fellowship Awards and nominations, UIC Chancellor's Undergraduate Research Awards, UIC Engineering Dean's Scholar Fellowship Award, Doctoral Colloquium selections at premier conferences, and Computing Research Association fellowships.

EVL students secure excellent jobs upon graduation, putting their skills to use in: (1) special-effects houses (PDI/DreamWorks, Rhythm & Hues, Digital Domain, MetroLight Studios, Cinesite, Disney Animation, Pixar, Weta Digital); (2) the gaming industry (Amazon Games, High Voltage, Valkyrie Studios, Midway, CodeWalla); (3) companies (Amazon, Apple, Google, Time Warner TBS, General Motors, Microsoft, Exxon Mobil, Qualcomm, Motorola, Mechdyne); (4) research labs (ANL, NASA Goddard, Northrop Grumman, Qualcomm, Thompson Scientific); and (5) universities (USC Cinema, San Francisco State, University at Buffalo, Indiana University, University of Massachusetts, University of Chicago).

A common phrase upon graduation is “No one ever leaves EVL.” Fostering a sense of family, community, and inclusion has been an important part of EVL's overall success. EVL makes every effort to stay in touch with its graduates. EVL attends many major conferences, such as SIGGRAPH, SC, and IEEE Vis, where former students reunite. Alumni frequently visit EVL, give presentations about their careers, and learn about EVL's latest innovations—and some adopt them for their jobs. Maintaining these relationships provides for a close-knit network and support system for all graduates.

7.2 Industry Collaborations, Commercialization, Patents, and Licenses

EVL, as a technology enabler and disruptor, maintains a multimillion-dollar research facility. EVL has a track record of working with industry partners to help them be more competitive in the global marketplace, either by working with them to commercialize EVL inventions or by educating them on how their commercial products can be re-engineered to meet the needs of the academic research and education market.

EVL's software (SAGE, Electro, OmegaLib, the CAVE2 Simulator, and more) is made available for download from GitHub as open source for non-commercial use. For commercial use, UIC's Office of Technology Management brokers software and hardware licensing agreements with interested companies, such as Mechdyne, Vadiza, Comtex, and Multivis. EVL has had a longstanding licensing relationship with Mechdyne, which commercialized the CAVE in the 1990s and the CAVE2 in 2013.

Over the years, select EVL faculty and students have received patents for their inventions. When developing the CAVE2, EVL worked with display manufacturer Planar Systems and in 2017 received U.S. Patent No. 9,800,862 B2 for “System and Methods for Visualizing Information.” When developing Varrier, EVL received U.S. Patent No. 8,248,462 B2 for the “Dynamic Parallax Barrier Autostereoscopic Display System” in 2012. EVL and art(n) Laboratory, when working on PHSColograms, received Patent No. 5,113,213 in 1992 and Patent No. RE35029 in 1995 for “Computer-generated autostereography method and apparatus.”

Here are some reflections from EVL interim director Andy Johnson, who has been at EVL for 30 years. These lessons learned have been substantiated throughout this paper.

  • Keep your friends close and your hardware (toys) closer. Having everyone co-located with the equipment they use makes so many things simpler.

  • Flexibility is important. Having a varied, but overlapping, research portfolio helps attract new funding, faculty members, and students, as research topics evolve over time.

  • Use the tools you develop regularly to work out the bugs and keep the tools current.

  • Family is important and EVL is family. Faculty, staff, and students all have desks and access to meeting rooms and a common kitchen area with an espresso machine and tea kettle! Having faculty and students casually talk with one another at different stages in their careers and learn how they are coping with issues gives a strong sense of continuity and a path forward.

  • Embrace evolution! Given the rapid pace with which technology ages and students and staff move on, plan to update or replace computers, display screens, and software on a regular basis, prioritizing needs and available funding. For example, the lifespan of a large display at EVL is typically 10 years before newer technology is available to support the Lab's evolving research focus.

  • Create tools and techniques that help people, in collaboration with the people being helped.

  • Being affiliated with at least one academic department is important. EVL's affiliation with the CS department provides a steady stream of new students and access to other faculty members who may have new problems to solve or the knowledge to help solve new problems.

  • Pursue your passion projects, as they can lead to new research directions. Opportunities come from the strangest projects!

  • The research lab must provide researchers and students with better infrastructure than they have at home, or why expect them to come into the lab to work?

  • Develop high-quality simulators for new hardware so users can code anywhere, and then come into the laboratory to test, especially when the resource, like the CAVE or CAVE2, is a unique system and not easily shared among many users at the same time.

  • Diversity across multiple dimensions is essential. EVL has worked with people of all ages, genders, races, disabilities, and domains, from the arts and sciences to the public, and they all bring interesting perspectives to EVL's research efforts and opportunities to take it in new directions. Computer graphics/visualization appeals to people with a diversity of talents, from math and science to the arts (e.g., art, design, music, dance, sound, theater, film, etc.).

  • Understand the importance of showtime. The tech must work! Use conference demonstrations, museum exhibits, lab tours, and end-of-semester project demonstrations as reasons to harden the hardware and software so it can be used by others. Also, create expectations that these demonstrations should be flawless, or at least seem that way to visitors. Then use these demonstrations to meet new people, get feedback, and discover new problems to work on.

  • Get your work out of the lab and into the hands of actual users (scientists, students, museum goers, conference attendees).

  • Live the dream. If you don't have the future you were promised in the sci-fi movies you watched or books you read as a child, then build that future with your friends!

The world needs immersive, room-scale technologies for data analytics. People are now accustomed to ever-larger flat-screen TVs and wider, high-fidelity desktop monitors to support their work. New display technologies, from projection-based and OLED-wall active stereo to high-definition display walls, can aid researchers collaborating on complex problems, whether studying one large image or visualizing multiple representations of the data with supporting information.

Thirty years of experience working with scientists and their data in a variety of domains has shown that room-scale stereoscopic displays, HMDs, smart phones, laptops, and tablets should all be integrated into larger collaborative spaces, as such spaces are still the only way to truly investigate data in depth. Integrating AR and VR into these spaces can release analysts from being tethered to their laptops and phones, enabling them to privately view their personal data while sharing high-resolution data in multiple formats in the larger collaborative space. Further, integrating these modalities with voice and AI assistance will provide interesting research opportunities for immersive analytics spaces in which the room is an active participant in the exploration. While we wait for wearable technology to improve, immersive visualization rooms outfitted with state-of-the-art tracking let us prototype that future.

Though the original CAVE projects involved one to a few domain scientists, the trend over the years has been toward projects with larger and larger groups, many featuring complementary expertise (Marai et al., 2019). Due to their cost and relative rarity, high-resolution immersive analytics spaces are increasingly used by multiple groups under either a time-share or a space-share model. Under space-share models, modern spaces should seamlessly accommodate large group meetings (30–50 people) and/or multiple, simultaneous, small group meetings (5–10 people). Immersive analytics tasks require the ability to link representations together, to quickly annotate, and to brainstorm as a group, as though standing together at a whiteboard. Under time-share models, users should be able to quickly add new information to the conversation, save the state of a session and bring it back the next day or next year, and copy information from the public displays back to their personal displays.

In addition to building on its current research, EVL wants to aggressively pursue emerging opportunities. Big Data keeps getting bigger, with higher-resolution scientific instrumentation, Internet of Things sensor data (edge computing), drone data, 360-degree high-resolution photography and video, and autonomous vehicle data. R&D globalization keeps making the world smaller, so techniques to capture, store, move, process, and analyze information continue to grow in importance. EVL sees many opportunities and is delighted that recent CS faculty hires with knowledge of visualization, computer vision, high-performance computing, and smart cities wish to be affiliated with EVL.

UIC's CS department will move into a new state-of-the-art building, to open in 2024. EVL recently received NSF funding to design and build DOCC (Data Observation and Computation Collaboratory), to be readily accessible to all CS faculty and students. Also, EVL's Continuum room will be replicated in the new building and used as a classroom for a variety of undergraduate and graduate courses.

UIC is one of the nation's most diverse public research universities and is a federally designated Minority-Serving Institution. And more students than ever before are enrolling in CS. Increased diversity brings new opportunities to apply lessons learned over the last 50 years to new fields of interest, with new ideas coming from these new students. On the flip side, more students mean larger class sizes, making it challenging to appropriately scale spaces like Continuum and software like SAGE. Of course, more students mean more faculty hires, presenting new opportunities for collaboration.

Fifty years of the Circle Graphics Habitat and EVL is quite a success story. Onward to the next 50 years!

Over the decades, EVL has received major funding from federal agencies (NSF, DOE, DOD, NIH, NASA, DOT, NEA), industry, and nonprofits (museums and foundations). EVL has also received major funding from Argonne National Laboratory, the State of Illinois, UIC, and Pacific Interface on behalf of NTT Network Innovation Laboratories in Japan. Any opinions, findings, and conclusions or recommendations expressed in this publication are those of the authors and do not necessarily reflect the views of the funding agencies and companies.

SAGE3 and DOCC, mentioned in this paper, currently receive major NSF funding, notably award 2320261 for DOCC, and awards 2004014, 2003800, and 2003387 for SAGE3.

SAGE, SAGE2, SAGE3, CAVE, CAVE2, ImmersaDesk, and ImmersaDesk2 are trademarks of the University of Illinois Board of Trustees.

PHSCologram is a trademark of (art)n Laboratory, Chicago, Illinois.

The copyright for all photos and diagrams in this paper is held by the Electronic Visualization Laboratory, University of Illinois Chicago.

Ainsworth
,
R. A.
,
Sandin
,
D. J.
,
Schulze
,
J. P.
,
Prudhomme
,
A.
,
DeFanti
,
T. A.
, &
Srinivasan
,
M.
(
2011
).
Acquisition of stereo panoramas for display in VR environments
.
Proceedings SPIE 7864, Three-Dimensional Imaging, Interaction, and Measurement
,
786416
.
Ainsworth
,
R. A.
,
DeFanti
,
T. A.
,
Kuester
,
F.
,
McFarland
,
C.
,
Saad
,
A.
,
Sandin
,
D. J.
, &
Wickham
,
G.
(
2016
).
CAVEcam virtual reality photography collection
.
UC San Diego Library Digital Collections
.
art(n)
&
Sandor
,
E.
(
2022a
).
What is a Phscologram (Skol-o-gram)?
https://www.artn.com/phscolograms
art(n)
&
Sandor
,
E.
(
2022b
).
Exhibitions
. https://www.artn.com/exhibitions
Bassiony
,
M. A.
,
Vesper
,
B. J.
,
Mateevitsi
,
V. A.
,
Elseth
,
K. M.
,
Colvard
,
M. D.
,
Garcia
,
K. D.
,
Leigh
,
J.
, &
Radosevich
,
J. A.
(
2014
).
Immunohistochemical evaluation of bleeding control induced by Holmium laser and Biolase dental laser as coagulating devices of incisional wounds
.
Proceedings of the UIC College of Dentistry Clinic and Research Day
. https://www.evl.uic.edu/pubs/1636
Belcaid
M.
,
Leigh
,
J.
,
Theriot
,
R.
,
Kirshenbaum
,
N.
,
Tabalba
,
R.
,
Rogers
,
M.
,
Johnson
,
A.
,
Brown
,
M.
,
Renambot
,
L.
,
Long
,
L.
,
Nishimoto
,
A.
,
North
,
C.
, &
Harden
,
J.
(
2023
).
Reflecting on the scalable adaptive graphics environment team's 20-year translational research endeavor in digital collaboration tools
.
IEEE Computing in Science & Engineering
,
25
(
2
),
50
56
.
BioMedVis
. (
2022
).
Summer school program
.
Bio+Med+Vis
. https://biomedvis.github.io/2022/announcements/program/
Borrelli
,
C.
(
May 23,
2017
).
Blueprints for “Star Wars” Death Star were created at UIC
.
Chicago Tribune
. https://www.chicagotribune.com/entertainment/movies/ct-star-wars-evl-lab-ent-0524-20170523-column.html
Bravo
,
E. G.
,
Burbano
,
A.
,
Byrd
,
V.
, &
Forbes
,
A.
(
2017
).
The Interactive Image: A media archaeology approach
.
ACM SIGGRAPH Art Gallery
, pp.
368
375
.
Brown
,
M
. (Ed.). (
1998
).
SC’98 iGrid: Empowering global research community networking
.
STAR TAP
. https://startap.net/igrid98/
Brown
,
M
. (Ed.). (
2000
).
iGrid 2000 empowering global research community networking
.
STAR TAP
. https://startap.net/igrid2000/
Brown
,
M
. (Ed.). (
2002
).
iGrid 2002 The International Virtual Laboratory
.
STAR TAP
. https://startap.net/igrid2002/
Brown
,
M
. (Ed.). (
2005
).
iGrid 2005 Global Lambda Integrated Facility
.
STAR TAP
. https://startap.net/igrid2005/program/
Brown
,
M.
,
DeFanti
,
T.
,
McRobbie
,
M.
,
Verlo
,
A.
,
Plepys
,
D.
,
McMullen
,
D.
,
Adams
,
K.
,
Leigh
,
J.
,
Johnson
,
A.
,
Foster
,
I.
,
Kesselman
,
C.
,
Schmidt
,
A.
, &
Goldstein
,
S.
(
1999
).
The International Grid (iGrid): Empowering global research community networking using high performance international internet services
.
Proceedings of INET
, pp.
3
9
. https://www.evl.uic.edu/documents/igrid99.pdf
Brown
,
M.
,
Renambot
,
L.
,
Long
,
L.
,
Bargo
,
T.
, &
Johnson
,
A.
(
2019
).
COMPaaS DLV: Composable infrastructure for deep learning in an academic research environment
.
MERIT (Midscale Education and Research Infrastructure and Tools) Community Event Workshop, 27th IEEE International Conference on Network Protocols
.
Cavallo
,
M.
,
Rhodes
,
G. A.
, &
Forbes
,
A. G.
(
2016
).
Riverwalk: Incorporating historical photographs in public outdoor augmented reality experiences
.
IEEE International Symposium on Mixed and Augmented Reality (ISMAR-Adjunct)
, pp.
160
165
.
Chau
,
D.
,
McGinnis
,
B.
,
Talandis
,
J.
,
Leigh
,
J.
,
Peterka
,
T.
,
Knoll
,
A.
,
Sumer
,
A.
,
Papka
,
M.
, &
Jellinek
,
J.
(
2012
).
A simultaneous 2D/3D autostereo workstation
.
Proceedings SPIE 8288, Stereoscopic Displays and Applications XXIII, 82882N
.
Chen
,
Y.-C.
(
2011
).
Immersive empathic design for interdisciplinary collaborations
.
Doctoral dissertation,
University of Illinois Chicago
. https://www.evl.uic.edu/pubs/1603
Chen
,
Z.
,
Renambot
,
L.
,
Long
,
L.
,
Brown
,
M.
, &
Johnson
,
A. E.
(
2022
).
Moving from composable to programmable
.
First Workshop on Composable Systems (COMPSYS ’22), Held in Conjunction with the 36th IEEE International Parallel & Distributed Processing Symposium Workshops (IPDPSW)
, pp.
1215
1220
.
Covi
,
L. M.
,
Olson
,
J. S.
,
Rocco
,
E.
,
Miller
,
W. J.
, &
Allie
,
P.
(
1998
).
A room of your own: What do we learn about support of teamwork from assessing teams in dedicated project rooms?
Cooperative Buildings: Integrating Information, Organization, and Architecture. CoBuild 1998
.
Lecture Notes in Computer Science, Vol. 1370
, (pp.
53
65
).
Cruz-Neira
,
C.
,
Sandin
,
D.
,
DeFanti
,
T.
,
Kenyon
,
R.
, &
Hart
,
J.
(
1992
).
The CAVE audio visual experience automatic virtual environment
.
Communications of the ACM
,
35
(
6
),
65
72
.
Cruz-Neira
,
C.
,
Sandin
,
D.
, &
DeFanti
,
T.
(
1993
).
Surround-screen projection-based virtual reality: The design and implementation of the CAVE
.
Proceedings of SIGGRAPH Computer Graphics Conference
, pp.
35
142
.
Cruz-Neira
,
C.
,
Leigh
,
J.
,
Papka
,
M.
,
Barnes
,
C.
,
Cohen
,
S. M.
,
Das
,
S.
,
Engelmann
,
R.
,
Hudson
,
R.
,
Roy
,
T.
,
Siegel
,
L.
,
Vasilakis
,
C.
,
DeFanti
,
T. A.
, &
Sandin
,
D. J
. (
1993
).
Scientists in wonderland: A report on visualization applications in the CAVE virtual reality environment
.
Proceedings of IEEE Research Properties in Virtual Reality Symposium
, pp.
59
66
.
Czernuszenko
,
M.
,
Pape
,
D.
,
Sandin
,
D.
,
DeFanti
,
T.
,
Dawe
,
G.
, &
Brown
,
M.
(
1997
).
The ImmersaDesk and Infinity Wall projection-based virtual reality displays
.
ACM SIGGRAPH Computer Graphics
,
31
(
2
),
46
49
.
Davis
,
B.
(
2004
).
Affordable system for viewing spatial data in stereo
.
ArcUser, July–September 2004
,
48
49
. https://www.esri.com/news/arcuser/0704/files/geowall.pdf
DeFanti
,
T. A.
(
1976
).
The digital component of the circle graphics habitat
.
AFIPS '76: Proceedings of the National Computer Conference and Exposition
, pp.
195
203
.
DeFanti
,
T. A.
,
Sandin
,
D. J.
, &
Nelson
,
T. H.
(
1975
).
Computer graphics as a way of life
.
Computers & Graphics
,
1
(
1
),
9
15
.
DeFanti
,
T. A.
,
Fenton
,
J.
, &
Donato
,
N.
(
1978
).
BASIC Zgrass—A sophisticated graphics language for the Bally Home Library Computer
.
Proceedings of the 5th Annual Conference on Computer Graphics and Interactive Techniques, ACM SIGGRAPH
,
12
(
3
),
33
37
.
DeFanti
,
T. A.
,
Sandin
,
D.
, &
Cruz-Neira
,
C.
(
1993
).
A “room” with a “view
.”
IEEE Spectrum
,
30
(
10
),
30
33
.
DeFanti
,
T. A.
,
Foster
,
I.
,
Papka
,
M. E.
,
Stevens
,
R.
, &
Kuhfuss
,
T.
(
1996
).
Overview of the I-Way: Wide-area visual supercomputing
.
The International Journal of High Performance Computing Applications
,
10
(
2–3
),
123
131
.
DeFanti
,
T.
,
de Laat
,
C.
,
Mambretti
,
J.
,
Neggers
,
K.
, &
St. Arnaud
,
B.
(
2003
).
TransLight: A global-scale LambdaGrid for e-science
.
Communications of the ACM
,
46
(
11
),
34
41
.
DeFanti
,
T. A.
,
Dawe
,
G.
,
Sandin
,
D. J.
,
Schulze
,
J. P.
,
Otto
,
P.
,
Girado
,
J.
,
Kuester
,
F.
,
Smarr
,
L.
, &
Rao
,
R.
(
2009
).
The StarCAVE, a third-generation CAVE and virtual reality OptIPortal
.
Future Generation Computer Systems
,
25
(
2
),
169
178
.
DeFanti
,
T.
,
Acevedo
,
D.
,
Ainsworth
,
R.
,
Brown
,
M. D.
,
Cutchin
,
S.
,
Dawe
,
G.
,
Doerr
,
K.
,
Johnson
,
A. E.
,
Knox
,
C.
,
Kooima
,
R.
,
Kuester
,
F.
,
Leigh
,
J.
,
Long
,
L
,
Otto
,
P.
,
Petrovic
,
V.
,
Ponto
,
K.
,
Prudhomme
,
A.
,
Rao
,
R.
,
Renambot
,
L.
,
Sandin
,
D.
,
Schulze
,
J.
,
Smarr
,
L.
,
Srinivasan
,
M.
,
Weber
,
P.
, &
Wickham
,
G.
(
2011
).
The future of the CAVE
.
Central European Journal of Engineering
,
1
(
1
),
16
37
. https://www.evl.uic.edu/pubs/1591
EVL
. (
1973
).
Sandin Image Processor
. EVL University of Illinois Chicago. https://www.evl.uic.edu/research/1934
EVL
. (
1974
).
GRASS (GRAphics Symbiosis System)
. EVL University of Illinois Chicago. https://www.evl.uic.edu/research/1935
EVL
. (
1991
).
CAVE™ Library
. EVL University of Illinois Chicago. https://www.evl.uic.edu/research/1778
EVL
. (
1992
).
The Virtual Director
. EVL University of Illinois Chicago. https://www.evl.uic.edu/research/1706
EVL
. (
1993
).
The CAVE® Simulator: Software simulation for VR development [Video]
. YouTube. https://youtu.be/J4PVMJREnDY?si=izKO_KROIuAlKTja
EVL
. (
1996
).
ImmersaDesk2™
. EVL University of Illinois Chicago. https://www.evl.uic.edu/research/1771
EVL
. (
1998a
).
The International Grid (iGrid) 1998
. EVL University of Illinois Chicago. https://www.evl.uic.edu/events/335
EVL
. (
1998b
).
XP: An authoring system for immersive art exhibitions
. EVL University of Illinois Chicago. https://www.evl.uic.edu/pubs/1226
EVL
. (
2000
).
AccessBot: An enabling technology for telepresence
. EVL University of Illinois Chicago. https://www.evl.uic.edu/pubs/1305
EVL
. (
2002
).
GeoWall 2
. EVL University of Illinois Chicago. https://www.evl.uic.edu/research/1863
EVL
. (
2004a
).
VROOM – The Virtual Reality Room: Exhibition at SIGGRAPH
. EVL University of Illinois Chicago. https://www.evl.uic.edu/events/548
EVL
. (
2004b
).
LambdaVision
. EVL University of Illinois Chicago. https://www.evl.uic.edu/research/1874
EVL
. (
2004c
).
CAVEwave™ end-to-end 10 Gbps wavelength inaugurates National LambdaRail
. EVL University of Illinois Chicago. https://www.evl.uic.edu/events/580
EVL
. (
2004d
).
CoreWall
. EVL University of Illinois Chicago. https://www.evl.uic.edu/research/1883
EVL
. (
2004e
).
Electro
. EVL University of Illinois Chicago. https://www.evl.uic.edu/research/1897
EVL
. (
2006
).
Ygdrasil
. EVL University of Illinois Chicago. https://www.evl.uic.edu/yg/
EVL
. (
2007a
).
Virtual pelvic floor [Video]
.
YouTube
. https://youtu.be/Ak6l7OC0SpY
EVL
. (
2007b
).
Virtual prosthesis design [Video]
.
YouTube
. https://youtu.be/cr4u69r4kn8
EVL
. (
2008a
).
SIGGRAPH Showcase 1992 and CAVE documentation—Part 1 [Video]
.
YouTube
. https://youtu.be/NI6DsamNsLM?si=sAwXbs6xE1da5TcN
EVL
. (
2008b
).
SIGGRAPH Showcase 1992 and CAVE documentation—Part 2 [Video]
.
YouTube
. https://youtu.be/S8pBnuB5rys?si=XWtzNWiC0inUcN5e
EVL
. (
2008c
).
SC08 Bandwidth Challenge (BWC) showcases global visualcasting
. EVL University of Illinois Chicago. https://www.evl.uic.edu/events/860
EVL
. (
2011a
).
Particle dreams in spherical harmonics
. EVL University of Illinois Chicago. https://www.evl.uic.edu/art/2039
EVL
. (
2011b
).
Immersive empathic design for interdisciplinary collaborations
. EVL University of Illinois Chicago. https://www.evl.uic.edu/events/1014
EVL
. (
2013a
).
Interactive image [Video]
.
YouTube
. https://youtu.be/OQzaMbq71pM
EVL
. (
2013b
).
SIMPLE project (Sub-Ice Investigation of Marine and Planetary-Analog Ecosystems)
. EVL University of Illinois Chicago. https://www.evl.uic.edu/research/2329
EVL
. (
2014
).
EVL's Alan Turing avatar educates at Orlando Science Center
. EVL University of Illinois Chicago. https://www.evl.uic.edu/events/1174
EVL
. (
2019a
).
Spiral PTL (Perhaps the Last) [Video]
.
YouTube
. https://youtu.be/Em3zVP1DAig
EVL
. (
2019b
).
Chicago Design Milestones 1920–2019 installation opens July 1, 2019
. EVL University of Illinois Chicago. https://www.evl.uic.edu/events/2449
EVL
. (
2022a
).
SIGGRAPH 92 Showcase [Video]
.
YouTube
. https://www.youtube.com/watch?v=2fsaIT-HUdc
EVL
. (
2022b
).
Supercomputing 1995 (SC 95) I-WAY [Video]
.
YouTube
. https://www.youtube.com/watch?v=9ttKeNJH8aM
Febretti
,
A.
,
Mateevitsi
,
V. A.
,
Chau
,
D.
,
Nishimoto
,
A.
,
McGinnis
,
B.
,
Misterka
,
J.
,
Johnson
,
A.
, &
Leigh
,
J.
(
2011
).
The OmegaDesk: Towards a hybrid 2D and 3D work desk
.
Advances in Visual Computing. ISVC 2011. Lecture Notes in Computer Science, Vol. 6939
.
Febretti
,
A.
,
Nishimoto
,
A.
,
Thigpen
,
T.
,
Talandis
,
J.
,
Long
,
L.
,
Pirtle
,
J. D.
,
Peterka
,
T.
,
Verlo
,
A.
,
Brown
,
M.
,
Plepys
,
D.
,
Sandin
,
D.
,
Renambot
,
L.
,
Johnson
,
A.
, &
Leigh
,
J
. (
2013
).
CAVE2: A hybrid reality environment for immersive simulation and information analysis
. In
M.
Dolinsky
&
I. E.
McDowall
(Eds.)
,
Proceedings of SPIE 8649, The Engineering Reality of Virtual Reality
,
864903
.
Febretti
,
A.
,
Nishimoto
,
A.
,
Mateevitsi
,
V.
,
Renambot
,
L.
,
Johnson
,
A.
, &
Leigh
,
J.
(
2014
).
Omegalib: A multi-view application framework for hybrid reality display environments
.
IEEE Virtual Reality (VR)
, pp.
9
14
.
Floricel
,
C.
,
Nipu
,
N.
,
Biggs
,
M.
,
Wentzel
,
A.
,
Canahuate
,
G.
,
Van Dijk
,
L.
,
Mohamed
,
A.
,
Fuller
,
C. D.
, &
Marai
,
G. E.
(
2022
).
THALIS: Human-machine analysis of longitudinal symptoms in cancer therapy
.
IEEE Transactions on Visualization and Computer Graphics
,
28
(
1
),
151
161
.
Fox, T.
(
2013
).
New WAVE display technology rises at UCSD
.
UC San Diego Today
. https://today.ucsd.edu/story/new_wave_display_technology_rises_at_ucsd
GLIF
. (
2011
).
GLIF: Connecting research worldwide with lightpaths
. Global Lambda Integrated Facility. https://www.glif.is/publications/info/brochure.pdf
GLIF
. (
2019
).
Global Lambda Integrated Facility
. https://www.glif.is/
Gonzalez
,
A. J.
,
Leigh
,
J.
,
DeMara
,
R. F.
,
Johnson
,
A. E.
,
Jones
,
S.
,
Lee
,
S.
,
Hung
,
V.
,
Renambot
,
L.
,
Leon-Barth
,
C.
,
Brown
,
M.
,
Elvir
,
M.
,
Hollister
,
J.
, &
Kobosko
,
S.
(
2013
).
Passing an enhanced Turing test—Interacting with lifelike computer representations of specific individuals
.
Journal of Intelligent Systems
,
22
(
4
),
365
415
.
GRP
. (
2019
).
The Global Research Platform
. https://www.theglobalresearchplatform.net/
Hanula
,
P.
,
Piekutowski
,
K.
,
Uribe
,
C.
,
Almryde
,
K.
,
Nishimoto
,
A.
,
Aguilera
,
J.
, &
Marai
,
G. E.
(
2015
).
Cavern halos: Exploring spatial and nonspatial cosmological data in an immersive virtual environment
.
IEEE Scientific Visualization Conference (SciVis)
, pp.
87
99
.
Harden
,
J.
,
Christman
,
E.
,
Kirshenbaum
,
N.
,
Wenskovitch
,
J.
,
Leigh
J.
, &
North
,
C.
(
2022
).
Exploring organization of computational notebook cells in 2D space
.
IEEE Symposium on Visual Languages and Human-Centric Computing (VL/HCC)
, pp.
1
6
.
Hummingbird. (2022). Hummingbird: Collaborative virtual reality for tele-immersive interactive theatre. Hummingbird VR. https://hummingbirdvr.com/
Imai, T., Johnson, A. E., Leigh, J., Pape, D. E., & DeFanti, T. A. (1999). The virtual mail system. Proceedings IEEE Virtual Reality (Cat. No. 99CB36316), 78.
Ismael, K. E. (2019). From star to sun: The Qualcomm Institute's CAVE expands. UC San Diego News Center. https://library.ucsd.edu/dc/object/bb0622942f/_1.pdf
Jeong, B., Leigh, J., Johnson, A., Renambot, L., Brown, M., Jagodic, R., Nam, S., & Hur, H. (2010). Ultrascale collaborative visualization using a display-rich global cyberinfrastructure. IEEE Computer Graphics and Applications, 30(3), 71–83.
Johnson, A., & Leigh, J. (2001). Tele-immersive collaboration in the CAVE research network. In E. F. Churchill, D. N. Snowdon, & A. J. Munro (Eds.), Collaborative virtual environments. Computer Supported Cooperative Work (pp. 225–243). Springer.
Johnson, A., Leigh, J., Morin, P., & Van Keken, P. (2006). GeoWall: Stereoscopic visualization for geoscience research and education. IEEE Computer Graphics and Applications, 26(6), 10–14.
Kahler, E., Hur, H., Reda, K., Kooima, R., Chau, D., Jagodic, R., Renambot, L., Johnson, A., & Leigh, J. (2009). Design challenges in creating a high resolution LCD-based multi-touch display. Technical paper. University of Illinois, Electronic Visualization Laboratory. https://www.evl.uic.edu/pubs/1579
Kirshenbaum, N., Kobayashi, D., Belcaid, M., Leigh, J., Renambot, L., Burks, A., Bharadwaj, K., Long, L., Brown, M., Haga, J., & North, C. (2020). The Smart Amplified Group Environment. 4th Workshop on Immersive Analytics: Envisioning Future Productivity for Immersive Analytics, CHI. https://drive.google.com/file/d/1VFlTToKKDuUz79BlWSHt7VnxQ-y-5AcF/view
Kirshenbaum, N., Davidson, K., Harden, J., North, C., Kobayashi, D., Theriot, R., Tabalba, R. S., Rogers, M. L., Belcaid, M., Burks, A. T., Bharadwaj, K. N., Renambot, L., Johnson, A. E., Long, L., & Leigh, J. (2021). Traces of time through space: Advantages of creating complex canvases in collaborative meetings. Proceedings of the ACM on Human-Computer Interaction, 5(ISS), Article 502, 1–20.
Krumbholz, C., Leigh, J., Johnson, A., Renambot, L., & Kooima, R. (2005). Lambda Table: High resolution tiled display table for interacting with large visualizations. Workshop for Advanced Collaborative Environments. https://www.evl.uic.edu/documents/lambdatable_krumbholz.pdf
Kubernetes. (2023). Kubernetes Overview. Kubernetes. https://kubernetes.io/docs/concepts/overview/
Lascara, C., Wheless, G., Cox, D., Patterson, R., Levy, S., Johnson, A., & Leigh, J. (1999). Tele-immersive virtual environments for collaborative knowledge discovery. Proceedings of the Advanced Simulation Technologies Conference. https://www.evl.uic.edu/documents/knowledge.pdf
Lehner, V. D., & DeFanti, T. A. (1997). Distributed virtual reality: Supporting remote collaboration in vehicle design. IEEE Computer Graphics and Applications, 17(2), 13–17.
Leigh, J. (1998). Exploring CAVERNsoft Tele-Immersive Collaboratories through the iGrid Portal. STAR TAP. https://startap.net/igrid98/explorCavernSoft98.html
Leigh, J., & Brown, M. D. (2008). Cyber-commons: Merging real and virtual worlds. Communications of the ACM, 51(1), 82–85.
Leigh, J., Johnson, A., & DeFanti, T. (1997). CAVERN: A distributed architecture for supporting scalable persistence and interoperability in collaborative virtual environments. Virtual Reality: Research, Development and Applications, 2(2), 217–237. https://www.evl.uic.edu/aej/papers/CAVERN99vrs.pdf
Leigh, J., Johnson, A., Renambot, L., Peterka, T., Jeong, B., Sandin, D. J., Talandis, J., Jagodic, R., Nam, S., Hur, H., & Sun, Y. (2013). Scalable resolution display walls. Proceedings of the IEEE, 101(1), 115–129.
Leigh, J., Kobayashi, D., Kirshenbaum, N., Wooton, T., Gonzalez, A., Renambot, L., Johnson, A., Brown, M., Burks, A., Bharadwaj, K., Nishimoto, A., Long, L., Haga, J., Burns, J., Cristobal, F., McLean, J., Pelayo, R., & Belcaid, M. (2019). Usage patterns of wideband display environments in e-science research, development and training. 15th International Conference on eScience, pp. 301–310.
Long, L., Bargo, T., Renambot, L., Brown, M., & Johnson, A. E. (2022). Composable infrastructures for an academic research environment: Lessons learned. First Workshop on Composable Systems, 36th IEEE International Parallel & Distributed Processing Symposium Workshops, pp. 1209–1214.
Lowe, S. D. (2016). Composable Infrastructure for Dummies, HPE Special Edition. John Wiley & Sons, Inc. https://www.amazon.com/Composable-Infrastructure-HPE-Special-Custom/dp/1119288541
Luciani, T., Burks, A., Sugiyama, C., Komperda, J., & Marai, G. E. (2019). Details-first, show context, overview last: Supporting exploration of viscous fingers in large-scale ensemble simulations. IEEE Transactions on Visualization and Computer Graphics, 25(1), 1225–1235.
Luman, S. (2011). Future world. UIC Alumni Magazine, Winter 2011, 24–30. https://www.evl.uic.edu/documents/2011-uic-alumni_futureworld_winter2011-2.pdf
Marai, G. E. (2018). Activity-centered domain characterization for problem-driven scientific visualization. IEEE Transactions on Visualization and Computer Graphics, 24(1), 913–922.
Marai, G. E., Forbes, A. G., & Johnson, A. (2016). Interdisciplinary immersive analytics at the Electronic Visualization Laboratory: Lessons learned and upcoming challenges. Workshop on Immersive Analytics, pp. 54–59.
Marai, G. E., Leigh, J., & Johnson, A. (2019). Immersive analytics lessons from the Electronic Visualization Laboratory: A 25-year perspective. IEEE Computer Graphics and Applications, 39(3), 54–66.
Marrinan, T., Aurisano, J., Nishimoto, A., Bharadwaj, K., Mateevitsi, V., Renambot, L., Long, L., Johnson, A., & Leigh, J. (2014). SAGE2: A new approach for data intensive collaboration using scalable resolution shared displays. 10th IEEE International Conference on Collaborative Computing: Networking, Applications and Worksharing, pp. 177–186.
Mason, J., Czernuszenko, M., Plepys, D., & DeFanti, T. (1994). CAVEview: Mosaic-based virtual reality. Proceedings of the Second World Wide Web Conference: Mosaic and the Web. https://www.evl.uic.edu/pubs/1281
McCormick, B., DeFanti, T., & Brown, M. (Eds.). (1987). Visualization in scientific computing. ACM SIGGRAPH Newsletter, 21(6). https://www.evl.uic.edu/documents/visc-1987.pdf
Meyer, D., Sandin, D., McFarland, C., Lo, E., Dawe, G., Wang, H., Dai, J., Brown, M., Nguyen, T., Baker, H., Kuester, F., & DeFanti, T. (2019). StarCAM—A 16K stereo panoramic video camera with a novel parallel interleaved arrangement of sensors. Stereoscopic Displays and Applications XXX, IS&T International Symposium on Electronic Imaging, pp. 646-1–646-8.
Meyers, S. B., Sandin, D. J., Cunnally, T., Sandor, E. R., & DeFanti, T. A. (1990). New advances in computer-generated barrier-strip autostereography. Proceedings of SPIE 1256, Stereoscopic Displays and Applications.
MREN. (2015). MREN: Metropolitan Research & Education Network. Metropolitan Research & Education Network. http://mren.org/
Naik, A., HuynhNguyen, H., Jones, S., Patton, J., & Kenyon, R. (2019). Virtual slots game for rehabilitation exercises. Proceedings of the IEEE Games, Entertainment, Media Conference (GEM). https://ieeexplore.ieee.org/document/8811546
Nishimoto, A. (2020). Omicron. GitHub. https://github.com/uic-evl/omicron
NOVA. (2011). Can we live forever? Profile: Jason Leigh [Video]. NOVA scienceNOW. https://www.pbs.org/wgbh/nova/video/profile-jason-leigh/
NSF. (2003). NSF and high-performance networking infrastructure. National Science Foundation. https://www.nsf.gov/news/news_summ.jsp?cntn_id=103049
Olson, J., Covi, L., Rocco, E., Miller, W., & Allie, P. (1998). A room of your own: What would it take to help remote groups work as well as collocated groups? CHI Conference Summary on Human Factors in Computing Systems, pp. 279–280.
Pape, D. (1996). A hardware-independent virtual reality development system. IEEE Computer Graphics and Applications, 16(4), 44–47.
Pape, D., Imai, T., Anstey, J., Roussou, M., & DeFanti, T. (1998). XP: An authoring system for immersive art exhibitions. Proceedings of Fourth International Conference on Virtual Systems and Multimedia. https://www.evl.uic.edu/documents/xp.pdf
Park, K., Leigh, J., Johnson, A., Carter, B., Brody, J., & Sosnoski, J. (2001). Distance learning classroom using Virtual Harlem. Proceedings of the Seventh International Conference on Virtual Systems and Multimedia. https://ieeexplore.ieee.org/document/969704
Peterka, T. (2007). Dynallax: Dynamic parallax barrier autostereoscopic display. Doctoral dissertation, University of Illinois Chicago.
Peterka, T., Kooima, R., Girado, J., Ge, J., Sandin, D., & DeFanti, T. (2007a). Evolution of the Varrier autostereoscopic VR display: 2001–2007. Proceedings of SPIE 6490, Stereoscopic Displays and Virtual Reality Systems XIV, 649004.
Peterka, T., Kooima, R. L., Girado, J. I., Ge, J., Sandin, D. J., Johnson, A., Leigh, J., Schulze, J., & DeFanti, T. A. (2007b). Dynallax: Solid state dynamic parallax barrier autostereoscopic VR display. IEEE Virtual Reality Conference, pp. 155–162.
Pieper, G., DeFanti, T., Liu, Q., Katz, M., Papadopoulos, P., Keefe, J., Hidley, G., Dawe, G., Kaufman, I., Glogowski, B., Doerr, K.-U., Schulze, J. P., Kuester, F., Otto, P., Rao, R., Smarr, L., Leigh, J., Renambot, L., Verlo, A., Long, L., Brown, M., Sandin, D., Vishwanath, V., Kooima, R., Girado, J., & Jeong, B. (2009). Visualizing science: The OptIPuter project. SciDAC Review, 12, 32–41. https://web.eng.ucsd.edu/~jschulze/publications/Pieper2009.pdf
Planar. (2012). Project: University of Illinois (case study). Planar Video Wall Provides Powerful 3D Visualization Solution. Planar.
Poet, A. (2022). Larger than life: CS faculty participate in public art project in downtown Chicago. UIC Computer Science/College of Engineering. https://cs.uic.edu/news-stories/larger-than-life-cs-faculty-participate-in-public-art-project-in-downtown-chicago/
Preim, B., Raidou, R., Smit, N., & Lawonn, K. (2023). Visualization, visual analytics and virtual reality in medicine: State-of-the-art techniques and applications (1st ed.). Elsevier. https://shop.elsevier.com/books/visualization-visual-analytics-and-virtual-reality-in-medicine/preim/978-0-12-822962-0
Ramsey, D. (2008). 3D virtual reality environment developed at UC San Diego helps scientists innovate. UC San Diego Archives. https://library.ucsd.edu/dc/object/bb2770922h/_1.pdf
Rasmussen, M., Mason, T. P., Millman, A., Evenhouse, R., & Sandin, D. (1998). The virtual temporal bone, a tele-immersive educational environment. Future Generation Computer Systems, 14(1–2), 125–130.
Reda, K., Febretti, A., Knoll, A., Aurisano, J., Leigh, J., Johnson, A., Papka, M., & Hereld, M. (2013). Visualizing large, heterogeneous data in hybrid-reality environments. IEEE Computer Graphics and Applications, 33(4), 38–48.
Renambot, L., Jeong, B., Hur, H., Johnson, A., & Leigh, J. (2009). Enabling high resolution collaborative visualization in display rich virtual organizations. Future Generation Computer Systems, 25(2), 161–168.
Renambot, L., Marrinan, T., Aurisano, J., Nishimoto, A., Mateevitsi, V., Bharadwaj, K., Long, L., Johnson, A., Brown, M., & Leigh, J. (2016). SAGE2: A collaboration portal for scalable resolution displays. Future Generation Computer Systems, 54, 296–305.
Sandin, D. J., Sandor, E., Cunnally, W. T., Resch, M., DeFanti, T. A., & Brown, M. D. (1989). Computer-generated barrier-strip autostereography. Proceedings of SPIE 1083, Three-Dimensional Visualization and Display Technologies.
Sandin, D., Margolis, T., Ge, J., Girado, J., Peterka, T., & DeFanti, T. (2005). The Varrier™ autostereoscopic virtual reality display. ACM Transactions on Graphics, 24(3), 894–903.
Scharver, C., Evenhouse, R., Johnson, A., & Leigh, J. (2004). Designing cranial implants in a haptic augmented reality environment. Communications of the ACM, 47(8), 32–38.
Shimizu, T., Shirai, D., Takahashi, H., Murooka, T., Obana, K., Tonomura, Y., Inoue, T., Yamaguchi, T., Fujii, T., Ohta, N., Ono, S., Aoyama, T., Herr, L., van Osdol, N., Wang, X., Brown, M. D., DeFanti, T. A., Feld, R., Balser, J., Morris, S., Henthorn, T., Dawe, G., Otto, P., & Smarr, L. (2006). International real-time streaming of 4K digital cinema. Future Generation Computer Systems, 22(8), 929–939.
Smarr, L. L., Chien, A. A., DeFanti, T., Leigh, J., & Papadopoulos, P. M. (2003). The OptIPuter. Communications of the ACM, 46(11), 58–67.
Smarr, L., Brown, M., & de Laat, C. (2009). Special section: OptIPlanet—The OptIPuter global collaboratory. Future Generation Computer Systems, 25(2), 109–113.
Smarr, L., Herr, L., DeFanti, T., Ohta, N., & Otto, P. (2007). CineGrid: A new cyberinfrastructure for high resolution media streaming. CTWatch Quarterly, 3(2). https://web.archive.org/web/20110723184101/http://www.ctwatch.org/quarterly/articles/2007/05/cinegrid/index.html
Smith, R., Pawlicki, R., Leigh, J., & Brown, D. (2000). Collaborative VisualEyes. Proceedings of Immersive Projection Technology Workshop. https://www.researchgate.net/publication/235916059_Collaborative_VisualEyes
STAR TAP. (1997). STAR TAP: An International Proving Ground for High Performance Networking. STAR TAP. https://startap.net/startap/
STAR TAP. (1999). Euro-Link: U.S. High-Performance Internet Link to Europe. STAR TAP. https://startap.net/euro-link/
STAR TAP. (2001). StarLight: The Optical STAR TAP. STAR TAP. https://startap.net/starlight/
STAR TAP. (2004). TransLight/StarLight. STAR TAP. https://startap.net/translight/
STAR TAP. (2015). StarLight SDX: StarLight software defined networking exchange for international data-intensive science research and education. STAR TAP. https://starlightsdx.startap.net/
Sturman, D., & Zeltzer, D. (1994). A survey of glove-based input. IEEE Computer Graphics and Applications, 14(1), 30–39.
Svistula, D. N. (2008). Rain Table: Scalable architecture for group-oriented visualization of real-time geoscience phenomena. Master's thesis, University of Illinois Chicago. https://www.evl.uic.edu/pubs/1523
Taylor, E. (2007, August 17). Access grid connects collaborators, earns R&D 100 Award. Argonne National Laboratory. https://www.anl.gov/article/access-grid-connects-collaborators-earns-rd-100-award
Teasley, S., Covi, L., Krishnan, M., & Olson, J. (2000). How does radical collocation help a team succeed? Proceedings of the ACM Conference on Computer Supported Cooperative Work, pp. 339–346.
Thiebaux, M. (1997). Virtual Director: Steering scientific visualization with virtual camera choreography. Master's thesis, EVL, University of Illinois Chicago. https://www.evl.uic.edu/thiebaux/ms/ms_thesis.html
Tsoupikova, D., Stoykov, N. S., Corrigan, M., Thielbar, K., Vick, R., Li, Y., Triandafilou, K., Preuss, F., & Kamper, D. (2015). Virtual immersion for post-stroke hand rehabilitation therapy. Annals of Biomedical Engineering, 43(2), 467–477.
Tsoupikova, D., Rettberg, S., Coover, R., & Nishimoto, A. (2017). Hearts and minds: The interrogations project. Leonardo, 50(5), 513–514.
Tsoupikova, D., Cattell, J., Johnson, A., Long, L., Nishimoto, A., & Jyothula Priya, S. (2022). Hummingbird: A collaborative live theater and virtual reality adventure. ACM SIGGRAPH 2022 Immersive Pavilion, 5, 1–2.
Wang, H. (2019). Depth-constrained feature-based stitching system for stereoscopic panoramic video generation. Doctoral dissertation, University of Illinois Chicago. https://indigo.uic.edu/articles/thesis/Depth-constrained_Feature-based_Stitching_System_for_Stereoscopic_Panoramic_Video_Generation/12481268
Wang, H., Sandin, D. J., & Schonfeld, D. (2018). Saliency-based feature selection strategy in stereoscopic panoramic video generation. IEEE International Conference on Acoustics, Speech and Signal Processing, pp. 1837–1841.
Wentzel, A., Hanula, P., Luciani, T., Elhalawani, H., Elgohari, B., Canahuate, G., Vock, D., Fuller, C. D., & Marai, G. E. (2020). Cohort-based T-SSIM visual computing for radiation therapy prediction and exploration. IEEE Transactions on Visualization and Computer Graphics, 26(1), 949–959.
Wikipedia. (2019, September 30). Mark Bolas. Wikipedia. https://en.wikipedia.org/wiki/Boom_Technology
Wikipedia. (2023, March 27). CineGrid. Wikipedia. https://en.wikipedia.org/wiki/CineGrid
This is an open-access article distributed under the terms of the Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. For a full description of the license, please visit https://creativecommons.org/licenses/by/4.0/legalcode