Abstract
September 2023 marks the 50th anniversary of the Electronic Visualization Laboratory (EVL) at University of Illinois Chicago (UIC). EVL's introduction of the CAVE Automatic Virtual Environment in 1992, the first widely replicated, projection-based, walk-in, virtual-reality (VR) system in the world, put EVL at the forefront of collaborative, immersive data exploration and analytics. However, the journey did not begin then. Since its founding in 1973, EVL has been developing tools and techniques for real-time, interactive visualizations—pillars of VR. But EVL's culture is also relevant to its successes, as it has always been an interdisciplinary lab that fosters teamwork, where each person's expertise contributes to the development of the necessary tools, hardware, system software, applications, and human interface models to solve problems. Over the years, as multidisciplinary collaborations evolved and advanced scientific instruments and data resources were distributed globally, the need to access and share data and visualizations while working with colleagues, local and remote, synchronous and asynchronous, also became important fields of study. This paper is a retrospective of EVL's past 50 years that surveys the many networked, immersive, collaborative visualization and VR systems and applications it developed and deployed, as well as lessons learned and future plans.
1 Introduction to EVL
1.1 The Vision
Journalist Stuart Luman (2011), in an article for UIC Alumni Magazine titled “Future World,” wrote: “A scene in Star Wars. Life-like avatars. Cave Automatic Virtual Environment. They all germinated in UIC's Electronic Visualization Lab. A place where the clock is always 15 years fast. . . . Although not as well known or well-funded as institutions such as MIT's Media Lab, EVL has a long history of radically changing the way people visualize and interact with data and technology.”
Created in 1973 by computer scientist Tom DeFanti and artist and physicist Dan Sandin as a joint program between the Computer Science (CS) department in UIC's College of Engineering and the School of Art & Design, EVL was the first Science + Art program in the country leading to an MFA in Electronic Visualization or an MS or PhD in CS with a specialization in visualization. The joint degree program ended in the mid-2000s when the College of Architecture, Design and the Arts reorganized. However, EVL continued its strong working relationship with the School of Design and, within the past year, UIC became the first public university in the United States to offer a Computer Science + Design (CS + DES) undergraduate degree program leading to a joint BS; the interdisciplinary program is based at EVL.
EVL's fundamental philosophy, still true today, comes from its founders. DeFanti believed in developing advanced visualization hardware and/or software systems for non-computer people, whether artists or astrophysicists. Sandin believed artists should use (or build) the most advanced technologies of their time to create their art and would often point out that Renaissance painters made their own paints. Further, DeFanti and Sandin both believed systems should be user-oriented (easy to use, easy to learn), low-cost, interactive, and real time (to provide immediate feedback). While they recognized the benefits and stunning visual quality of state-of-the-art, high-performance computer-graphics systems coming to market in the 1970s, those systems were expensive and not easily accessible to most people (DeFanti et al., 1975).
Also, given EVL's interdisciplinary collaboration focus, their goal was not to have people in each discipline learn everything about another discipline, but to learn the vocabulary of other disciplines, thereby improving communications so people could work together on joint projects. Non-computer people acquired basic programming skills. Technical people learned aesthetics. And CS and Art students learned how domain scientists approached problem solving. In the 2000s, Jason Leigh and Andy Johnson coined the term immersive empathic design, whereby EVL students got summer internships at domain scientists’ facilities to immerse themselves in scientists’ environments for on-site, hands-on experiences, observe how scientists solved problems, learn to be empathic, and determine how EVL's tools and techniques could design better systems to fit scientists’ needs and expectations (Chen, 2011; EVL, 2011b). For an example, see CoreWall (Section 4.1.2).
Today, EVL continues its focus on real-time interaction and interdisciplinary collaboration, though we use the term visual data science, which we define as real-time, interactive, multi-modal (visual, audio, haptic), networked, collaborative data visualization and immersive environments to support analytical reasoning. EVL continues to use the most advanced tools of its time, form interdisciplinary teams, and create human interface models to tackle today's grand challenges. Because EVL takes a systems approach to problem solving, it is also a technology enabler and disruptor, and has long been advancing high-performance computing and networking cyberinfrastructure.
EVL works with computer scientists, artists, designers, and domain scientists in academia, industry, government labs, and nonprofits (e.g., museums). Its inventions are open source, and some have been commercialized (Section 7.2). EVL educates industry partners on how their products can benefit the academic research market. EVL builds and supports user communities on a global scale. Most importantly, EVL amplifies its core research mission with its educational mission, applied to all workforce dimensions—education, training, advancement, and mentoring—for undergraduate and graduate students, collaborators, and the public.
1.2 Culture and Facilities
EVL is housed in the CS department. EVL's core family consists of CS visualization faculty, long-term faculty collaborators from other departments, and technical and administrative staff; they write grants to support the Lab and teach classes using the Lab's advanced technologies and inventions. However, as an interdisciplinary lab, EVL's extended family consists of faculty and student collaborators from a variety of campus units, who all get keys to the Lab. All participate in weekly Tech Meetings where students give updates on their research projects. Newcomers are welcome, and those who get involved quickly become family members.
Its current location, where EVL has lived since 1991, is a multimillion-dollar facility consisting of 7,000 sq. ft. of contiguous space with large and small rooms plus an additional 3,000 sq. ft. of several large rooms located nearby. Each space has an open floor plan in which faculty, staff, students, VR/visualization systems, computers, display walls, whiteboards, desks, chairs, and tables all coexist, though an internal calendar lets people reserve a machine or a room as needed. The main lab is where new technologies, such as the CAVE and now CAVE2, live, with nearby tables and chairs for developers to sit and test their codes. Continuum, a large room with new and mature technologies, is used for meetings, classes, and art/design performances. A machine room houses high-performance computers and networks, keeping work areas quiet and cool but close by in case a machine needs rebooting. A kitchenette, reception area, office spaces, meeting rooms, rooms with desks for 12–15 graduate students, and rotating spaces for undergraduate students complete the layout.
This free-form layout is deliberate and integral to EVL's culture of fostering interdisciplinary teams, open communications, meaningful collaborations, and immediate access to technology, all while preparing students for real-world experiences. This culture defines EVL as much as its pioneering work and inventions.
1.3 Leadership
September 2023 marked the 50th anniversary of EVL. Co-founder Sandin retired from UIC in 2001 and then began working part time at EVL as a Senior Research Scientist. Co-founder DeFanti retired from UIC in 2004 and became a Research Scientist at UC San Diego's California Institute for Telecommunications and Information Technology (Calit2)-Qualcomm Institute; he continues to collaborate with EVL to this day. Jason Leigh became EVL director in 2004 but left UIC in 2014 to become a faculty member at University of Hawaii at Manoa, where he founded and directs the Laboratory for Advanced Visualization and Applications (LAVA) and also continues to collaborate with EVL. Maxine Brown was hired as an EVL associate director in 1986 and served as EVL director from 2014 until she retired from UIC in 2020; she is now an EVL Senior Research Scientist, hired back part time on grant funding. Andy Johnson is currently EVL interim director.
2 The Early Years: 1973–1989
2.1 Visualization Systems and Applications
UIC's Chemistry department had already developed a color video introductory course for freshmen that students could watch at their own pace before taking exams. These videos used broadcast-quality TV equipment to film live, close-up laboratory experiments and included many computer-generated animated sequences. Chemistry wanted (1) to develop new courses using a similar format, and (2) to train Math and Chemistry graduate students in the Doctor of Arts program, whose research was on the communication of knowledge within a domain, to use Chemistry's computer and video media to produce its educational materials as part of their degree requirements.
In 1977, Tom DeFanti began consulting with Dave Nutting Associates/Midway, the video game division of Bally, to develop what evolved into the Z-GRASS language (GRASS recoded to run on the Z-80-powered Z-Box, a raster graphics machine with a custom GPU) distributed with what became the Datamax UV-1, an early personal computer that output 320 × 204 color graphics to NTSC video (DeFanti et al., 1978). The UV-1 was marketed as a titling and Computer Graphics Imaging (CGI) system for cable television, but it was mostly purchased and cherished by artists at UIC and the School of the Art Institute of Chicago (SAIC), who used it to develop interactive applications through the 1980s. Z-GRASS later became RT/1 (Z-GRASS recoded in C to be machine independent), which ran on IBM PCs with 640 × 480 video boards.
In 1984, Chicago-based art(n) Laboratory, which pioneered a photographic method to produce PHSColograms—high-resolution, backlit, immersive 3D, barrier-strip autostereographic images displayed in lightboxes—began collaborating with EVL to create them digitally (art(n) & Sandor, 2022a). They believed that computer-generated autostereograms would increase quality and lower production costs, and they did (Sandin et al., 1989; Meyers et al., 1990). Sandin and DeFanti well understood how transformative the user-centered perspective enabled by autostereograms (and VR) was compared to the linear perspective perfected by Renaissance painters that is still used today. In fact, this off-axis, user-centered perspective was the primary contribution to the subsequent development of the CAVEs. art(n) and EVL expanded this research for many years, showed their works in many art and science exhibitions (art(n) & Sandor, 2022b), and received several patents (see Section 7.2). The backlit barrier-strip autostereography technology used for PHSColograms became the foundation for EVL's future research in autostereoscopic VR displays (Section 4.1.1).
In 1987, EVL produced The Interactive Image exhibition at Chicago's Museum of Science and Industry using Datamax UV-1 systems (EVL, 2013a; Bravo et al., 2017). It took a year for EVL faculty, staff, students, alumni, and friends to develop approximately 12 real-time, interactive applications and user input controls that enabled museum visitors to explore mathematics and fractals, create animations, and produce personal artworks by applying image processing to their video-captured faces. The National Center for Supercomputing Applications (NCSA) at the University of Illinois Urbana-Champaign developed an interactive installation that let visitors work with selected scientific images, such as an astrophysical jet stream or colliding neutron stars. Many installations from The Interactive Image were next exhibited at the SIGGRAPH 1988 Art Show and, subsequently, at The Computer Museum in Boston.
2.2 Community Engagement
GRASS and the Sandin Image Processor (IP) were used in a series of live, interactive Electronic Visualization Events (EVE) held in 1975, 1976, and 1979 (with a later event in 1995). These EVE real-time visual and musical jam sessions brought together computer scientists, artists, musicians, and video experts from UIC and SAIC. Spiral PTL (Perhaps the Last), a 1980 video recording of such a performance created by Sandin, DeFanti, and Mimi Shevitz, was included in New York's Museum of Modern Art Inaugural Collection of Video Art in 1983 (EVL, 2019a).
In 1976, EVL hosted artist Larry Cuba, who had asked DeFanti if he could use EVL's computer graphics hardware and software to create the briefing room scene for the 1977 Star Wars movie. Cuba wanted access to the GRASS system because (1) the cost of commercial computer graphics systems was prohibitive for many, and (2) he knew GRASS was originally developed for artists and easy to use. May 25, 2017, marked the 40th anniversary of the theatrical release of the movie and, to highlight Chicago's unique connection to the film, Chicago Tribune journalist Christopher Borrelli (2017) published an article and video titled “Blueprints for ‘Star Wars’ Death Star were created at UIC.”
EVL founders and their students were very active in the ACM SIGGRAPH professional society and its conferences. At SIGGRAPH's first conference, held in 1974, DeFanti, Sandin, and several other attendees informally showed their films and videos. At SIGGRAPH 1975, they took over a lunchroom in a dorm at Bowling Green State University in Ohio and, though still informal, enabled more people to view attendee videos on a TV and 16mm films projected on a beige wall. By SIGGRAPH 1977, DeFanti received a proper budget for audio/visual staff and equipment, and the Film & Video Show became an official conference event, held in the hotel ballroom. In 1980, DeFanti started the SIGGRAPH Video Review (SVR), a video publication documenting the computer-graphics innovations taking place. The SVR remains among the world's largest collections of computer graphics videos.
DeFanti co-chaired the SIGGRAPH 1979 conference, held in the Chicago suburb of Rosemont, with Bruce McCormick, UIC CS professor and Head of the Electrical Engineering and Computer Science department. DeFanti also served on the organization's Executive Committee as Secretary (1978–1981), Chair (1981–1985), and Past Chair (1985–1989).
Maxine Brown was also active in the SIGGRAPH organization and conference, which is how she met DeFanti and Sandin before joining EVL in 1986. She served on the SIGGRAPH Executive Committee as Secretary (1981–1985) and Vice Chair for Operations (1985–1987), and chaired the SIGGRAPH 1992 conference, held in Chicago.
In 1986, the National Science Foundation (NSF) invited DeFanti to serve on its Panel on Graphics, Image Processing and Workstations, created to advise NSF on whether to fund computer graphics and image processing hardware and software acquisitions by research institutions doing advanced scientific computing, with particular attention to NSF-funded supercomputer centers. The Panel's recommendations resulted in the landmark 1987 Visualization in Scientific Computing report and an accompanying videotape of state-of-the-art scientific visualizations, which SIGGRAPH published (McCormick et al., 1987).
3 Scientific Accomplishments: The 1990s
3.1 VR/Visualization Systems and Applications
3.1.1 Hardware
The CAVE achieved national recognition as an excellent VR environment for computational science and engineering data. News media covering SIGGRAPH 1992 interviewed attendees as they exited the CAVE, and the response was overwhelmingly positive (EVL, 2008a; 2008b). By 1993, Larry Smarr at NCSA and Rick Stevens at Argonne National Laboratory (ANL) had built CAVEs on their campuses, as these institutions had researchers with pent-up demand to view and interact with large-scale 3D data simulations and visualizations. Also in 1993, EVL helped the US Department of Defense's Advanced Research Projects Agency (ARPA) build CAVE #4 to fit in their Star Trek-themed Enterprise room that showcased advanced technological innovations for VIP visitors.
General Motors Research immediately recognized that the CAVE would be a useful tool for vehicle design review, rapid prototyping, die and engineering analysis, and manufacturing validation, eliminating the need to build life-sized models so products could come to market faster. They worked with EVL to build a CAVE while developing their own proprietary software, VisualEyes (Smith et al., 2000). The company Pyramid Systems, subsequently acquired by Fakespace and then by Mechdyne Corporation, immediately licensed the CAVE for commercialization.
EVL staff member Greg Dawe designed and built EVL's CAVE (and a portable CAVE for SIGGRAPH 1992) as well as its derivatives, notably the ImmersaDesk and PARIS (described below) and Varrier (Section 4.1.1). In the 2000s, Dawe joined DeFanti at Calit2-Qualcomm Institute, where he built room-sized CAVE-like displays, in particular: StarCAVE, a third-generation CAVE derivative with 360-degree screens surrounding the viewer (3 walls + door + floor), each surface being super-high definition (4K × 4K); its successor SunCAVE, with a new shape, new cluster computers, and high-resolution LCD screens; and WAVE, a curved wall array of 35 LCD monitors that ends in a crest above the viewer's head and a trough at their feet (DeFanti et al., 2009, 2011; Calit2 & Ramsey, 2008; Calit2, 2013; Calit2 & Ismael, 2019).
Each CAVE generation yielded more polygons/textures in real time, more resolution per screen with improved GPUs, more screens and higher-resolution projectors, brighter screens with passive stereo TV monitors, more speakers for audio, better tracking, and vastly improved software libraries. The CAVEcam stereo camera system (Section 4.1.3), developed in 2009, provided far better input imagery, capturing 360 × 180-degree composite images of approximately half-a-gigapixel resolution. The SunCAVE was built to be the viewing device for CAVEcam images, since the older CAVEs did not have high enough resolution.
3.1.2 Software
Initially, EVL wrote its own software libraries so developers had a common platform on which to develop applications. These libraries included TrackD, to access tracking and user-interaction (button, joystick) information; CAVELib (CAVE Library), to handle graphics and interaction; and various libraries to produce and control audio (EVL, 1991). In 1993, the CAVE Simulator greatly simplified application development (EVL, 1993; Pape, 1996). It ran on personal workstations, like the Silicon Graphics Inc. (SGI) Indy, enabling users to first develop applications remotely and then physically access the CAVE for final testing and demonstration.
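Underlying all these libraries is the CAVE's signature rendering step: the user-centered, off-axis perspective projection noted in Section 2.1, computed per wall, per eye, from the tracked head position. Below is a minimal sketch of that computation in Python/NumPy, following the standard generalized-perspective-projection formulation; it is illustrative only, not CAVELib's actual API.

```python
import numpy as np

def normalize(v):
    return v / np.linalg.norm(v)

def off_axis_projection(pa, pb, pc, pe, near, far):
    """Combined projection*view matrix for one CAVE wall and one eye.

    pa, pb, pc: lower-left, lower-right, upper-left wall corners (NumPy
    3-vectors, world space); pe: tracked eye position; near/far: clip planes.
    """
    vr = normalize(pb - pa)            # wall's right axis
    vu = normalize(pc - pa)            # wall's up axis
    vn = normalize(np.cross(vr, vu))   # wall normal, toward the viewer

    va, vb, vc = pa - pe, pb - pe, pc - pe
    d = -np.dot(va, vn)                # eye-to-wall distance

    # Frustum extents on the near plane; asymmetric when the eye is off-center.
    l = np.dot(vr, va) * near / d
    r = np.dot(vr, vb) * near / d
    b = np.dot(vu, va) * near / d
    t = np.dot(vu, vc) * near / d

    P = np.array([                     # standard glFrustum-style matrix
        [2*near/(r-l), 0.0,          (r+l)/(r-l),            0.0],
        [0.0,          2*near/(t-b), (t+b)/(t-b),            0.0],
        [0.0,          0.0,          -(far+near)/(far-near), -2*far*near/(far-near)],
        [0.0,          0.0,          -1.0,                   0.0]])

    M = np.eye(4)                      # rotate world into wall space
    M[0, :3], M[1, :3], M[2, :3] = vr, vu, vn
    T = np.eye(4)                      # translate the eye to the origin
    T[:3, 3] = -pe
    return P @ M @ T
```

In a four-wall CAVE this runs eight times per frame (four walls × two eyes), with each eye offset from the tracked head position by half the interocular distance.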
Early VR application development was done in C and GL before moving to OpenGL and higher-level toolkits such as SGI Inventor and SGI Performer, and then to external libraries such as the Visualization Toolkit (VTK) in the latter half of the decade. The introduction of consumer graphics cards started the movement away from specialized graphics hardware to high-end PCs and PC clusters that would drive EVL's displays in the 2000s.
While high-level libraries like Inventor and Performer made application development easier, the barrier for artists developing CAVE applications remained high. Higher-level libraries based on Performer, such as EVL's XP (EVL, 1998b; Pape et al., 1998) and Ygdrasil (EVL, 2006), enabled artists to create VR experiences, a development that would continue with libraries built on top of Unity in later decades.
In 1993, NCSA introduced Mosaic, the first web browser to display inline images and the one that popularized the World Wide Web. To introduce Mosaic to US federal agencies, EVL packaged a 300-MB file of SIGGRAPH 1992 Showcase content (Section 3.3) for a series of road shows in which NCSA director Larry Smarr, accompanied by EVL graduate student Joanna Mason, demonstrated Mosaic, contributing to its rapid adoption. EVL developed CAVEviewer, a version of the CAVE Simulator embedded in web pages, to provide full-motion video and to enhance documentation and dissemination of research projects via the Internet (Mason et al., 1994). An early use of Mosaic and CAVEviewer was for informational kiosks documenting the SIGGRAPH 1994 VROOM event, organized by EVL (Section 3.3).
As CAVEs proliferated worldwide, users wanted to (1) network CAVEs to remote supercomputers running simulations in order to access and visualize data, (2) remotely steer their simulations in real time, and (3) do CAVE-to-CAVE collaborations with remote participants working in the same virtual space. In the mid-1990s, network speeds were at best megabits per second (Mbps), so real-time interactive collaboration over distance was challenging.
This motivated CAVERNsoft, a networking library (Leigh et al., 1997), as well as avatar representations (polygonal, video, etc.) to enhance remote users' verbal and gestural communication. The ability to communicate through avatars gave users at one location a strong sense of where remote collaborators were in the shared virtual space. EVL's interest in synchronous and asynchronous collaboration also motivated the Lab's efforts in global network infrastructure (Section 3.2). Locating an ImmersaDesk near the CAVE helped EVL test networked collaborative worlds between these two systems before releasing code to others.
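The heart of such tele-immersion is continuously exchanging small, latency-sensitive state, such as the tracker poses that animate remote avatars, separately from bulkier shared world state. A toy sketch of the pose-sharing idea follows; it is illustrative only, not CAVERNsoft's API, and the peer hostnames are hypothetical.

```python
import json
import socket

PORT = 7000
# Hypothetical peers; in practice these would be discovered via a session server.
PEERS = [("cave.example.edu", PORT), ("idesk.example.edu", PORT)]

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("", PORT))
sock.setblocking(False)

def send_pose(site_id, position, orientation):
    """Send this site's tracked head pose to every remote peer."""
    msg = json.dumps({"site": site_id, "pos": position, "quat": orientation})
    for peer in PEERS:
        sock.sendto(msg.encode(), peer)

def receive_poses(latest):
    """Drain pending packets, keeping only each peer's newest pose."""
    while True:
        try:
            data, _ = sock.recvfrom(1024)
        except BlockingIOError:        # no more packets this frame
            return latest
        pose = json.loads(data)
        latest[pose["site"]] = pose    # older poses are simply overwritten
```

Lossy transport suits pose streams, since only the newest sample matters, while reliable channels carry shared data that must not be lost; giving application developers that kind of per-channel choice was the sort of service a toolkit like CAVERNsoft provided.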
In 1997, EVL became part of the National Computational Science Alliance, an NCSA partnership program funded by NSF. EVL focused on collaboration, specifically high-modality tele-immersive applications; that is, distributed VR sessions among VR devices, such as the CAVE and ImmersaDesk. Mechdyne had already licensed and commercialized both the CAVE and the ImmersaDesk2, and several Alliance partners purchased the latter.
During this time, Donna Cox, Bob Patterson, and Stuart Levy of NCSA's Advanced Visualization Laboratory (AVL), who were major CAVE users, worked with EVL graduate student Marcus Thiebaux to create Virtual Director, a tool that enabled artists and scientists to use VR collaboratively, either locally or remotely, to navigate a virtual camera through volumetric datasets and choreograph and script camera movements for playback on film or video (EVL, 1992; Thiebaux, 1997). They used Virtual Director to facilitate research with several domain scientists (Lascara et al., 1999), and to create animations for several IMAX movies, the first being Cosmic Voyage, and NOVA television programs.
UIC campus researchers started complaining that access to EVL's CAVE was difficult; if it wasn't being used for campus demos, then it was being used for software development and therefore unstable. In response, the UIC College of Engineering funded a second CAVE in an adjacent building. The original CAVE was then devoted to development, and the new CAVE became a production-ready system for demos and contract work. Keeping devices demo ready while doing ongoing development remains a challenge to this day.
3.2 Advanced Cyberinfrastructure: Computers and Networking
In 1992, EVL played a major role in the SIGGRAPH 1992 conference held at McCormick Place in Chicago. Maxine Brown was conference chair. DeFanti was responsible for Showcase, a demonstration of leading-edge visualization research and applications that relied on advanced computing and networking—the first time SIGGRAPH and McCormick Place were networked! It was as part of Showcase that the CAVE (see Section 3.1.1) was introduced. DeFanti partnered with ANL, NCSA, and Ameritech, which later ran the Network Access Point (NAP) in Chicago, to organize the event (see Section 3.3).
In 1993, Joe Mambretti, who then worked for the University of Chicago (UC) but subsequently moved to Northwestern University (NU), worked with UC, UIC/EVL, NU, NCSA, ANL, Fermi National Accelerator Laboratory (FNAL), and Ameritech to plan MREN (Metropolitan Research & Education Network), a persistent regional network interconnecting key research institutions in the greater Chicagoland area. It was implemented in 1994 with support from an NSF grant. MREN grew quickly and today links research institutions from seven upper-Midwest states (MREN, 2015). MREN, then located at the Ameritech NAP, interconnected with other regional networks via NSFnet, the NSF-funded National Research & Education Network (NREN), which was replaced in 1995 by NSF's very High-Speed Backbone Network Service (vBNS), the precursor to Internet2.
The vBNS, as well as other nascent federal NRENs from the US Department of Energy and the US Department of Defense, were used for the I-WAY (Information Wide Area Year) event at the IEEE/ACM Supercomputing (SC) 1995 conference, organized by ANL, NCSA, and EVL (see Section 3.3). I-WAY participants could leave their supercomputers, instrumentation, and data stores at home and remotely access them to move the results of large-scale simulations to the conference for display on visualization systems.
NSF took note of I-WAY, as its funding-agency counterparts in other countries were also working with their NRENs to build advanced networks. These agencies wanted to interconnect their NRENs for the benefit of their respective researchers, providing access to computers, data stores, instrumentation, and collaborators globally. NSF encouraged EVL's involvement, liking the idea of visualization people managing and using networks for interactive visualization and/or streaming: if the networks weren't working properly, everyone would know immediately, as the resulting imagery would jitter, lag, or drop out.
In 1997, to enable network traffic among US and international research institutions, NSF funded EVL to work with NU, ANL, NCSA, and Ameritech to establish STAR TAP (Science, Technology And Research Transit Access Point) in Chicago, a global Network Access Point (NAP) for long-term interconnection and interoperability of international NRENs (STAR TAP, 1997; NSF, 2003). STAR TAP leveraged the Ameritech NAP infrastructure where MREN and vBNS connected.
Next, to facilitate the interconnection and interoperability of NSF's vBNS with international NRENs for the benefit of US researchers, NSF funded international circuits, awarding grants to several US researchers to pay for these links. In 1999, EVL received funding for Euro-Link and worked with a consortium of European and Middle East countries to co-fund circuits between the United States and network hubs in the Nordic countries (Denmark, Finland, Iceland, Norway and Sweden), France, the Netherlands, and Israel (STAR TAP, 1999). At that time, other universities (Florida International University and Indiana University) also received NSF awards to provide circuits to South America and Asia, respectively.
By 2001, as network speeds grew from 155–622 Mbps on SONET (Synchronous Optical Network) to 1–10 Gbps over optical channels, EVL worked with Joe Mambretti of NU and Linda Winkler of ANL to create the StarLight International/National Communications Exchange Facility to prototype a 10-Gbps Ethernet-over-optical-fiber path infrastructure (STAR TAP, 2001). They used advanced technologies, including optical transport systems from the Canadian telecommunications equipment company Nortel, replacing ATM (Asynchronous Transfer Mode), a network transfer mode for switching and transmitting data, with GigaMAN (Gigabit Metro Area Network) and GigaMON (Gigabit Metro Optical Network) Ethernet services that had state-wide, regional, national, and international extensions. Given the research community's demand for faster networks to move big data, StarLight became a production facility within a few months!
EVL's interest in advanced networking technologies and techniques led to a long-term collaboration with NTT (Nippon Telegraph and Telephone) Network Innovations Laboratories in Japan. The N*VECTOR and then ON*VECTOR (Optical Networked Virtual Environments for Collaborative Trans-Oceanic Research) research projects (1997–2019) focused on the utilization, scheduling, scalability, speeds, and reliability of emerging high-performance networks; EVL's SAGE software (see Section 4.2) was eventually used to stream ultra-high-definition visualizations and animations (4K, 8K, and more). In addition to NTT and EVL, partners included Calit2-Qualcomm Institute, Keio University, and University of Tokyo; the effort was managed by Pacific Interface, Inc. (See Section 4.4 for how this collaboration led to the creation of the CineGrid organization.)
3.3 Community Engagement
The CAVE premiered at SIGGRAPH 1992 as part of the Showcase event, organized by DeFanti, EVL, and partners NCSA, ANL, and Mesa Graphics (Section 3.2) (EVL, 2022a). Showcase uniquely demonstrated science in action with over 45 projects using either the CAVE or workstations networked to local and remote supercomputers, instrumentation, and data stores. Given the CAVE's impact, organizers of the SC 1992 and 1993 conferences invited EVL to introduce the CAVE to their attendees as well.
EVL has continued to both organize events and participate in events at SIGGRAPH and SC as well as at other conferences and GLIF workshops (see Sections 4.3 and 4.4). These venues presented opportunities to promote EVL's new technologies and achievements to the research communities with whom EVL was involved (VR, scientific visualization, advanced networking), and to attract both future collaborators and future users of its inventions.
In 1994, EVL organized the SIGGRAPH 1994 VROOM (Virtual Reality Room) event (EVL, 2004a). VROOM showcased over 40 VR projects using three CAVEs and several BOOMs (Binocular Omni-Orientation Monitors) (Wikipedia, 2019), involving more than 200 researchers. Unfortunately, because Orlando's convention center was under construction, offsite network connections were not possible, so EVL had to arrange transport to bring data stores, workstations, small supercomputers, and scientific instrumentation onsite.
The goal of the SC 1995 I-WAY event, mentioned in Section 3.2, was to leave the supercomputers at home, access data over advanced networks, and visualize results at the conference, in CAVEs, on ImmersaDesks, and on a large-screen, projection-based PowerWall (DeFanti et al., 1996; EVL, 2022b). I-WAY demonstrations interconnected 12 network testbeds, 17 supercomputer centers, five VR research sites, and over 60 application groups to create an extremely diverse, wide-area environment in which to explore advanced applications and the future of advanced scientific and engineering computing.
In 1998, after NSF funded STAR TAP and international network circuits, NSF asked EVL if we could identify who was using the networks and for what purposes. For EVL, the best way to find out was to organize an event at a major conference and invite real-time demonstrations. In collaboration with Indiana University, EVL organized the first iGrid (International Grid) event, then a small research booth at the SC 1998 conference that showcased global community networking and visualization applications (EVL, 1998a; Brown, 1998; Brown et al., 1999). One EVL demonstration used CAVERNsoft (Section 3.1.2) to connect VR displays in Singapore, Australia, Japan, and the United States (Leigh, 1998).
In 1998, EVL began working with Bryan Carter of Central Missouri State University, along with the University of Missouri, the University of Arizona, and UIC's English and History departments, to create Virtual Harlem, a VR collaborative learning environment representing Harlem, New York, as it existed in the 1920s and 1930s. Carter conceived and initiated the project to complement undergraduate Harlem Renaissance courses, enabling researchers and students to study this important period in African American history (Park et al., 2001). EVL applied lessons learned from its V-Mail application (see Section 3.1.2) and recorded experts, like Carter, talking and gesturing in Virtual Harlem, to engage viewers in the experience even when the experts were not physically present. In 2000, Carter received an award for Virtual Harlem from the MOBE (Marketing Opportunities for Black Entrepreneurs) group at their annual conference.
4 Scientific Accomplishments: The 2000s
4.1 VR and Visualization Systems and Applications
4.1.1 Autostereo Hardware, Software, and Applications
While the CAVE gave users a unique immersive experience, there were clear technological limitations. Users were encumbered by wearing active stereo glasses. Room lighting was dimmed to better see the projected imagery, but this made it more difficult for collaborators to interact with one another and take notes. And, while users could be immersed in their data, the intensity and resolution of the early projectors and environmental lighting constraints impacted the ability to do quality visual science. It was like working by moonlight, although the color got much better with brighter stereo-capable projectors over the next 15 years.
4.1.2 Visualization and VR Hardware, Software, and Applications
EVL started working with the geoscience community, in particular the University of Minnesota and the University of Michigan, to build a low-cost, non-tracked, passive-stereo system so audiences anywhere could easily view stereo content. While such a low-cost system was applicable to many applications, the geoscience community immediately saw its potential to support Earth Science curricula and geological fieldwork (to document 3D rock formations), hence the name GeoWall (Johnson et al., 2006). By 2001, a $10,000 system (two commodity "PowerPoint" presentation projectors, passive stereo filters, and a PC) was used in classrooms by 33% of US Earth Science undergraduate students. Based on this success, the geoscience community next asked EVL for help viewing high-resolution images.
In 2004, to make space for LambdaVision, EVL's 55-panel, 100-megapixel tiled display wall, EVL dismantled the original CAVE in its main lab. UIC's second CAVE continued to be used until it was decommissioned in 2006 to make way for a motion capture studio.
In late 2008, EVL integrated the latest technologies—larger, higher-resolution commodity flat panels and IR tracking—to create the multi-user, multi-touch systems TacTile (Kahler et al., 2009) and OmegaDesk (Febretti et al., 2011). TacTile was, in effect, a giant multi-touch smartphone screen. OmegaDesk combined motion-sensing technology and multi-touch to let users seamlessly interact with and view hybrid 2D and 3D content.
OmegaLib, a C++/Python middleware library, was initially developed to support OmegaDesk (Febretti et al., 2014). It was expanded to run all EVL's visualization systems, replacing CAVELib as EVL's standard development library, and was interfaced to SAGE.
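OmegaLib scenes could be assembled with very little code. The sketch below is modeled on the project's published Python examples (reconstructed from memory of the public documentation, so treat the exact names as approximate):

```python
# Minimal OmegaLib-style scene script (names approximate).
from omega import *
from cyclops import *

# Place a simple lit object in front of the viewer.
box = BoxShape.create(0.8, 0.8, 0.8)
box.setPosition(Vector3(0, 2, -3))
box.setEffect("colored -d #8080ff")

light = Light.create()
light.setColor(Color("#ffffff"))
light.setPosition(Vector3(0, 4, 0))
light.setEnabled(True)

def onUpdate(frame, time, dt):
    # Called once per frame; the runtime keeps the display cluster in sync.
    box.pitch(dt)
    box.yaw(dt / 3)

setUpdateFunction(onUpdate)
```

In principle, the same script could run on a laptop, OmegaDesk, or CAVE2, with the runtime handling displays, stereo, and tracking, which is what made OmegaLib attractive as a lab-wide standard.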
In 2006, EVL received NSF funding for Project Lifelike, a joint project between EVL and the University of Central Florida (UCF) to develop and evaluate lifelike, natural computer interfaces as portals to intelligent systems (Gonzalez et al., 2013). An NSF program officer was retiring and was asked to document the details of a program he had created decades earlier; however, rather than have him write a manual, NSF wanted a lifelike avatar of him to interact with users and answer questions on specific topics in spoken natural language. Using lessons learned from AccessBot, described earlier, the avatars were designed for large screens, so they were life-size as well as lifelike.
UCF focused on natural language processing and EVL focused on creating a realistic, interactive avatar and system evaluation. Project Lifelike was featured on Discovery Science channel in “Future of: Immortal Avatars” in 2009 and NOVA's scienceNOW (NOVA, 2011). In 2014, Project Lifelike was used to create an avatar of Alan Turing for a museum show (see Section 5.4).
4.1.3 3D Camera
In 2009, DeFanti, who was then at Calit2-Qualcomm Institute, and Sandin, at EVL, collaborated with Richard Ainsworth to develop CAVEcam (see Figure 44), an automated, spherical, still photography camera system that created panoramic 3D immersive experiences for viewing in CAVE-like displays (Ainsworth et al., 2011). Researchers wanted to bring high-resolution 3D images of the real world into VR, not just synthetic worlds, just as geoscientists needed the GeoWall stereo projection system to best teach students about rock formations. CAVEcam took 360 × 180-degree panoramas, 72 pictures per eye, with each image 10 megapixels in resolution, which were then assembled into 500 megapixel-per-eye stereo images using a stitching program called PTGui Pro.
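The pixel counts are consistent once stitching overlap is accounted for; assuming the roughly 30% inter-frame overlap typical of panorama stitching (our assumption, not a figure from the CAVEcam papers):

\[
72 \times 10\,\mathrm{MP} = 720\,\mathrm{MP\ raw\ per\ eye}, \qquad 720\,\mathrm{MP} \times 0.7 \approx 500\,\mathrm{MP\ unique\ per\ eye}.
\]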
4.2 SAGE Collaboration and Visualization Software
From 2002 to 2009, NSF funded Calit2-Qualcomm Institute and EVL to lead the OptIPuter project, so named for its use of OPTical networking, Internet Protocol, and compUTER technologies (Smarr et al., 2003, 2009; Pieper et al., 2009). Larry Smarr, principal investigator, was previously NCSA director and an EVL partner; when he moved to UC San Diego in 2000, he founded and became director of Calit2 (now Calit2-Qualcomm Institute) and continued EVL collaborations.
The OptIPuter envisioned how researchers would do computing in a world with unlimited network bandwidth: it proposed multi-Gbps networks as the backplane of a planetary-scale computer whose distributed components—compute clusters, data storage systems, scientific instruments, visualization systems—would enable scientists to compute, access, and view ultra-high-resolution visualizations on large-scale display walls. Such data superhighways were needed because the shared Internet and World Wide Web were engineered to handle megabyte-sized objects, whereas scientific instruments generated gigabyte- to terabyte-sized datasets. The OptIPuter aimed to make interactive access to remote gigabyte visualization data objects as easy as the Web made access to remote lower-resolution images.
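Back-of-the-envelope arithmetic (ours, ignoring protocol overhead) makes the motivation concrete: moving a 1-terabyte dataset over a 100-Mbps share of the commodity Internet takes nearly a day, whereas a dedicated 10-Gbps lightpath cuts that to minutes:

\[
\frac{8\times10^{12}\ \mathrm{bits}}{10^{8}\ \mathrm{bits/s}} = 80{,}000\ \mathrm{s} \approx 22\ \mathrm{hours}, \qquad \frac{8\times10^{12}\ \mathrm{bits}}{10^{10}\ \mathrm{bits/s}} = 800\ \mathrm{s} \approx 13\ \mathrm{minutes}.
\]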
By 2004, after observing how OptIPuter's geoscience researchers from the UCSD Scripps Institution of Oceanography (SIO) and neuroscience researchers from the UCSD National Center for Microscopy and Imaging Research (NCMIR) worked, EVL began developing SAGE (Scalable Adaptive Graphics Environment) as the OptIPuter's visualization middleware (Renambot et al., 2009, 2016).
The OptIPuter's distributed network testbed was part of an evolving 10-Gbps regional, national, and international Research & Education Network infrastructure, which EVL helped architect in the 1990s with STAR TAP and then with StarLight (see Section 3.2). LambdaVision had a 10-Gbps connection from EVL to StarLight, where it interconnected with this evolving global infrastructure; since SAGE was being developed on LambdaVision, it was designed to scale up rendering capacity and was a prime candidate for aggressively evaluating high-performance networks.
The SAGE Visualcasting service enabled multi-point distance collaboration, whereby high-resolution visualizations and high-definition video streams (visual and audio) were replicated and sent to multiple sites, each with variable-sized tiled displays (Jeong et al., 2010). Visualcasting enabled researchers to simultaneously communicate with each other and share visualizations and videos, an important advancement over traditional network multicasting, which network engineers had to implement on a case-by-case basis.
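Conceptually, Visualcasting is application-level multicast: a bridge node receives each pixel or video stream once and replicates it to every subscribed site, and each site then maps the stream onto its own tiled-display geometry. A toy illustration of the fan-out idea follows; it is not SAGE's actual protocol, and the subscriber hostnames are hypothetical.

```python
import socket

# Hypothetical subscriber sites: one (host, port) per tiled-display receiver.
SUBSCRIBERS = [("siteA.example.edu", 9100), ("siteB.example.edu", 9100)]

def bridge(listen_port=9000):
    """Accept one incoming stream and replicate every chunk to all subscribers."""
    outs = [socket.create_connection(addr) for addr in SUBSCRIBERS]
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind(("", listen_port))
    srv.listen(1)
    conn, _ = srv.accept()             # the sender's single stream
    while True:
        chunk = conn.recv(65536)
        if not chunk:                  # sender closed the stream
            break
        for s in outs:                 # one-to-many replication
            s.sendall(chunk)
    for s in outs:
        s.close()
```

Doing the replication in the application, rather than in the routers, is what freed Visualcasting from the per-path engineering that traditional network multicast required.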
SAGE Visualcasting was used by the EVL/NTT Network Innovations Laboratories collaboration to stream ultra-high-definition visualizations and animations (4K, 8K, and more) to scalable tiled displays (see Section 3.2).
Since its inception, SAGE has continued to evolve and now controls a variety of high-resolution displays at hundreds of sites around the world. SAGE developers and users regularly use it for meetings and classes, and their feedback helps harden the software and drive new features.
4.3 Advanced Cyberinfrastructure: Computers and Networking
By 2004, Euro-Link (Section 3.2) had grown from a 45-Mbps circuit to multiple 10-Gbps links. To continue this effort, EVL received NSF funding for TransLight in 2004 (DeFanti et al., 2003; STAR TAP, 2004), in which StarLight and NetherLight (an open network exchange, like StarLight, hosted by SURFnet, the Dutch Research & Education Network) would groom aggregate traffic in North America and Europe, respectively, with a trans-Atlantic network connecting the two locations. Again, other universities also received NSF funding to connect the United States to South America and Asia.
In 2010, EVL received the NSF TransLight/StarLight award to focus on communication services and advanced applications, and in 2015, partnered with NU on StarLight Software Defined Networking Exchange (SDX) to enhance StarLight with advanced, diverse, reliable, persistent, and secure networking services (STAR TAP, 2015).
4.4 Community Engagement
iGrid events (see Section 3.3) started in 1998 and continued in 2000, 2002, and 2005 as community-driven events coordinated with the GLIF organization (see Section 4.3) to accelerate the use of multiple 10-Gbps international and national networks, to advance scientific research, and to educate decision makers, academics, and industry researchers on the benefits of hybrid networks (Brown, 1998, 2000, 2002, 2005). iGrid events all featured real-time application demonstrations from numerous international participants, as well as symposia, panels, and master classes on applications, middleware, and underlying cyberinfrastructure.
One iGrid 2005 demo, “International Real-time Streaming of 4K Digital Cinema” (Shimizu et al., 2006), was so successful that the organizers wanted to continue and expand their collaboration with other interested parties, including networking and visualization researchers as well as movie production companies. This led to the creation of the CineGrid organization, a global effort to support research and education for 21st-century filmmaking by applying advanced networks, middleware, and application tools to the production, distribution, archiving, and display of new digital media (Smarr et al., 2007). CineGrid, a nonprofit organization, was created by Pacific Interface, Inc., in collaboration with Calit2-Qualcomm Institute, EVL, NTT Network Innovations Laboratories, Keio University, and the Digital Cinema Technology Forum and Digital Cinema Consortium of Japan.
In 2007, astronomers at Chicago's Adler Planetarium worked with EVL to create interactive exhibits of flyovers of Earth's moon and Mars, which led to the inception of Adler's Space Visualization Laboratory. This working laboratory, staffed with astronomers, technologists, artists, and educators, gave museum visitors a firsthand look at new interactive and immersive visualizations and an opportunity to hear presentations featuring these visualizations. With EVL's help, a similar space was built at the ‘Imiloa Astronomy Center in Hawaii in 2016.
In 2009, Adler Planetarium worked with EVL to build Moon Wall, a 4 × 4 tiled-display wall, to display recent images from NASA's Lunar Reconnaissance Orbiter mission to the moon. Visitors could virtually fly over the surface of the moon using a joystick. It also showed EVL's 3D rendering of Mars, developed with data from NASA and the European Space Agency.
5 Scientific Accomplishments: The 2010s
5.1 VR and Visualization Systems and Applications
5.1.1 CAVE2 and Applications
The CAVE2 Hybrid Reality Environment was introduced in October 2012. Lessons learned from developing the CAVE, LambdaVision, Cyber-Commons, and Varrier technologies were incorporated in the development of CAVE2 (Febretti et al., 2013; Leigh et al., 2013; Reda et al., 2013), a room-scale system that integrated 3D immersive simulation and 2D information analysis. To make room for CAVE2 in EVL's main lab, LambdaVision was dismantled and given to the UIC ACM Computer Science student chapter for use in their space.
CAVE2, with its 22-foot diameter and circular (320-degree) design, initially used 72 passive stereo displays (18 columns × 4 rows) driven by a 36-node computer cluster. It could be run as a 37-megapixel 3D display or a 74-megapixel 2D display, making it a hybrid device that could easily adapt based on usage. In 3D, CAVE2 used passive stereo with camera tracking rather than the expensive, bulkier active-stereo glasses and tethered magnetic tracking of the original CAVE. The familiar, lightweight passive stereo glasses, the same kind movie theaters use for 3D releases, were a welcome change.
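The 2:1 relationship between the two modes follows from the panel technology; assuming row-interleaved (micropolarized) passive stereo panels of roughly one megapixel each:

\[
72 \times {\approx}1.03\,\mathrm{MP} \approx 74\,\mathrm{MP}\ \mathrm{(2D)}, \qquad 74\,\mathrm{MP} / 2 = 37\,\mathrm{MP\ per\ eye}\ \mathrm{(3D)},
\]

since in 3D mode alternating scanlines are devoted to the left and right eyes.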
A recessed back wall was later added, increasing the number of displays to 86, to give the illusion of a complete 360-degree surround environment or to act as a second display surface for ancillary content. As GPUs became more capable, the driving cluster was eventually reduced to eight computers.
New software was written to drive the CAVE2 in 3D and 2D modes. For 2D work, EVL used SAGE2 (see Section 5.2). For 3D work, new libraries were created: Electro, written in Lua (EVL, 2004e), and OmegaLib, written in C++/Python (Febretti et al., 2014). These scriptable, multi-platform tools enabled users to quickly get their data and applications into CAVE2. Additional CAVE2 libraries were developed to control graphics, sound, user inputs, and tracking.
By the middle of the decade, as game engines such as Unity and Unreal Engine became more powerful and more ubiquitous, EVL moved to the Unity platform. The CAVE2 Unity Library and the Omicron plug-in (Nishimoto, 2020) let students write applications and deploy them to CAVE2, head-mounted displays (HMDs), laptops, and/or smart phones. A Unity CAVE2 Simulator, along with a set of simple demo examples, let users create and test CAVE2 applications without needing access to the CAVE2 system. These libraries became extensively used by CS and Design students enrolled in the Creative Coding class to create fully functional CAVE2 experiences within a single semester without any prior Unity experience.
CAVE2 was designed to support research groups (faculty, students, collaborators) working for long periods throughout the day within the space. A great deal of care was taken to make the space comfortable with controlled lighting and sound, and to make the space reconfigurable with portable tables and chairs. CAVE2 continues to be regularly used in 3D VR mode, 2D project room mode, or 3D/2D hybrid mode, for research and class work.
5.1.2 Continuum
5.1.3 VR and AR Headsets
In the late 2010s, a wave of less expensive VR and Augmented Reality (AR) headsets brought VR back into the spotlight and reignited interest in the field. EVL purchased several different HMDs and provided a dedicated space in the main lab for use in coursework and research. The current generation of headsets is still too bulky and heavy to wear for long periods, but inevitably, as with most technologies, HMDs will evolve into lighter and less constraining devices in support of longer use.
5.1.4 3D Cameras
5.2 SAGE Collaboration and Visualization Software
In 2013, EVL received NSF funding to develop SAGE2 (Scalable Amplified Group Environment) (Marrinan et al., 2014; Renambot et al., 2016). SAGE2, the second-generation SAGE, was re-architected to take advantage of advances in web graphics (doing GPU accelerated graphics rendering within web browsers) and more robust PCs (where a large display could be driven by a single PC rather than a compute cluster, making it easier for scientists to acquire, use, and maintain display walls). Jason Leigh, inventor of SAGE, left UIC in 2014 for University of Hawaii at Manoa/LAVA and continues to collaborate with EVL on SAGE2, and now SAGE3 (see Section 6.1), development.
Whereas SAGE was written in C++, SAGE2 moved to a fully JavaScript code base, where all graphics were rendered in the browser. By 2018, there were over 4,000 SAGE2 users at over 800 institutions in 18 countries, in a broad range of disciplines: archaeology, architecture, art, atmospheric science, biology, chemistry, civil engineering, communications, computer science, design, education, geoscience, health, library science, mathematics, medicine, meteorology, network engineering, neuroscience, physics, psychology, and statistics (Leigh et al., 2019; Belcaid et al., 2023).
5.3 Advanced Cyberinfrastructure: Computers and Networking
EVL continually investigates new computer technologies and has a strong working knowledge of GPUs, given all the computers it has purchased over the years to drive its visualization and VR displays (everything from mini-supercomputers to clusters to PCs). EVL's computers interface with both UIC's academic network and its research network.
In 2017, EVL was introduced to a next-generation computer architecture called composable infrastructure (Lowe, 2016) and immediately saw how it was applicable to UIC's Engineering faculty, who had GPU-hungry applications in visualization, image processing, artificial intelligence (AI)/machine learning, and the integration of AI with visualization (i.e., extracting data of interest for visualization when the data is too large to be output directly). In 2018, EVL received NSF funding for COMPaaS DLV (COMposable Platform as a Service Instrument for Deep Learning & Visualization), a computer system in which the computer's components—CPUs, several types of GPUs, storage, and networking, all interconnected with a high-bandwidth configurable fabric—formed a fluid pool of resources, such that different applications with different workflows could be run simultaneously, with each configuring the resources it required almost instantaneously, at any time. By the end of the NSF award in 2022, COMPaaS was being used by over 40 UIC College of Engineering faculty, students, and collaborators, and continues to attract new faculty and students as well as domain scientists across campus (Brown et al., 2019; Chen et al., 2022; Long et al., 2022).
COMPaaS facilitates new breakthroughs in theoretical research and challenges long-standing paradigms in visual computing. For example, several EVL projects helped show that, in interdisciplinary collaborations, Activity Centered Design leads to higher success rates than Human Centered Design (Marai, 2018). As another example, the Continuum large display helped document that domain scientists prefer a Details-first design approach to an Overview-first one: computational fluid dynamics collaborators physically walked up to the Details view on EVL's large display at the start of each analysis process (Luciani et al., 2019). Other EVL works (Wentzel et al., 2020; Floricel et al., 2022) have since been included in graduate-level curricula and textbooks (Preim et al., 2023; BioMedVis, 2022).
High-speed networking continued to be of significant benefit to researchers—to collaborate over distance, share data, and have real-time access to remote computational resources. With NSF funding, UIC upgraded its campus’s research network to 100 Gbps, a 10-fold increase over the previous decade, and deployed 100 Gbps into EVL where it connects to its many devices, including CAVE2.
5.4 Community Engagement
CAVEcam, the spherical panoramic, 3D-immersive still-photography camera, was used in 2011 by Tom DeFanti and colleagues Greg Wickham and Adel Saad to capture spectacular 3D surround images of Egypt's temples at Luxor, a major world heritage site, and bring these splendors back to the United States for viewing in Calit2-Qualcomm Institute's SunCAVE and EVL's CAVE2 (Ainsworth et al., 2016). Ainsworth did the stitching for these images and dozens more CAVEcam images.
In 2014, the Orlando Science Center approached EVL and the University of Central Florida about using Project Lifelike (see Section 4.1.2) to create an interactive exhibit featuring an avatar of Alan Turing, considered the father of modern computer science, to motivate kids to study CS or engineering (EVL, 2014). Turing's avatar interacted with attendees, teaching them bits of CS history and helping them make their own avatars using a touch screen. EVL also worked with Adler Planetarium on an interactive avatar of former astronaut Jim Lovell.
In 2014, Daria Tsoupikova, EVL/design faculty, collaborated with University of Bergen (Norway) and Temple University on Hearts and Minds: The Interrogations Project, an interactive VR narrative performance in EVL's CAVE2 (Tsoupikova et al., 2017). It visualizes stories of violence and post-traumatic stress experienced by ordinary American soldiers during the American-led counterinsurgency and counter-terrorism campaigns in Iraq in the years after September 11, 2001. This artwork was subsequently moved to portable displays, domes, projection surfaces, and HMDs to reach new audiences across educational, arts, and public arenas.
6 Scientific Accomplishments: The 2020s
6.1 SAGE Collaboration and Visualization Software
In April 2020, LAVA, EVL, and Virginia Tech received NSF funding to develop SAGE3 (Smart Amplified Group Environment). Shortly thereafter, COVID struck. The development team recognized the sociological changes brought about by the pandemic, notably shifting the workplace to a hybrid office/home environment that required not just video-teleconferencing but data-driven tools for large, distributed groups. The team also knew that there had been several major technological changes since SAGE2 was designed in 2013; for example, web browsers and languages were maturing; scientific communities were relying on Jupyter Notebooks; and science workloads were incorporating AI algorithms and programmable infrastructure. Therefore, it was decided to re-architect SAGE3 from the ground up.
SAGE3 leverages robust JavaScript frameworks and cloud services to reimplement and improve upon SAGE2's capabilities, and it adds provisions to run and manage data science and AI services. Furthermore, SAGE3 enables users to work both at home on their desktop computers/laptops and in their research laboratories on tiled display walls, with content seamlessly shared across workspaces of unbounded size and number (Kirshenbaum et al., 2020, 2021; Harden et al., 2022; Belcaid et al., 2023).
6.2 Advanced Cyberinfrastructure: Computers and Networking
By 2021, the COMPaaS computer system was enhanced with a public-facing JupyterHub server, as Jupyter had become a common software frontend for most machine learning work. COMPaaS was also enhanced with a current generation PCI-express fabric and expansion chassis, enabling researchers to add their own accelerators or compute nodes to the pool of available resources. COMPaaS is available to users as bare metal for systems and network research applications but primarily supports containerized GPU applications using Kubernetes orchestration (Kubernetes, 2023). Users can deploy domain-specific or customized Docker containers on COMPaaS GPUs. Kubernetes orchestration manages the resource requisitions, network ingress, data storage, and container access for COMPaaS users. Jupyter Notebook implementations provide users with workflows that are reusable and easy to share among researchers. The skills learned developing codes on COMPaaS enable its users to build scalable applications faster, with knowledge that transfers directly to other research problems.
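As a concrete illustration of that workflow, the sketch below uses the official Kubernetes Python client to submit a containerized job requesting one GPU; the image name, namespace, and pod details are hypothetical placeholders, not COMPaaS's actual configuration.

```python
from kubernetes import client, config

config.load_kube_config()  # read the user's kubeconfig credentials

# Hypothetical image and namespace, for illustration only.
pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="train-job", namespace="compaas-user"),
    spec=client.V1PodSpec(
        restart_policy="Never",
        containers=[
            client.V1Container(
                name="trainer",
                image="registry.example.edu/lab/pytorch-train:latest",
                command=["python", "train.py"],
                # Kubernetes schedules the pod onto a node with a free GPU;
                # the container sees exactly the devices it requested.
                resources=client.V1ResourceRequirements(
                    limits={"nvidia.com/gpu": "1"}
                ),
            )
        ],
    ),
)
client.CoreV1Api().create_namespaced_pod(namespace="compaas-user", body=pod)
```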
6.3 Community Engagement
7 Major Outcomes
7.1 Student Successes
EVL undergraduate and graduate students have won multiple Best Paper and Honorable Research Awards, NSF Graduate Research Fellowship Awards and nominations, UIC Chancellor's Undergraduate Research Awards, UIC Engineering Dean's Scholar Fellowship Award, Doctoral Colloquium selections at premier conferences, and Computing Research Association fellowships.
EVL students land excellent jobs upon graduation, putting their skills to use in: (1) special-effects houses (PDI/DreamWorks, Rhythm & Hues, Digital Domain, MetroLight Studios, Cinesite, Disney Animation, Pixar, Weta Digital); (2) the gaming industry (Amazon Games, High Voltage, Valkyrie Studios, Midway, CodeWalla); (3) companies (Amazon, Apple, Google, Time Warner TBS, General Motors, Microsoft, Exxon Mobil, Qualcomm, Motorola, Mechdyne); (4) research labs (ANL, NASA Goddard, Northrop Grumman, Qualcomm, Thomson Scientific); and (5) universities (USC Cinema, San Francisco State, University at Buffalo, Indiana University, University of Massachusetts, University of Chicago).
A common phrase upon graduation is “No one ever leaves EVL.” Fostering a sense of family, community, and inclusion has been an important part of EVL's overall success. EVL makes every effort to stay in touch with its graduates. EVL attends many major conferences, such as SIGGRAPH, SC, and IEEE Vis, where former students reunite. Alumni frequently visit EVL, give presentations about their careers, and learn about EVL's latest innovations—and some adopt them for their jobs. Maintaining these relationships provides for a close-knit network and support system for all graduates.
7.2 Industry Collaborations, Commercialization, Patents, and Licenses
EVL, as a technology enabler and disruptor, maintains a multimillion-dollar research facility. EVL has a track record of working with industry partners to help them be more competitive in the global marketplace, either by working with them to commercialize EVL inventions or by educating them on how their commercial products can be re-engineered to meet the needs of the academic research and education market.
EVL's software (including SAGE, Electro, Omegalib, the CAVE2 Simulator, etc.) is made available for download from GitHub as open source for non-commercial use. For commercial use, UIC's Office of Technology Management brokers software and hardware licensing agreements with interested companies, such as Mechdyne, Vadiza, Comtex, and Multivis. EVL has had a longstanding licensing relationship with Mechdyne, which commercialized the CAVE in the 1990s and the CAVE2 in 2013.
Over the years, select EVL faculty and students have received patents for their inventions. When developing the CAVE2, EVL worked with display manufacturer Planar Systems and in 2017 received U.S. Patent No. 9,800,862 B2 for “System and Methods for Visualizing Information.” When developing Varrier, EVL received U.S. Patent No. 8,248,462 B2 for the “Dynamic Parallax Barrier Autostereoscopic Display System” in 2012. EVL and (art)n Laboratory, when working on PHSColograms, received U.S. Patent No. 5,113,213 in 1992 and reissue Patent No. RE35,029 in 1995 for “Computer-generated autostereography method and apparatus.”
8 Lessons Learned
Here are some reflections from EVL interim director Andy Johnson, who has been at EVL for 30 years. These lessons learned have been substantiated throughout this paper.
Keep your friends close and your hardware (toys) closer. Having everyone co-located with the equipment they use makes so many things simpler.
Flexibility is important. Having a varied, but overlapping, research portfolio helps attract new funding, faculty members, and students, as research topics evolve over time.
Use the tools you develop regularly to work out the bugs and keep the tools current.
Family is important and EVL is family. Faculty, staff, and students all have desks and access to meeting rooms and a common kitchen area with an espresso machine and tea kettle! Having faculty and students casually talk with one another at different stages in their careers and learn how they are coping with issues gives a strong sense of continuity and a path forward.
Embrace evolution! Given the rapid pace with which technology ages and students and staff move on, plan to update or replace computers, display screens, and software on a regular basis, prioritizing needs and available funding. For example, the lifespan of a large display at EVL is typically 10 years before newer technology is available to support the Lab's evolving research focus.
Create tools and techniques that help people, in collaboration with the people being helped.
Being affiliated with at least one academic department is important. EVL's affiliation with the CS department provides a steady stream of new students and access to other faculty members who may have new problems to solve or the knowledge to help solve new problems.
Pursue your passion projects, as they can lead to new research directions. Opportunities come from the strangest projects!
The research lab must provide researchers and students with better infrastructure than they have at home; otherwise, why would they come into the lab to work?
Develop high-quality simulators for new hardware so users can code anywhere, and then come into the laboratory to test, especially when the resource, like the CAVE or CAVE2, is a unique system and not easily shared among many users at the same time.
Diversity across multiple dimensions is essential. EVL has worked with people of all ages, genders, races, disabilities, and domains, from the arts and sciences to the general public, and they all bring interesting perspectives to EVL's research and opportunities to take it in new directions. Computer graphics and visualization appeal to people with a diversity of talents, from math and science to the arts (art, design, music, dance, sound, theater, film).
Understand the importance of showtime. The tech must work! Use conference demonstrations, museum exhibits, lab tours, and end-of-semester project demonstrations as reasons to harden the hardware and software so it can be used by others. Also, create expectations that these demonstrations should be flawless, or at least seem that way to visitors. Then use these demonstrations to meet new people, get feedback, and discover new problems to work on.
Get your work out of the lab and into the hands of actual users (scientists, students, museum goers, conference attendees).
Live the dream. If you don't have the future you were promised in the sci-fi movies you watched or books you read as a child, then build that future with your friends!
9 Trends and Plans for the Future
The world needs immersive, room-scale technologies for data analytics. People are now accustomed to ever-larger flat-screen TVs and wider, high-fidelity desktop monitors to support their work. Display technologies ranging from projection-based and OLED-wall active stereo to high-definition display walls can aid researchers collaborating on complex problems, whether they are studying one large image or visualizing multiple representations of the data with supporting information.
Thirty years of experience working with scientists and their data in a variety of domains has shown that room-scale stereoscopic displays, HMDs, smartphones, laptops, and tablets should all be integrated into larger collaborative spaces, as such integration is still the only way to truly investigate data in depth. Integrating AR and VR into these spaces can free analysts from being tethered to their laptops and phones, enabling them to privately view their personal data while sharing high-resolution data in multiple formats in the larger collaborative space. Further, integrating these modalities with voice and AI assistance will provide interesting research opportunities for immersive analytics spaces in which the room is an active participant in the exploration. While we wait for wearable technology to improve, immersive visualization rooms outfitted with state-of-the-art tracking let us prototype that future.
Though the original CAVE projects involved one to a few domain scientists, the trend over the years has been toward larger and larger groups, many featuring complementary expertise (Marai et al., 2019). Due to their cost and relative rarity, high-resolution immersive analytics spaces are increasingly used by multiple groups under either a time-share or a space-share model. Under space-share models, modern spaces should seamlessly accommodate large group meetings (30–50 people) and/or multiple simultaneous small-group meetings (5–10 people). Immersive analytics tasks require the ability to link representations together, quickly annotate, and brainstorm as a group, as though standing together at a whiteboard. Under time-share models, users should be able to quickly add new information to the conversation, save the state of a session and bring it back the next day or next year, and copy information from the public displays back to their personal displays.
In addition to building on its current research, EVL wants to aggressively pursue emerging opportunities. Big Data keeps getting bigger, driven by, for example, higher-resolution scientific instrumentation, Internet of Things sensor data (edge computing), drone data, 360-degree high-resolution photography and video, and autonomous-vehicle data. Meanwhile, R&D globalization keeps making the world smaller, so techniques to capture, store, move, process, and analyze information continue to grow in importance. EVL sees many opportunities and is delighted that recent CS faculty hires with expertise in visualization, computer vision, high-performance computing, and smart cities wish to be affiliated with EVL.
UIC's CS department will move into a new state-of-the-art building, to open in 2024. EVL recently received NSF funding to design and build DOCC (Data Observation and Computation Collaboratory), to be readily accessible to all CS faculty and students. Also, EVL's Continuum room will be replicated in the new building and used as a classroom for a variety of undergraduate and graduate courses.
UIC is one of the nation's most diverse public research universities and is a federally designated Minority-Serving Institution. And more students than ever before are enrolling in CS. Increased diversity brings new opportunities to apply lessons learned over the last 50 years to new fields of interest, with new ideas coming from these new students. On the flip side, more students mean larger class sizes, making it challenging to appropriately scale spaces like Continuum and software like SAGE. Of course, more students mean more faculty hires, presenting new opportunities for collaboration.
Fifty years of the Circle Graphics Habitat and EVL is quite a success story. Onward to the next 50 years!
Acknowledgments
Over the decades, EVL has received major funding from federal agencies (NSF, DOE, DOD, NIH, NASA, DOT, NEA), industry, and nonprofits (museums and foundations). EVL has also received major funding from Argonne National Laboratory, the State of Illinois, UIC, and Pacific Interface on behalf of NTT Network Innovation Laboratories in Japan. Any opinions, findings, and conclusions or recommendations expressed in this publication are those of the authors and do not necessarily reflect the views of the funding agencies and companies.
SAGE3 and DOCC, mentioned in this paper, currently receive major NSF funding, notably award 2320261 for DOCC, and awards 2004014, 2003800, and 2003387 for SAGE3.
SAGE, SAGE2, SAGE3, CAVE, CAVE2, ImmersaDesk, and ImmersaDesk2 are trademarks of the University of Illinois Board of Trustees.
PHSCologram is a trademark of (art)n Laboratory, Chicago, Illinois.
The copyright for all photos and diagrams in this paper is held by the Electronic Visualization Laboratory, University of Illinois Chicago.