"From Immersion to Acquisition: An Overview of Virtual Reality for Time-Based Media Conservators" was an informative presentation given at the 2019 Annual Meeting by Mark Hellar, a technology consultant for cultural institutions and owner of Hellar Studios LLC, and Savannah Campbell, Media Preservation Specialist at the Whitney Museum of American Art. Mark and Savannah have both have done extensive applied research on virtual reality (VR). Mark has focused on web-based VR platforms and their potential application in a museum context, while Savannah's master's thesis for NYU's Moving Image Archiving and Preservation program examined the challenges of archiving and accessing VR. The resources that they shared will be a touchstone for time-based media conservators, collection managers, and technical staff as artists increasingly use VR in their practices.
Starting with an overview of the hardware, software, and content types used in VR artworks, Savannah talked about the different ways that VR can be displayed. There are many types of the familiar VR headset, or head-mounted display (HMD). These were broken down into four broad categories based on their dependencies: Mobile VR requires a cell phone, Standalone VR is a self-contained device, VR Systems require a computer for high-resolution, immersive experiences, and Console VR works with video game consoles such as PlayStation and Nintendo Switch. HMDs can be further categorized according to their passive or interactive and immersive features, which depend on "degrees of freedom." Three degrees of freedom (3DoF) allow a viewer to look around, for example within a 360-degree video. Six degrees of freedom (6DoF) use external sensors to gauge a viewer's position in space, allowing them to navigate the virtual space by moving around in the real world.
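To make the "degrees of freedom" distinction concrete, here is a minimal sketch (not drawn from any real VR SDK) of the tracking data behind it: a 3DoF headset reports only orientation, while a 6DoF system adds position.

```python
from dataclasses import dataclass

@dataclass
class Pose:
    """Illustrative head-tracking pose; field names are assumptions."""
    # 3DoF: orientation only -- the viewer can look around from a fixed point.
    yaw: float = 0.0    # turn left/right, in degrees
    pitch: float = 0.0  # look up/down
    roll: float = 0.0   # tilt the head
    # 6DoF adds position, so the viewer can also walk through the space.
    x: float = 0.0
    y: float = 0.0
    z: float = 0.0

def degrees_of_freedom(has_positional_tracking: bool) -> int:
    """3DoF headsets track rotation only; 6DoF systems also track position."""
    return 6 if has_positional_tracking else 3
```

In this framing, a 360-degree video only ever consumes the three rotation fields, which is why it works on 3DoF hardware.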
Interactive VR content that is rendered in real time (i.e., not predetermined content played back in an immersive way) is typically run from executable files stored on a device, whether it is a Standalone VR HMD or a VR System connected to a computer. When considering this type of VR for exhibition or acquisition, one should be aware that it probably includes packaged assets such as 3D models, audio, and video files, and might require external dependencies like a particular software game engine, graphics libraries, a computer with minimum CPU and GPU capabilities, and peripherals like controllers or tracking sensors that are compatible with the VR system.
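One practical first step when assessing such a package is simply taking stock of what asset types it contains. The short sketch below walks an acquired build directory and tallies files by category; the extension groups are assumptions for illustration, since real builds often use engine-specific formats as well.

```python
from collections import Counter
from pathlib import Path

# Hypothetical extension groupings -- a real VR build may also contain
# engine-specific asset bundles that need engine documentation to unpack.
ASSET_TYPES = {
    "3D models": {".fbx", ".obj", ".gltf", ".glb"},
    "audio": {".wav", ".ogg", ".mp3"},
    "video": {".mp4", ".mov", ".webm"},
    "executables": {".exe", ".apk"},
}

def inventory(root: str) -> Counter:
    """Count files in each asset category under a VR build directory."""
    counts = Counter()
    for path in Path(root).rglob("*"):
        if path.is_file():
            for category, exts in ASSET_TYPES.items():
                if path.suffix.lower() in exts:
                    counts[category] += 1
    return counts
```

An inventory like this will not reveal software dependencies such as graphics libraries or minimum GPU requirements, but it gives a quick picture of what media the work bundles.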
Expanding on the packaged assets and game engine, Mark and Savannah pointed out that acquiring VR content should include any proprietary software in the version with which the VR project was made, along with project files containing uncompiled source code. Unity is a very popular proprietary program for VR creation, and Blender is an open source 3D modeling software with a VR plug-in. It's also important to know what programming language the project was written in: Unity uses C#, the Unreal engine uses C++, and Blender uses Python. The compiled executable files that are run on the Standalone VR device or host computer are typically an EXE (.exe) or APK (.apk) file.
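Those two executable formats can be told apart even without their file extensions, which is handy when documenting a poorly labeled acquisition. A Windows EXE begins with the bytes "MZ", and an APK is really a ZIP archive (beginning with "PK\x03\x04"). A minimal sketch of that check:

```python
def identify_build(path: str) -> str:
    """Guess the build format from the file's first bytes (magic numbers).

    EXE files start with b"MZ"; APKs are ZIP archives starting with
    b"PK\\x03\\x04". This only identifies the container, not the engine.
    """
    with open(path, "rb") as f:
        magic = f.read(4)
    if magic[:2] == b"MZ":
        return "Windows executable (EXE)"
    if magic == b"PK\x03\x04":
        return "ZIP-based package (APK is a ZIP archive)"
    return "unknown"
```

Because an APK is a ZIP, it can also be opened with ordinary archive tools to inspect its packaged assets.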
A further consideration is interoperability between VR hardware and content. A recent development is OpenXR, a free cross-platform standard that detects a headset's features and conforms the VR content to that device. However, earlier VR projects may predate OpenXR, so it is important to document which specific hardware a work is compatible with.
Another standard for VR content is WebXR, which supports development of VR and AR experiences on the web, rather than on hardware systems. The programming is done in web-native technologies like JavaScript, HTML Canvas, and WebGL, and the web browser executes the code. Libraries such as Babylon, A-Frame, and three.js are easily incorporated into HTML code for quick and easy development, aided by optimized file formats such as glTF (GL Transmission Format), described as the "JPEG of 3D." Mark shared a documentation project at SFMOMA that used the source files from a 3D-printed architectural model to create a web-based VR scene. Such a resource can provide access to objects in the museum's collection for examination by curatorial and conservation staff, especially when the physical iteration is replaceable or meant to degrade over time.
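One appraisal-friendly property of glTF is that its scene description is plain JSON, so basic facts about a model can be read without any 3D software. The sketch below parses a minimal, hand-written glTF 2.0 document; real exports carry much more (buffers, materials, textures), and the mesh names here are invented for illustration.

```python
import json

# A minimal, illustrative glTF 2.0 document. The "asset" block with a
# version string is required by the format; mesh names are made up.
GLTF_DOC = """{
  "asset": {"version": "2.0", "generator": "example-exporter"},
  "meshes": [{"name": "gallery_wall"}, {"name": "pedestal"}]
}"""

def summarize_gltf(text: str) -> dict:
    """Pull a few preservation-relevant facts out of a glTF JSON document."""
    doc = json.loads(text)
    return {
        "gltf_version": doc["asset"]["version"],
        "mesh_names": [m.get("name", "(unnamed)") for m in doc.get("meshes", [])],
    }
```

Being able to read format metadata this directly is part of why glTF works well for web delivery and, potentially, for documentation workflows like the SFMOMA project described above.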
360-degree video uses codecs and containers similar to regular video files, but there are some key differences that are good to know when assessing and displaying them. 360 video is natively spherical, but is stored flat in a video file. Just like a map of the Earth, translating a spherical form to a plane requires some method of projection mapping, which creates distortion when the image is viewed flat. One method is equirectangular projection, which is like unrolling the sphere and flattening it out. For video, this creates unequal pixel distribution, with areas of higher and lower image quality. Another common method is a cube map, where the sphere is transformed into a cube and the six square faces are rearranged into a rectangle. The corners of the squares are somewhat distorted, but overall there is less impact on image quality. A media player that can handle 360 video decodes these flat mappings back into a sphere for playback, so the display appears undistorted to the viewer.
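The equirectangular mapping and its uneven pixel distribution can both be expressed in a few lines of math. In an equirectangular frame, yaw maps linearly to the horizontal axis and pitch to the vertical axis; but because every row of pixels still spans the full frame width while the circle it represents shrinks toward the poles, rows away from the equator are over-sampled by a factor of 1/cos(pitch). A small sketch:

```python
import math

def direction_to_equirect(yaw_deg: float, pitch_deg: float,
                          width: int, height: int) -> tuple:
    """Map a viewing direction on the sphere (yaw -180..180, pitch -90..90)
    to (x, y) pixel coordinates in a width-by-height equirectangular frame."""
    x = (yaw_deg + 180.0) / 360.0 * width
    y = (90.0 - pitch_deg) / 180.0 * height
    return (x, y)

def horizontal_stretch(pitch_deg: float) -> float:
    """Over-sampling factor of a pixel row relative to the equator.

    Each row gets the full frame width, but the circle of latitude it
    represents has circumference proportional to cos(pitch)."""
    return 1.0 / math.cos(math.radians(pitch_deg))
```

At 60 degrees above the horizon the stretch factor is already 2x, which is why equirectangular 360 video wastes resolution near the poles while cube maps spread quality more evenly.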