StreamBED VR Habitat Training Design & Evaluation
My dissertation asks whether qualitative stream assessment skills can be taught to potential citizen scientists through virtual reality (VR). VR affords users a sense of telepresence, the illusion of actually "being" in a virtual world. Previous research suggests that VR can help learners develop spatial knowledge and train environmental judgment skills. The goal of this work was to consider whether VR can be employed to train citizen scientists to make qualitative stream assessments the way experts do, by giving them practice rating virtual stream habitats.
My research asks: (1) How can technology effectively train scientific qualitative judgment tasks? (2) Do multisensory cues help learners make habitat observations? (3) How can learning be scaffolded to support the expert assessment process?
To pursue these questions, I first gathered requirements through expert interviews and in-person training with expert monitors. Then, I designed an initial prototype of the StreamBED VR tool through an iterative process and conducted an initial evaluation. In response to my initial evaluation findings, I went through a divergent redesign process that considered different aspects of design and training. Finally, I designed the second prototype of StreamBED VR through a user-centered process and evaluated its value, with and without multisensory cues, against traditional PowerPoint training.
DOWNLOAD THE FULL THESIS HERE.
First, I observed differences in background and training between professional and volunteer monitors, using these experiences to synthesize findings about volunteer training needs.
Description of StreamBED 1.0
The training environment was developed in the Unity 5 game engine and integrated with the Oculus Rift head-mounted display (HMD). The environment was constructed using brushes, textures, assets, and prefabs found in the Unity Asset Store and online.
The initial StreamBED VR training taught users to make qualitative assessments of stream habitats through physical exploration. The goal of this study was to consider the viability of the initial training design and to understand novice learner needs. To accomplish this goal, the study taught participants who were not expert monitors to make and update qualitative assessments of 4 EPA protocol metrics. During training, participants first saw differences in habitat quality by experiencing an "optimal" and a "poor" habitat stream, then learned to calibrate assessments by evaluating and getting feedback on virtual streams exhibiting diverse quality characteristics.
During the study, participants navigated two tutorial environments: an optimal quality stream featuring an abundance of epifaunal substrate, a large riparian zone, no channel alteration, and stable banks; and a poor quality stream with no epifaunal substrate, a lack of a riparian zone, high channel alteration, and highly eroded, unstable banks. In addition to modeling these measures in the training, additional measures of stream quality were built in, including variability in pool depth, water clarity, and vegetation diversity; this was designed to make the experience realistic and to challenge participants to make assessments in the context of other factors.
My findings reveal that 1) the StreamBED VR training immersed and motivated study participants, inspiring them to 2) interact with the environment through storytelling and virtual surveying. However, pilot findings also revealed that 3) participants struggled with several facets of the training: they were distracted by the rendered training environment, had trouble interpreting protocol subjectivity, and anchored to extreme protocol judgments.
Divergent Re-design Process
The initial prototype revealed several challenges with the training and VR experience. In response, I redesigned the tool through a divergent, multi-step, user-centered process that considered different aspects of design and training needs. During the process, I created a list of questions for each component that I wanted to redesign, and generated 3 ideas for each. I used these ideas to construct 3 prototypes, and sketched low-fidelity mockups for each design. I received feedback on the mockups, then converged the components into one design that informed the final prototype.
The final prototype was designed for two participants: one navigating a 360 degree VR environment, and one observing, and making assessments on a desktop.
Before beginning the tutorial, participants received an overview of the Oculus controller and VR interactions. Then, the first VR participant put on the Oculus headset and opened the first tutorial scene from the “Menu” screen. Once the VR participant opened the first scene, they saw a map of the tutorial locations. Then, the VR participant closed the map to see the “immersive view” of the 360 degree stream habitat, and looked around the stream with their partner. After looking around, participants opened their camera and searched for highlighted areas in the camera viewfinder. These highlighted areas were either blue (representing stream areas) or yellow (representing bank areas).
Once participants found a highlighted area in the viewfinder, they took a “snapshot” of it, zooming in and out to center the area in the viewfinder. After the snapshot was taken, a set of keywords appeared on the left side of the viewfinder: stream keywords if the highlighted area was blue, and bank keywords if the area was yellow. Participants then worked together to decide which keywords best represented the highlighted area in the snapshot. For blue stream areas, participants had to decide if the area included any 1) snags and logs, 2) cobble, or 3) underwater vegetation. Similarly, for yellow bank areas, participants had to decide 1) whether the bank slope was relatively gentle or steep, 2) whether the bank was vegetated by grass, plants, or trees, and whether the bank had any 3) undercuts, 4) crumbling, or 5) exposed tree roots.
Together, participants discussed the tags, made a choice for each keyword category, and saved the tags. In the tutorial, participants received feedback on the tags they had chosen: if they chose the correct tags, the snapshot was saved; if they tagged the area incorrectly, the incorrect keywords turned red and participants were prompted to tag the snapshot again.
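The tag-feedback loop above can be sketched as follows. This is a minimal illustration with hypothetical names and invented example tags; the actual StreamBED tool was built in Unity, not Python:

```python
# Minimal sketch of the tutorial's tag-feedback loop (hypothetical names and
# data; the actual StreamBED training was implemented in Unity).

def check_tags(chosen, correct):
    """Save the snapshot only if the chosen tags exactly match the correct
    ones; otherwise return the tags that should turn red for a retry."""
    wrong = (chosen - correct) | (correct - chosen)  # symmetric difference
    return len(wrong) == 0, wrong

# Example: tagging a blue (stream) snapshot whose "correct" tags are assumed.
correct = {"snags and logs", "cobble"}

saved, wrong = check_tags({"snags and logs", "underwater vegetation"}, correct)
# not saved: the mistaken and missed tags are flagged for another attempt

saved, wrong = check_tags({"snags and logs", "cobble"}, correct)
# saved on the corrected attempt
```

The exact-match rule mirrors the tutorial's behavior of withholding the save until every keyword category is answered correctly.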
Once participants correctly tagged a highlighted area and saved the snapshot, the area glowed white (instead of blue or yellow), indicating it had been saved. When participants had tagged all of the highlighted areas in a 360 degree video, they moved to the next video by aiming at a hotspot on the screen. At the end of the last video, participants were directed to a summary screen showing how many times they had tagged each keyword.
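The summary screen's per-keyword tally can be sketched with a simple counter. The snapshot data below is invented for illustration; it is not drawn from the study:

```python
from collections import Counter

# Sketch of the end-of-tutorial summary tally (hypothetical snapshot data):
# each saved snapshot carries the keywords participants applied to it.
saved_snapshots = [
    {"snags and logs", "cobble"},
    {"cobble"},
    {"steep slope", "exposed tree roots"},
]

# Count how many times each keyword was tagged across all snapshots,
# as a summary screen might display it.
keyword_counts = Counter(tag for snap in saved_snapshots for tag in snap)
# e.g. keyword_counts["cobble"] == 2
```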
During the final study, pairs of participants worked together to assess a virtual stream habitat through a VR and desktop training application. First, participants learned about two Rapid Bioassessment Protocol (RBP) metrics, Epifaunal Substrate and Bank Stability. Then, participants completed a VR training tutorial, learning to identify and tag important areas of the stream habitat with appropriate keywords. After completing the tutorial, participants navigated a Maryland stream habitat in VR, taking snapshots and tagging highlighted areas of streams and banks. After the VR task, participants compared the snapshots they had taken to a series of reference images and made several assessments of the two metrics. After training, participants evaluated the metrics at an outdoor stream habitat on the University of Maryland campus.