Since January 2013, I've been working on a study seeking to develop sensor-based and sensor-free detectors of affect for the intelligent tutoring system GIFT, developed by the Army Research Lab (ARL). In this study, our central learning tool is the serious video game vMedic, which is a bit like the commercial game "Call of Duty." Unlike "Call of Duty," vMedic is a training simulation that teaches warfighters how to administer hemorrhage control and bleeding care while under fire in a combat zone.
One of the objectives of the game is to simulate the complications that often accompany administering care in a hostile environment. To accomplish this, the developers designed the graphics, dialogue, and sound to replicate a real combat environment. This fidelity is important for promoting the transfer of newly acquired skills and protocols to the real-world contexts in which warfighters must implement them.
Since the beginning of this project, I have thought the care-under-fire training would be significantly improved by a more immersive environment. A head-mounted display like the Oculus Rift would go a long way toward supporting immersion, removing the distancing effect (Rigby & Ryan, 2011) between the participant and the game environment. It would also support greater attentional focus by eliminating the real-world interferences that can occur when one is just looking at a computer screen, e.g., the temptation to look around the room, check one's phone, etc.
However, implementing an Oculus Rift would also need to be accompanied by motion-sensing equipment, like a Wii Remote or a Kinect. To replicate a real-world experience (Dalgarno & Lee, 2010), one would need to facilitate natural gestures and movements rather than keyboard-and-mouse input. This speaks directly to supporting authenticity in virtual worlds, where experiences in the fiction of the VR world are consistent with our real-world experiences and understandings (Rigby & Ryan, 2011).
For the purposes of training warfighters for emergency response situations, a virtual world is superior to an augmented reality platform. The kind of emergency response training necessary to prepare warfighters is extraordinarily complex and costly to simulate in the real world, even with the assistance of a handheld device that could transform real-world simulations into an augmented reality training platform. The beauty of the immersive virtual environment is that once the VR program has been developed, it can be used over and over by any number of participants in a variety of locations around the world. Contrast that with an augmented reality experience, where some real-world setup would still be needed, and where cost and complexity of execution remain major hindrances (Dunleavy, Dede, & Mitchell, 2009). Additionally, an AR training experience that still relied on some real-world setup would not be easily transportable to military bases around the world, even if a portion of it could be offloaded to a handheld or some other portable device.
The objective of vMedic is to ensure that, by the end of the training experience, warfighters are significantly better equipped and prepared to respond without hesitation in a medical crisis. Employing a VR design in training for these kinds of crisis situations would arguably go a long way toward supporting the depth of processing needed to master new procedural and domain skills in medical care, providing a sort of test run for a real-world crisis.
While this falls outside the scope of our current study with ARL, it is something I believe warrants further investigation and empirical analysis. Intuitively speaking, there is something very appealing about being able to acquire life-and-death skills without the additional stress of actually being responsible for the life or death of a real person. I know that the ability to test run potentially stressful and dire circumstances in my own life would be a most welcome experience.
Dalgarno, B., & Lee, M. J. W. (2010). What are the learning affordances of 3-D virtual environments? British Journal of Educational Technology, 41(1), 10-32. doi: 10.1111/j.1467-8535.2009.01038.x
Dunleavy, M., Dede, C., & Mitchell, R. (2009). Affordances and limitations of immersive participatory augmented reality simulations for teaching and learning. Journal of Science Education and Technology, 18(1), 7-22.
Rigby, S., & Ryan, R. M. (2011). Immersion and presence (Chapter 5). In Glued to games: How video games draw us in and hold us spellbound.