Value of virtual affordances

Since January 2013, I’ve been working on a study seeking to develop sensor-based and sensor-free detectors of affect for GIFT, an intelligent tutoring system developed by the Army Research Laboratory (ARL). In this study, our central learning tool is the serious video game vMedic, which looks a bit like the commercial game “Call of Duty.” Unlike “Call of Duty,” however, vMedic is a training simulation that supports the warfighter’s learning of how to administer hemorrhage control and bleeding care while under fire in a combat zone.

One of the objectives of this game is to simulate the complications that often accompany administering care in a hostile environment. To accomplish this, the developers crafted the graphics, dialogue, and sound design to replicate a real combat environment. This replication is important for promoting the transfer of newly acquired skills and protocols to the real-world contexts in which warfighters must implement them.

Since the beginning of this project, I have thought the care-under-fire training would be significantly improved if the environment could be made more immersive. Using head-mounted gear like the Oculus Rift would go a long way toward supporting a more immersive environment, removing the distancing effect (Rigby & Ryan, 2011) between the participant and the game environment. It would also support greater attentional focus, eliminating the real-world interferences that can occur when one is just looking at a computer screen, e.g., the temptation to look around the room, check one’s phone, and so on.

However, implementing an Oculus Rift would also need to be accompanied by motion-sensing equipment, like a Wii or a Kinect. To replicate a real-world experience (Dalgarno & Lee, 2010), one would need to facilitate more natural gestures and movements rather than relying on a keyboard and mouse. This speaks directly to supporting authenticity in virtual worlds, where experiences in the fiction of VR worlds are consistent with our real-world experiences and understandings (Rigby & Ryan, 2011).

For the purposes of training warfighters for emergency response situations, a virtual world is superior to an augmented reality platform. The kind of emergency response training necessary to prepare warfighters is extraordinarily complex and costly to simulate in the real world, even with the assistance of a handheld device that could transform real-world simulations into an augmented reality training platform. The beauty of the immersive virtual environment is that once the VR program has been developed, it can be used again and again by any number of participants in a variety of locations around the world. Contrast that with an augmented reality experience, where some real-world setup would still be needed and the cost and complexity of execution remain major hindrances (Dunleavy, Dede & Mitchell, 2009). Additionally, an AR training experience that still relied on real-world setup would not be easily transportable to military bases around the world, even if a portion of it could be offloaded to a handheld or some other portable device.

The objective of vMedic is to ensure that, by the end of the training experience, warfighters are significantly better equipped and prepared to respond without hesitation in a medical crisis. Employing a VR design for this kind of crisis training would arguably go a long way toward supporting the depth of processing needed to master new procedural and domain skills in medical care, providing a sort of test run for a real-world crisis situation.

While this is not within the scope of our current study with ARL, it is something that I believe warrants further investigation and empirical analysis. Intuitively speaking, there is something very appealing about being able to acquire life-and-death skills without the additional stress of actually being responsible for the life or death of a real person. I know the chance to test-run potentially stressful and dire circumstances in my own life would be a most welcome experience.

Works Cited

Dalgarno, B., & Lee, M. J. W. (2010). What are the learning affordances of 3-D virtual environments? British Journal of Educational Technology, 41(1), 10-32. doi:10.1111/j.1467-8535.2009.01038.x

Dunleavy, M., Dede, C., & Mitchell, R. (2009). Affordances and limitations of immersive participatory augmented reality simulations for teaching and learning. Journal of Science Education & Technology, 18, 7-22.

Rigby, S., & Ryan, R. M. (2011). Immersion and presence. In Glued to games: How video games draw us in and hold us spellbound (Chapter 5).

3 thoughts on “Value of virtual affordances”

  1. Hi Jeanine, vMedic sounds quite interesting. I’d love to see a demo sometime if that is possible.

    Now for the hard part. With respect to our discussion of distractions, I’m wondering about the potential risks of complacency in a battlefield environment like the one simulated in vMedic. When I play a game like “World of Tanks,” I take much more risk with my tank (avatar) than I expect would be wise if I were really on a battlefield and “playing for keeps.”

    Could you share whether or how vMedic provides any affordances to scaffold the constant situation awareness that I expect is critical: awareness not only of the state of the patient, but of the state of the battlefield and what options are available for care and/or extraction?

    I can certainly see lots of potential for a tool like vMedic. But like low-fidelity flight simulators that don’t properly model the loss of control possible when a pilot isn’t paying attention to airspeed and coordinated flight, there is always the chance the sim can foster bad habits if careful attention to details, like keeping your head down while attending to a patient, is not integrated into the sim.

    thoughts?

    J

    • Hello, John.

      Here is a link to a demo from 2011 on vMedic: https://www.youtube.com/watch?v=pXFNMGPAiEk

      With regard to the issue of complacency, I don’t know that it is the central concern for this type of serious game, as opposed to sustaining engagement and motivation. Perhaps we mean the same thing, but I’ve not seen the idea of complacency in any of the literature on serious games.

      But with regard to sustaining engagement and motivation, in vMedic this is accomplished through the design of the game (varying levels of difficulty and skill acquisition mastery) as well as by integrating an embedded pedagogical agent/tutor into the system. The function of the EPA/tutor, which we are still developing, is to provide feedback on actions taken, help focus the learner on acquiring the necessary domain knowledge and procedural skills, and mitigate any complacency, gaming of the system, or WTF (“without thinking fastidiously”) behavior. Gaming the system means exploiting properties of the system’s help and feedback rather than attempting to learn the material (Baker et al., 2009). WTF behavior is characterized as extreme off-task behavior in a serious game environment.

      EPA/tutors in a serious game environment intervene by way of feedback messages when the system detects off-task behavior, WTF behavior, confusion, boredom, or frustration. For vMedic, we are building sensor-based and sensor-free affect detectors that would identify specifically when a participant is frustrated. The EPA/tutor would then deliver a motivational feedback message encouraging the participant to push through their frustration and continue engaging with the game until they have mastered the educational objectives at each level. I’m presently examining how the design of a feedback message can accomplish this objective. This work is ongoing, with a planned study to evaluate the effectiveness of these feedback messages in September 2015.
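      To make the detect-then-respond loop concrete, here is a minimal, purely illustrative Python sketch; the function names, threshold, and message text are my own hypothetical stand-ins, not the actual GIFT/vMedic implementation.

          # Illustrative sketch only: names, threshold, and message text below are
          # hypothetical stand-ins, not the actual GIFT/vMedic code.

          FRUSTRATION_THRESHOLD = 0.7  # assumed probability cutoff for intervening

          def estimate_frustration(interaction_log):
              """Toy stand-in for a sensor-free affect detector.

              Returns a rough probability in [0, 1] that the learner is frustrated,
              here approximated by the failure rate over the last ten actions.
              A real detector would use a trained model over richer features.
              """
              recent = interaction_log[-10:]
              if not recent:
                  return 0.0
              return sum(1 for a in recent if not a["success"]) / len(recent)

          def tutor_feedback(frustration):
              """EPA/tutor policy: deliver a motivational message when frustration is high."""
              if frustration >= FRUSTRATION_THRESHOLD:
                  return ("This scenario is tough. Take a breath, review the "
                          "hemorrhage-control steps, and try again.")
              return None  # no intervention needed

          # Example usage with a fake interaction log of repeated failed actions
          log = [{"action": "apply_tourniquet", "success": False} for _ in range(8)]
          message = tutor_feedback(estimate_frustration(log))
          if message:
              print(message)

      The point of the sketch is simply the division of labor: a detector turns the interaction (and, in the sensor-based case, physiological signals) into an affect estimate, and the EPA/tutor decides whether and how to intervene with a feedback message.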

      So the short answer to your question is: the use of an embedded tutor that provides feedback messages responding to the participant’s actions and affect is the affordance that scaffolds and supports constant situation awareness.

      — jeanine

      —-
      Works cited

      Baker, R., Walonoski, J., Heffernan, N., Roll, I., Corbett, A., & Koedinger, K. (2009). Why students engage in “gaming the system” behavior in interactive learning environments. Journal of Interactive Learning Research, 19(2), 185-224.

  2. I think what John was referring to with the issue of complacency is taking high risks as a player in a game that you would not take in real life, because such actions carry no real consequences. For instance, when driving a car in a game, I would be okay with having an accident just to see what happens, whereas people don’t usually think that way in daily life. In vMedic, with the tutors and the feedback system, it seems such behavior is discouraged if not blocked.

    It would be great to conduct empirical studies comparing the effectiveness of vMedic’s virtual training with a live-action role-playing scenario (perhaps facilitated by AR) for transfer of skills and learning of conceptual and procedural knowledge. Keep us updated.
