Augmented Reality in a Human Factors World
Recorded On: 10/06/2020
We Didn't Catch That! Using Voice Text Input on a Mixed-Reality Headset in Noisy Environments
Author(s): Jessyca Derby, Embry-Riddle Aeronautical University; Emily Rickel, Embry-Riddle Aeronautical University; Kelly Harris, Embry-Riddle Aeronautical University; Jade Lovell, Embry-Riddle Aeronautical University; Barbara Chaparro, Embry-Riddle Aeronautical University
Abstract: The Microsoft HoloLens, a mixed reality head-mounted display (HMD), has been demonstrated in domains such as medicine, engineering, and manufacturing. In order to interact with the device, voice input may be required. Given this range of environments, it is necessary to understand the impact of noise on voice dictation speed and accuracy. In this study, we evaluated the dictation feature of the HoloLens in terms of speed (words per minute, WPM), accuracy (word error rate, WER), perceived workload, and perceived usability at three different noise levels: 40 dB, 55 dB, and 70 dB. No differences were found across noise levels in speed (67-75 WPM) or perceived workload. Accuracy and perceived usability worsened in the 70 dB noise condition, and only 37.5% of participants were able to successfully dictate in that condition. This study shows that if the HoloLens is to be accepted in environments with high noise levels, improvements to dictation need to be made.
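The speed and accuracy metrics named in this abstract are standard dictation measures. As a minimal illustration (not the authors' actual scoring code), WER can be computed with a word-level edit distance and WPM from word count and elapsed time:

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """WER = (substitutions + insertions + deletions) / reference word count,
    computed via word-level Levenshtein distance."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,          # deletion
                           dp[i][j - 1] + 1,          # insertion
                           dp[i - 1][j - 1] + cost)   # substitution / match
    return dp[-1][-1] / len(ref)

def words_per_minute(word_count: int, seconds: float) -> float:
    """Dictation speed in words per minute."""
    return word_count * 60.0 / seconds
```

For example, dictating "we did catch cat" against the reference "we did not catch that" yields one deletion and one substitution, so WER = 2/5 = 0.4.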
Can Augmented Reality Assist Data Entry Task? A Preliminary Study
Author(s): Taylor Huynh, University of Illinois at Chicago; Myunghee Kim, University of Illinois at Chicago; Andrew Johnson, University of Illinois at Chicago; Heejin Jeong, University of Illinois at Chicago; Ankit Singh, University of Illinois at Chicago
Abstract: Data entry is one of the most common and essential tasks in workplaces. Almost every industry has a data entry department responsible for entering data into the database of the company's systems. Data entry operators are required to input data from handwritten paper into computer systems using a keyboard and a mouse. The repetitive nature of the job, shifting the gaze between the paper and the computer screen, induces fatigue in operators. A study has shown that prolonged work creates cognitive fatigue that affects the cognitive components of the data entry process (Healy et al., 2004). The goal of the project described in this paper is to explore various methods and interfaces for data entry. We report the evaluation results of a data presentation interface introduced in Jeong et al. (2020). This interface was developed using a wearable augmented reality (AR) heads-up display. In the current study, the interface was compared with two other interfaces for data presentation: an extra monitor and handwritten paper (as a baseline). Eighteen participants were asked to enter the information displayed to them on a separate stand-alone laptop using a keyboard. The participants were told that their performance would be judged on two parameters: (1) how quickly they completed their tasks (time required to complete the task) and (2) how many errors they committed while typing the information into the laptop; better task performance is reflected by lower values on both. The participants were presented with the interfaces in random order to minimize bias, and were asked to fill out a NASA-TLX survey and a post-task survey for subjective evaluations of helpfulness, preference, and ease of use. AR did not perform as well for participants as we expected.
Participants experienced various difficulties with the current AR interface, as it was a novel method of data presentation and participants were more comfortable with interfaces they had more experience with. An interesting takeaway from this study is that the AR device performed as well as the conventional paper-based data presentation method. After conducting this study, it was inferred that an AR device could potentially be a good data presentation interface with slight adjustments to its weight and field of view, as suggested by the participants.
Effect of Head-Mounted Augmented Reality Devices on Electric Utility Manhole Workers: Neck Muscle Activity and Eye Blink Rate
Author(s): Ashley Toll, Milwaukee Tool; Richard Marklin, Marquette University; Eric Bauman, Electric Power Research Institute; John Simmons, Alfred University
Abstract: Two head-mounted augmented reality (AR) systems, the Microsoft HoloLens and the RealWear HMT-1, were tested to determine their effect on blink rate and on activity of the neck and shoulder muscles of electric utility manhole workers. The task of splicing a cable was performed under three conditions: HoloLens, HMT-1, and No AR (normal). Surface electromyography (sEMG) of the right and left sternocleidomastoid, splenius, semispinalis capitis, and upper trapezius muscles was measured on 13 manhole workers, and a small camera recorded the blink rate of the right eye. Results revealed, in general, no significant differences in 50th and 90th percentile sEMG between the three conditions. There was no difference in blink rate between the HMT-1 and No AR conditions, but the HoloLens blink rate was 7.8 to 11 blinks/min lower than the HMT-1 for two of the three tasks. A decrease in blink rate of this magnitude may indicate a risk of eye strain for manhole workers who use an optical see-through (OST) AR device without appropriate rest breaks. Head-mounted AR devices deployed for underground utility workers warrant further study.
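The 50th and 90th percentile sEMG values reported here come from amplitude probability distribution function (APDF) analysis of the muscle-activity signal. A minimal sketch of the percentile step, assuming rectified amplitude samples as input (real pipelines typically also RMS-smooth and normalize to a reference contraction, and this is not the study's actual processing code):

```python
def apdf_percentile(emg_samples, pct):
    """Amplitude at or below which pct% of rectified sEMG samples fall.
    The 50th percentile indexes median (sustained) load; the 90th, peak load.
    Uses a simple nearest-rank rule on the sorted rectified signal."""
    rectified = sorted(abs(x) for x in emg_samples)
    k = round(pct / 100 * (len(rectified) - 1))  # nearest-rank index
    return rectified[k]
```

Comparing the 50th and 90th percentile amplitudes across the HoloLens, HMT-1, and No AR conditions then reduces to comparing these summary values per muscle and condition.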
Investigating a Virtual Reality-based Emergency Response Scenario and Intelligent User Interface for First Responders
Author(s): Randall Spain, Center for Educational Informatics, NCSU; Jason Saville, Center for Educational Informatics, NCSU; Barry Liu, NCSU; Donia Slack, RTI International; Edward Hill, RTI International; John Holloway, RTI International; Sarah Norsworthy, RTI International; Bradford Mott, Center for Educational Informatics, NCSU; James Lester, Center for Educational Informatics, NCSU
Abstract: Virtual reality offers new opportunities to develop and test technology for first responders. Because advances in broadband capabilities will soon allow first responders to access and use many forms of data, it is critically important to design head-mounted displays that present information in a manner that does not induce extraneous mental workload or cause undue system interaction errors. In this paper, we describe the development of a virtual reality-based emergency response scenario designed to support user experience research for evaluating the efficacy of intelligent user interfaces for firefighters. We describe the results of a usability test that captured firefighters' feedback and reactions to the VR scenario and a prototype intelligent user interface that presented task-critical information through the VR headset, and conclude with lessons learned from our development process and plans for future research.
Predicting User Performance in Augmented Reality User Interfaces with Image Analysis Algorithms
Author(s): Jonathan Flittner, Virginia Polytechnic Institute and State University; John Luksas, Virginia Polytechnic Institute and State University; Joseph Gabbard, Virginia Tech
Abstract: This study applies existing image analysis measures of visual clutter to augmented reality user interfaces and explores the effect of other factors on performance: virtual object percentage, target object type (real or virtual), and target object clutter. The end goal of this research is to develop an algorithm capable of predicting user performance. Results show significant differences in response time between clutter levels and between virtual object percentages, but not between target types. Participants consistently had more difficulty finding objects in more cluttered scenes, where clutter was determined through image analysis methods, and had more difficulty finding virtual objects when the search area was 50% virtual as opposed to other scenarios. Response time correlated positively with clutter measures of the combined (virtual and real) arrays, but not with clutter measures of the individual array components (virtual or real), and correlated positively with the clutter scores of the target objects themselves.
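Image-analysis clutter measures of the kind this abstract references (e.g., feature congestion or subband entropy) are fairly involved; as a loose, simplified illustration of the idea only (not the study's actual measure), an edge-density proxy scores a scene by the fraction of pixels with strong local intensity gradients:

```python
def edge_density(image, thresh=10):
    """Crude clutter proxy: fraction of pixels whose horizontal or vertical
    intensity gradient exceeds `thresh`. `image` is a 2-D list of grayscale
    values; busier scenes produce more strong gradients, hence higher scores."""
    h, w = len(image), len(image[0])
    edges = 0
    for y in range(h):
        for x in range(w):
            gx = image[y][x + 1] - image[y][x] if x + 1 < w else 0
            gy = image[y + 1][x] - image[y][x] if y + 1 < h else 0
            if abs(gx) > thresh or abs(gy) > thresh:
                edges += 1
    return edges / (h * w)
```

A uniform scene scores 0; a scene with one vertical edge in a 4x4 patch scores 0.25. The study's correlation finding would then amount to relating such per-scene (and per-target) scores to observed response times.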
The Effects of Target Sizes on Biomechanical Exposures and Perceived Workload during Virtual and Augmented Reality Interactions
Author(s): Kiana Kia, Oregon State University; Nizam Hakim MD Ishak, Oregon State University; Jaejin Hwang, Northern Illinois University; Jeong Ho Kim, Oregon State University
Abstract: This repeated-measures laboratory study evaluated and compared muscle activity and postures of the neck and right shoulder, as well as NASA-TLX perceived workload, while 12 participants performed standardized virtual reality (VR) and augmented reality (AR) tasks (omni-directional pointing, square coloring, and 3-dimensional cube placement) with three different target sizes. The results showed that AR/VR interactions posed relatively high neck/shoulder muscle activity and shoulder flexion, in line with moderate-to-high perceived workload (i.e., physical demand and effort measures). The results also showed that target size affected these biomechanical and perceived workload measures, to a different degree between the VR and AR tests. These results indicate that prolonged VR/AR interactions may increase the risk of musculoskeletal discomfort in the neck and shoulders. Lastly, target size may be an important design factor for AR/VR interfaces to reduce potential neck and shoulder strain.
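Several abstracts in this session report NASA-TLX workload. The instrument is scored either with per-subscale weights from pairwise comparisons or as the unweighted mean of its six subscales; a minimal sketch of the raw (RTLX) variant, since the abstracts do not state which variant was used:

```python
def raw_tlx(mental, physical, temporal, performance, effort, frustration):
    """Raw NASA-TLX (RTLX): unweighted mean of the six subscale ratings,
    each on a 0-100 scale. The full NASA-TLX additionally weights each
    subscale using 15 pairwise comparisons; RTLX is a common shortcut."""
    scores = (mental, physical, temporal, performance, effort, frustration)
    return sum(scores) / len(scores)
```

For example, ratings of (60, 70, 50, 40, 65, 55) give an overall workload of about 56.7, which would fall in the moderate-to-high range these studies describe.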