Augmented Cognition

Recorded On: 10/05/2020

Designing an Augmented Reality Based Interface for Wearable Exoskeletons
Author(s): Chaitanya Kulkarni, Virginia Tech; Hsiang-Wen Hsing, Virginia Tech; Dina Kandil, Virginia Tech; Shriya Kommaraju, Virginia Tech; Nathan Lau, Virginia Tech; Divya Srinivasan, Virginia Tech
Abstract: Full-body powered wearable exoskeletons combine the capabilities of machines and humans to maximize productivity. Powered exoskeletons can ease industrial workers' manipulation of heavy loads in a manner that is difficult to automate. However, introducing exoskeletons may create unexpected work hazards due to the mismatch between user-intended and executed actions, which creates difficulties in sensing the physical operational envelope, a need for increased clearance, and maneuverability limitations. To control such hazards, this paper presents a rearview human-localization augmented reality (AR) platform to enhance exoskeleton users' spatial awareness of people behind them. The platform leverages a computer vision algorithm, Monocular 3D Pedestrian Localization and Uncertainty Estimation (MonoLoco), to identify humans and estimate their distance from a video camera feed, and off-the-shelf AR goggles to visualize the surroundings. Augmenting rearview awareness of humans can help exoskeleton users avoid accidental collisions that can lead to severe injuries.
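A minimal sketch of the kind of logic such a platform implies: turning estimated pedestrian distances into AR alert levels. The detection format, thresholds, and `classify_alert` function are illustrative assumptions, not the paper's actual interface or MonoLoco's API.

```python
# Hypothetical sketch: mapping rear-camera pedestrian distance estimates
# to AR alert levels. Thresholds and data layout are assumed, not from
# the paper.
from dataclasses import dataclass

@dataclass
class Detection:
    distance_m: float   # estimated distance from the rear camera
    angle_deg: float    # bearing relative to the camera axis

def classify_alert(det: Detection,
                   warn_m: float = 3.0,
                   danger_m: float = 1.5) -> str:
    """Map an estimated pedestrian distance to an AR alert level."""
    if det.distance_m <= danger_m:
        return "danger"   # e.g., flashing red overlay in the goggles
    if det.distance_m <= warn_m:
        return "warning"  # e.g., amber outline
    return "clear"

alerts = [classify_alert(Detection(d, 0.0)) for d in (0.8, 2.2, 5.0)]
print(alerts)  # ['danger', 'warning', 'clear']
```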

Detection and Mitigation of Inefficient Visual Searching
Author(s): Alex Kamrud, United States Air Force; Josh Gallaher; Brett Borghetti, Air Force Institute of Technology
Abstract: A commonly known cognitive bias is confirmation bias: the overweighting of evidence supporting a hypothesis and the underweighting of evidence countering that hypothesis. Due to high-stress and fast-paced operations, military decisions can be affected by confirmation bias. One military decision task prone to confirmation bias is visual search, in which the operator scans an environment to locate a specific target. If confirmation bias causes the operator to scan the wrong portion of the environment first, the search is inefficient. This study has two primary goals: 1) detect inefficient visual search using machine learning and electroencephalography (EEG) signals, and 2) apply various mitigation techniques in an effort to improve search efficiency. Early findings are presented showing how machine learning models can use EEG signals to detect when a person might be performing an inefficient visual search. Four mitigation techniques were evaluated: a nudge, which indirectly slows search speed; a hint on how to search efficiently; an explanation for why the participant was receiving a nudge; and instructions directing the participant to search efficiently. Evaluation of these techniques revealed the nudge and hint to be the most effective mitigations.
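The detection goal above can be sketched in miniature: classify trials from an EEG-derived feature. The feature (a single simulated band-power value) and the naive threshold rule are assumptions for illustration only; the abstract does not specify the authors' model or features.

```python
# Illustrative sketch (not the authors' pipeline): flagging trials that
# look like inefficient visual search from one synthetic EEG feature.
import numpy as np

rng = np.random.default_rng(0)
n_trials = 200
labels = rng.integers(0, 2, size=n_trials)        # 1 = inefficient search
# Simulate a feature that is slightly elevated on inefficient trials.
feature = rng.normal(loc=labels * 0.8, scale=1.0)

threshold = feature.mean()                        # naive decision rule
predictions = (feature > threshold).astype(int)
accuracy = (predictions == labels).mean()
print(round(accuracy, 3))  # above chance on this synthetic data
```

A real detector would replace the synthetic feature with multi-channel EEG band powers and the threshold with a trained classifier.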

Dynamic Causal Modeling of Gender Differences in Emotion: Implications for Augmented Cognition
Author(s): Jiali Huang, North Carolina State University; Chang Nam, North Carolina State University; Kristen Lindquist
Abstract: The goal of this study is to investigate the neural basis of gender differences in emotion processing. Electroencephalogram (EEG) signals were recorded while the same set of emotion-eliciting images was shown to male and female participants. Neural connections were estimated using Dynamic Causal Modeling (DCM), and results for the two genders were compared. We found that the dorsolateral prefrontal cortex exerts modulatory effects differently for males and females. These findings on gender differences in the neural mechanisms of emotion processing may be utilized in applications of the augmented cognition program.

Emotion Recognition with a CNN using Functional Connectivity-based EEG Features
Author(s): Chang Nam, North Carolina State University; Sanghyun Choo, North Carolina State University
Abstract: Emotion recognition plays a pivotal role in our lives since it directly affects decision making. To recognize emotion, power-based EEG image features have been used with a Convolutional Neural Network (CNN) classifier. However, power-based EEG features use spectral information without considering information flows between channels. To overcome this limitation, we propose CNN-based emotion recognition using a Functional Connectivity (FC)-based EEG feature that includes spatial, spectral, and temporal information. Forty-three participants engaged in an International Affective Picture System (IAPS)-based emotion experiment that included three emotions (fear, sad, neutral). The proposed framework was tested in two cases, within-subject and cross-subject: (1) binary-class (negative, neutral) and (2) multi-class (fear, sad, neutral), with FC-based EEG features computed for each frequency band (delta, theta, alpha, beta, gamma). The results showed that the CNN classifier using an FC-based EEG feature in the alpha band had the highest classification accuracy.
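One simple way to build a channel-by-channel FC feature is a correlation matrix per frequency band; this is a sketch under that assumption, since the abstract does not name the connectivity measure actually used.

```python
# A minimal sketch, assuming FC features are channel-by-channel Pearson
# correlation matrices computed on band-filtered EEG. The paper's actual
# connectivity measure may differ.
import numpy as np

def fc_matrix(eeg: np.ndarray) -> np.ndarray:
    """Functional connectivity via Pearson correlation.
    eeg: (channels, samples) band-filtered signal -> (channels, channels)."""
    return np.corrcoef(eeg)

rng = np.random.default_rng(1)
alpha_band = rng.normal(size=(32, 512))   # 32 channels, 512 samples
fc = fc_matrix(alpha_band)
# Stacking one such matrix per band yields an "image" a CNN can classify.
print(fc.shape)  # (32, 32)
```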

How Long Can a Driver (Safely) Glance at an Augmented-Reality Head-Up Display?
Author(s): Nayara De Oliveira Faria, Virginia Tech; Joseph Gabbard, Virginia Tech
Abstract: Augmented-reality (AR) head-up displays (HUDs) are a promising solution for reducing distraction while driving and performing secondary visual tasks; however, we currently do not know how to effectively evaluate interfaces in this area. In this study, we show that current visual distraction standards for evaluating in-vehicle displays may not be applicable to AR HUDs. We provide evidence that AR HUDs can afford longer glances with no decrement in driving performance. We propose that the selection of measurement methods for driver distraction research should be guided not only by the nature of the task under evaluation but also by the properties of the method itself.

More than Means: Characterizing Individual Differences in Pupillary Dilations
Author(s): Ciara Sibley, Naval Research Laboratory; Cyrus Foroughi, Naval Research Laboratory; Noelle Brown, Naval Research Laboratory; Henry Phillips, NAMI; Sabrina Drollinger; Michael Eagle; Joseph Coyne, Naval Research Laboratory
Abstract: This study sought to characterize individual differences in pupillary dilations during a simple cognitive task. Eighty-four Navy and Marine Corps student pilots performed a digit memory recall test while their pupillary data were recorded. Results showed that peak pupil sizes significantly increased with the difficulty of the memory task; however, variability in pupillary dilations was substantial, with only 51% of individuals' data corresponding with the aggregate results and dilations varying between participants by as much as 1 millimeter. The analyses presented in this paper illustrate the large variability that exists in pupil data between individuals, and even within individuals on a trial-by-trial basis. This work serves as a benchmark for understanding variability in pupillary dilations and encourages follow-on work to explore causal mechanisms of differences in pupil dilations across individuals, especially before using pupil data for applied purposes.
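A hedged sketch of the kind of per-participant measure described: each trace's peak pupil size relative to a pre-stimulus baseline, and the spread of those peaks across participants. All data below are synthetic; the baseline window and trace layout are assumptions, not the study's protocol.

```python
# Synthetic illustration of peak pupillary dilation and its
# between-subject spread; not the authors' actual analysis code.
import numpy as np

def peak_dilation(trace: np.ndarray, baseline_samples: int = 50) -> float:
    """Peak pupil size minus the pre-stimulus baseline mean (same units)."""
    baseline = trace[:baseline_samples].mean()
    return trace[baseline_samples:].max() - baseline

rng = np.random.default_rng(2)
peaks = []
for _ in range(84):                            # 84 simulated participants
    trace = 3.0 + rng.normal(0, 0.02, 300)     # ~3 mm baseline + noise
    trace[100:200] += rng.uniform(0.1, 1.0)    # individual response size
    peaks.append(peak_dilation(trace))
peaks = np.array(peaks)
print(round(peaks.max() - peaks.min(), 2))     # between-subject spread
```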
