The information that decision-makers use for command and control is inherently uncertain. Previous work has described different types of uncertainty, and the methods for using information to evaluate and rank alternative courses of action vary based on the type of uncertainty present. Thus, when developing ways to generate automated decisions to support Soldier tactical planning, including multi-criteria decision making (MCDM) with different types (“modalities”) of uncertain information, no single method or algorithm will be optimal for all situations. Metareasoning is reasoning about reasoning, a type of self-adaptation; it has been studied closely in AI and in logic because of its relevance to autonomous decision making, and it is also of interest in cognitive science under the rubric of executive reasoning. A software agent or autonomous system can use metareasoning to monitor and control the procedures that it uses to process sensor information, evaluate potential courses of action, and plan its actions. This concept paper presents a metareasoning framework that can enhance artificial reasoning about uncertain information in the context of generating and ranking alternative courses of action. In this framework, decision support agents use rules to select the MCDM algorithms that are most appropriate for the type of information and the uncertainty modalities that are present. The rules may be curated by experts or generated by machine learning algorithms. We expect that metareasoning will improve the ability to make complicated decisions with uncertain information.
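As a minimal sketch of the selection step in such a framework, the Python fragment below maps an uncertainty modality to an MCDM ranking procedure through an expert-curated rule table; the modality labels, scoring conventions, and function names are illustrative assumptions, not the paper's implementation.

    # Hypothetical rule-based metareasoner (names are illustrative): the
    # uncertainty modality of the input determines which MCDM procedure
    # ranks the candidate courses of action.

    def weighted_sum(alternatives, weights):
        # Deterministic scores: rank by the weighted sum of criterion values.
        return sorted(alternatives,
                      key=lambda a: -sum(w * s for w, s in zip(weights, a["scores"])))

    def expected_value(alternatives, weights):
        # Probabilistic scores: each criterion value is a (value, probability) pair.
        return sorted(alternatives,
                      key=lambda a: -sum(w * v * p for w, (v, p) in zip(weights, a["scores"])))

    def maximin(alternatives, weights):
        # Interval (bounded-ignorance) scores: rank by the worst-case lower bound.
        return sorted(alternatives, key=lambda a: -min(lo for lo, hi in a["scores"]))

    RULES = {  # expert-curated mapping: uncertainty modality -> MCDM method
        "deterministic": weighted_sum,
        "probabilistic": expected_value,
        "interval": maximin,
    }

    def metareason(alternatives, weights, modality):
        # Object-level ranking is delegated to whichever method the rules select.
        return RULES[modality](alternatives, weights)

A learned variant would replace the fixed RULES table with a classifier that predicts, from features of the decision problem, which method is likely to perform best.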
A barrier to developing novel AI for complex reasoning is the lack of appropriate wargaming platforms for training and evaluating AIs in multiplayer settings that combine collaborative and adversarial reasoning under uncertainty with game theory and deception. An appropriate platform has several key requirements, including flexible scenario design and exploration, extensibility across all five elements of Multi-Domain Operations (MDO), and support for human-human and human-AI collaborative reasoning and data collection, to aid the development of AI reasoning and the warfighter-machine interface. Here, we describe the ARL Battlespace testbed, which fulfills these requirements for AI development, training, and evaluation. ARL Battlespace is offered as an open source software platform (https://github.com/USArmyResearchLab/ARL_Battlespace). We present several example scenarios implemented in ARL Battlespace that illustrate different kinds of complex reasoning for AI development. We focus on ‘gap’ scenarios that simulate bridgehead and crossing tactics, and we highlight how they address key platform requirements, including coordinated MDO actions, game theory, and deception. We describe the process of reward shaping for these scenarios, which incentivizes an agent to perform command and control (C2) tasks informed by human commanders’ courses of action, as well as the key challenges that arise. The intuition presented will enable AI researchers to develop agents that provide optimal policies for complex scenarios.
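To make the reward-shaping discussion concrete, the sketch below applies standard potential-based shaping to a hypothetical ‘gap’ scenario state; the state fields, weights, and distance metric are assumptions for illustration and are not part of the ARL Battlespace API.

    # Illustrative potential-based reward shaping for a hypothetical 'gap'
    # scenario; the state fields, constants, and distance metric are assumed.

    def manhattan(unit, goal):
        return abs(unit.x - goal[0]) + abs(unit.y - goal[1])

    def potential(state):
        # Higher potential when the bridgehead is held and friendly units are
        # closer to the far-side objective, encoding a commander's course of action.
        dist = sum(manhattan(u, state.objective) for u in state.friendly_units)
        return 10.0 * state.bridgehead_held - 0.1 * dist

    def shaped_reward(prev_state, state, base_reward, gamma=0.99):
        # Potential-based shaping (Ng et al., 1999) adds gamma*phi(s') - phi(s)
        # to the sparse win/loss reward without changing the optimal policy.
        return base_reward + gamma * potential(state) - potential(prev_state)

Because potential-based shaping leaves the optimal policy unchanged, a commander-informed potential of this kind only steers exploration toward seizing and crossing the bridgehead rather than redefining the scenario objective.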
Previous research has shown that multiple luminance normalization mechanisms are engaged when viewing scenes with high dynamic range (HDR) luminance. In one such phenomenon, areas of similar luminance contextually facilitate the perception of ambiguous textures. Drawing inspiration from biological circuitry, we developed a recurrent spiking neural network (SNN) that reproduces experimental results of contextual facilitation in HDR images. The network uses correlations between luminance and texture to correctly classify and segment ambiguous textures in images. While deep neural networks can successfully perform many types of image analysis, they have limited ability to process images under naturalistic HDR illumination and require millions of neurons and power-hungry GPUs. It is an open question whether a recurrent spiking neural network can minimize the number of neurons required to perform texture-based HDR image segmentation. To that end, we designed a biologically inspired proof-of-concept recurrent SNN that can perform such a task. The network is implemented using leaky integrate-and-fire neurons with current-based (CuBa) synapses. We use the Nengo Loihi API to simulate the network, so it can be run on Intel’s Loihi neuromorphic hardware. The network uses a highly recurrent structure both to group image elements based on luminance and texture and to seamlessly combine these modalities to correctly segment ambiguous textures. Furthermore, we can continuously modulate how much luminance or texture contributes to the segmentation. We surmise that further development of this network will improve the resilience of optical flow computations in environments with complex naturalistic illumination.
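The following is a minimal Nengo sketch, not the segmentation network itself, showing the basic ingredients named above: LIF neurons, current-based lowpass synapses, and a recurrent connection, simulated through the Nengo Loihi backend (or its emulator when no hardware is attached). The ensemble size, weights, and two-dimensional stand-in input are placeholder assumptions.

    import nengo
    import nengo_loihi

    # Placeholder recurrent LIF model; sizes, weights, and the 2-D stand-in
    # input are assumptions, not the segmentation network described above.
    with nengo.Network() as model:
        stim = nengo.Node(lambda t: [0.5, -0.2])        # stand-in luminance/texture drive
        pool = nengo.Ensemble(n_neurons=200, dimensions=2,
                              neuron_type=nengo.LIF())  # leaky integrate-and-fire
        nengo.Connection(stim, pool, synapse=0.005)     # current-based (CuBa) lowpass synapse
        nengo.Connection(pool, pool, transform=0.6,
                         synapse=0.01)                  # recurrence for contextual grouping
        probe = nengo.Probe(pool, synapse=0.01)

    with nengo_loihi.Simulator(model) as sim:           # emulator runs if no Loihi board is present
        sim.run(1.0)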
Future Multi-Domain Operations (MDO) wargaming will rely on Artificial Intelligence/Machine Learning (AI/ML) algorithms to aid and accelerate complex Command and Control decision-making. This requires an interdisciplinary effort to develop new algorithms that can operate in dynamic environments with changing rules, uncertainty, individual biases, and changing cognitive states, and that can rapidly mitigate unexpected hostile capabilities and exploit friendly technological capabilities. Building on recent advancements in AI/ML algorithms, we believe that new algorithms for learning, reasoning under uncertainty, game theory with three or more players, and interpretable AI can be developed to aid complex MDO decision-making. To achieve these goals, we developed a new flexible MDO warfighter machine interface game, Battlespace, to investigate and understand how human decision-making principles can be leveraged by and synergized with AI. We conducted several experiments with human vs. random players operating in a fixed environment with fixed rules, where the overall goal of the human players was to collaborate either to capture the opponents’ flags or to eliminate all of their units. We then analyzed the evolution of the games and identified key features that characterized the human players’ strategies and their overall goal. We followed a Bayesian approach to model the human strategies and developed heuristic strategies for a simple AI agent. Preliminary analysis revealed that following the human agents’ strategy in the capture-the-flag games produced the greatest winning percentage and may be useful for gauging the value of intermediate game states when developing coordinated action planning for reinforcement learning algorithms.
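As an illustration of the Bayesian modeling step, the sketch below fits a Dirichlet-multinomial posterior over a handful of hypothetical strategy labels and samples from it as a heuristic policy; the labels and counts are invented stand-ins, not features or data from the actual Battlespace experiments.

    import numpy as np

    # Hypothetical Dirichlet-multinomial model of strategy preference; the
    # strategy labels and counts are invented stand-ins for features extracted
    # from recorded human games.
    strategies = ["advance_on_flag", "screen_and_flank", "attrit_units"]
    prior = np.ones(len(strategies))          # uniform Dirichlet prior
    counts = np.array([14, 6, 4])             # observed strategy choices across games

    posterior = prior + counts                # conjugate update
    p_strategy = posterior / posterior.sum()  # posterior mean over strategies

    def heuristic_policy(rng=np.random.default_rng()):
        # A simple agent samples a strategy in proportion to its posterior weight.
        return rng.choice(strategies, p=p_strategy)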
Current paradigms for collecting data to train fieldable computer vision (CV) algorithms are inefficient and expensive in terms of time and resources, and they have limited ability to adapt to changing target signatures. By leveraging opportunistic sensing of the operator’s natural behavior (e.g., firing a weapon, placing markers on a map), it is possible to triage the data for CV algorithms. When paired with an After Action Review (AAR) to confirm target presence and location, the resulting pipeline can efficiently update the CV for changing target signatures. Subjects (n = 10) participated in a simulated mission in which they were asked to mark video frames in which targets belonging to three classes first appeared and then to perform a brief AAR marking target locations in those frames. The CV (Detectron2) was initially trained on one target class, then retrained on instances of a novel class acquired by this pipeline. We report that retraining the CV on as few as 8 images resulted in good localization performance (AP50 85.6, AR 42.5) on unseen test images of this novel class. Ten minutes of retraining (600 iterations) was sufficient for good performance (AP50 58.3, AR 27.5). Data augmentation via random occlusions and apertures (‘bubbling’) boosted the training set 192-fold, improved the ratio of hits to false alarms, and increased resilience to naturalistic occlusion and small target sizes (AP 80-100 and AR 80). These results support our approach as an efficient method to adapt CV algorithms via partially labeled operational data.
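A minimal Detectron2 fine-tuning sketch along these lines is shown below; the dataset name, annotation file, image directory, and base model are placeholders (the study started from its own single-class model rather than COCO weights), and only the 600-iteration budget is taken from the results above.

    from detectron2 import model_zoo
    from detectron2.config import get_cfg
    from detectron2.engine import DefaultTrainer
    from detectron2.data.datasets import register_coco_instances

    # Placeholder dataset registration: the annotation file and image directory
    # are assumed COCO-format outputs of the frame-marking + AAR pipeline.
    register_coco_instances("aar_novel_class", {}, "aar_annotations.json", "aar_frames/")

    cfg = get_cfg()
    cfg.merge_from_file(model_zoo.get_config_file("COCO-Detection/faster_rcnn_R_50_FPN_3x.yaml"))
    cfg.MODEL.WEIGHTS = model_zoo.get_checkpoint_url("COCO-Detection/faster_rcnn_R_50_FPN_3x.yaml")
    cfg.DATASETS.TRAIN = ("aar_novel_class",)
    cfg.DATASETS.TEST = ()
    cfg.MODEL.ROI_HEADS.NUM_CLASSES = 1   # the single novel target class
    cfg.SOLVER.IMS_PER_BATCH = 2
    cfg.SOLVER.MAX_ITER = 600             # matches the ~10-minute retraining budget reported above

    trainer = DefaultTrainer(cfg)
    trainer.resume_or_load(resume=False)
    trainer.train()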
Brent Lance, Gabriella Larkin, Jonathan Touryan, Joe Rexwinkle, Steven Gutstein, Stephen Gordon, Osben Toulson, John Choi, Ali Mahdi, Chou Hung, Vernon Lawhern
The application of Artificial Intelligence and Machine Learning (AI/ML) technologies to Aided Target Recognition (AiTR) systems will significantly improve target acquisition and engagement effectiveness. However, the effectiveness of these AI/ML technologies depends on the quantity and quality of labeled training data, and very little labeled operational data is available. Creating this data is both time-consuming and expensive, and AI/ML technologies can be brittle and unable to adapt to changing environmental conditions or adversary tactics that are not represented in the training data. As a result, continuous operational data collection and labeling are required to adapt and refine these algorithms, but collecting and labeling operational data carries potentially catastrophic risks if it requires Soldier interaction that degrades critical task performance. Addressing this problem to achieve robust, effective AI/ML for AiTR requires a multi-faceted approach integrating a variety of techniques, such as generating synthetic data and using algorithms that learn from sparse and incomplete data. In particular, we argue that it is critical to leverage opportunistic sensing: obtaining the operational data required to train and validate AI/ML algorithms from tasks the operator is already doing, without negatively affecting performance on those tasks or requiring any additional tasks to be performed. By leveraging the Soldier’s substantial skills, capabilities, and adaptability, it will be possible to develop effective and adaptive AI/ML technologies for AiTR on the future Multi-Domain Operations (MDO) battlefield.
KEYWORDS: Visualization, Augmented reality, Transparency, Visibility, High dynamic range imaging, Target recognition, Heads up displays, Projection systems, LCDs, Military display technology, Contrast sensitivity, Tracking and scene analysis
Commercial augmented reality (AR) and mixed reality (MR) displays are designed for indoor, near-field gaming tasks. Outdoor warfighter tasks, by contrast, have a different set of needs for optimal performance, e.g., for aided target recognition (AiTR). The information display needs to be visible across a wide luminance range, from mesopic to photopic (0.001 to 100,000 cd/m2), including max-to-min luminance ratios exceeding 10,000-to-1 within a single scene (high dynamic range luminance, HDR). The information display also should not distract from other tasks, a difficult requirement because the saliency of the information display depends on its relationship to the HDR background texture. We suggest that a transparency-adjustable divisive display AR (ddAR) could meet these luminance and saliency needs, with potentially less complexity and processing power than current additive displays. We report measurements of acuity to predict how such a ddAR might affect low contrast visibility under gaze shifts, which often result in 10- or 100-fold changes in luminance. We developed an HDR display projection system with up to a 100,000-to-1 luminance contrast ratio and assessed how luminance dynamics affect acuity to semi-transparent letters against a uniform background. Immediately following a luminance flash, visual acuity is unaffected at 20% letter contrast and only weakly affected at 10% letter contrast (+0.10 and +0.12 LogMAR for flashes of 25× and 100× luminance). The resilience of low contrast letter acuity across luminance changes suggests that soft highlighting and ddAR could effectively convey information, improving AiTR under real-world luminance.
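A toy calculation illustrates why a divisive (transmissive) overlay pairs naturally with this resilience: if the display attenuates background light per pixel, the Weber contrast of a dark letter equals one minus its transmittance, regardless of absolute background luminance. The blending model below is an illustrative assumption, not a specification of the ddAR optics.

    # Toy model of a divisive (transmissive) overlay: the display multiplies the
    # background luminance by a per-pixel transmittance. This blending rule is an
    # illustrative assumption, not a specification of the ddAR optics.

    def weber_contrast(background_cd_m2, transmittance):
        letter = background_cd_m2 * transmittance                # luminance seen through the letter
        return (background_cd_m2 - letter) / background_cd_m2    # = 1 - transmittance

    for bg in (1.0, 100.0, 10000.0):                              # mesopic to photopic backgrounds, cd/m2
        print(bg, round(weber_contrast(bg, transmittance=0.9), 3))  # 10% letter contrast at any luminance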