The evolution of robots from tools to teammates will require them to derive meaningful information about the world around them, translate knowledge and skill into effective planning and action based on stated goals, and communicate with human partners in a natural way. Recent advances in foundation models (large pre-trained models such as large language models and vision-language models) will help enable these capabilities. We describe how we are using open-vocabulary 3D scene graphs based on foundation models to add scene understanding and natural language interaction to our human-robot teaming research. Open-vocabulary scene graphs enable a robot to build and reason about a semantic map of its environment and to answer complex queries about it. We are exploring how semantic scene information can be shared with human teammates and can inform context-aware decision making and planning to improve task performance and increase autonomy. We highlight human-robot teaming scenarios that could benefit from enhanced scene understanding, involving robotic casualty evacuation and stealthy movement through an environment; describe our approach to enabling this understanding; and present preliminary results using a one-armed quadruped robot interacting with simplified environments. It is anticipated that the advanced perception and planning capabilities provided by foundation models will give robots the ability to better understand their environment, share that information with human teammates, and generate novel courses of action.
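As a concrete illustration of the scene-graph queries described above, the following minimal sketch assumes each object node carries a 3D position and a unit-norm semantic embedding from an off-the-shelf open-vocabulary encoder; the class names, fields, and query helper are illustrative rather than the system used in this work.

```python
# Minimal sketch of answering an open-vocabulary query against a 3D scene graph.
# Assumes node and query embeddings come from the same encoder (e.g., a CLIP-style
# model) and are unit-normalized; all names here are illustrative.
from dataclasses import dataclass, field
import numpy as np

@dataclass
class SceneNode:
    label: str                                 # open-vocabulary label, e.g. "stretcher"
    position: np.ndarray                       # 3D position in the map frame
    embedding: np.ndarray                      # unit-norm semantic embedding
    edges: list = field(default_factory=list)  # (relation, other SceneNode) pairs

def query_scene(nodes, query_embedding, top_k=3):
    """Rank scene nodes by cosine similarity to an embedded natural-language query."""
    scored = [(float(node.embedding @ query_embedding), node) for node in nodes]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return scored[:top_k]
```

A question such as "where could a casualty be carried out?" would be embedded with the same encoder, matched against the node embeddings, and then refined by traversing the spatial edges of the graph.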
Effective communication and control of a team of humans and robots is critical for a number of DoD operations and scenarios. In an ideal case, humans would communicate with robot teammates using nonverbal cues (i.e., gestures) that work reliably in a variety of austere environments and from different vantage points. A major challenge is that traditional gesture recognition algorithms using deep learning methods require large amounts of data to achieve robust performance across a variety of conditions. Our approach focuses on reducing the need for “hard-to-acquire” real data by using synthetically generated gestures in combination with synthetic-to-real domain adaptation techniques. We also apply these domain adaptation techniques to improve the robustness and accuracy of gesture recognition across shifts in viewpoint (e.g., air to ground). Our approach leverages the soon-to-be-released Robot Control Gestures (RoCoG-v2) dataset, consisting of corresponding real and synthetic videos from ground and aerial viewpoints. We first demonstrate real-time performance of the algorithm running on low-SWaP edge hardware. Next, we demonstrate the ability to accurately classify gestures from different viewpoints with varying backgrounds representative of DoD environments. Finally, we show the ability to use the inferred gestures to control a team of Boston Dynamics Spot robots, using the inferred gestures both to control the formation of the robot team and to coordinate the robots’ behavior. Our expectation is that the domain adaptation techniques will significantly reduce the need for real-world data and improve gesture recognition robustness and accuracy using synthetic data.
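As a rough sketch of the final step, turning a recognized gesture into team behavior, the mapping below shows how a classified gesture might be dispatched as formation or behavior commands; the gesture labels, command vocabulary, confidence threshold, and robot interface are placeholders rather than the fielded system.

```python
# Illustrative mapping from recognized gestures to team-level commands.
# The gesture classes, command names, and send_command interface are placeholders;
# the real system runs a synthetic-to-real adapted video classifier on edge hardware.
GESTURE_TO_COMMAND = {
    "advance":         ("formation", "wedge"),
    "rally":           ("formation", "column"),
    "halt":            ("behavior", "stop"),
    "move_in_reverse": ("behavior", "retreat"),
}

def dispatch(gesture_label, confidence, robots, threshold=0.8):
    """Forward a command to every robot only if the classifier is confident."""
    if confidence < threshold or gesture_label not in GESTURE_TO_COMMAND:
        return  # ignore low-confidence or unknown gestures
    kind, value = GESTURE_TO_COMMAND[gesture_label]
    for robot in robots:
        robot.send_command(kind, value)  # hypothetical per-robot control interface
```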
Effective human-robot teaming requires human and robot teammates to share a common understanding of the goals of their collaboration. Ideally, a complex task can be broken into smaller components to be performed by team members with defined roles, and the plan of action and assignment of roles can be changed on the fly to accommodate unanticipated situations. In this paper, we describe research on adaptive human-robot teaming that uses a playbook approach to team behavior to bootstrap multi-agent collaboration. The goal is to leverage known good strategies for accomplishing tasks, such as those found in training and operating manuals, so that humans and robots can “be on the same page” and work from a common knowledge base. Simultaneous and sequential actions are specified using hierarchical text-based plans and executed as behavior trees using finite state machines. We describe a real-time implementation that supports sharing of task status updates through distributed message passing. Tasks related to human-robot teaming for exploration and reconnaissance are explored with teams comprising quadruped robots and humans wearing augmented reality headsets. It is anticipated that the shared task knowledge provided by multi-agent playbooks will enable humans and robots to track and predict teammate behavior and will promote team transparency, accountability, and trust.
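To make the execution model concrete, the sketch below runs a play's simultaneous and sequential steps as a small behavior tree; the node classes and the example play are illustrative and do not reproduce the paper's playbook format.

```python
# Minimal behavior-tree sketch for executing a play: Parallel groups simultaneous
# actions, Sequence orders sequential ones. All names here are illustrative.
from enum import Enum

class Status(Enum):
    SUCCESS = 1
    FAILURE = 2
    RUNNING = 3

class Action:
    """Leaf node: calls a function that reports SUCCESS, FAILURE, or RUNNING."""
    def __init__(self, name, fn):
        self.name, self.fn = name, fn
    def tick(self):
        return self.fn()

class Sequence:
    """Tick children in order; stop at the first child that is not SUCCESS."""
    def __init__(self, *children):
        self.children = children
    def tick(self):
        for child in self.children:
            status = child.tick()
            if status is not Status.SUCCESS:
                return status
        return Status.SUCCESS

class Parallel:
    """Tick every child; fail if any fails, keep running until all succeed."""
    def __init__(self, *children):
        self.children = children
    def tick(self):
        statuses = [child.tick() for child in self.children]
        if Status.FAILURE in statuses:
            return Status.FAILURE
        return Status.RUNNING if Status.RUNNING in statuses else Status.SUCCESS

# Example play: the robot scouts while the human covers, then both advance together.
play = Sequence(
    Parallel(Action("robot_scout", lambda: Status.SUCCESS),
             Action("human_cover", lambda: Status.SUCCESS)),
    Action("advance_together", lambda: Status.SUCCESS),
)
print(play.tick())  # Status.SUCCESS
```

In a distributed implementation, each tick result would also be published as a task status update so that every teammate can track where the team is in the play.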
Control of current tactical unmanned ground vehicles (UGVs) is typically accomplished through two alternative modes of operation: low-level manual control using joysticks and high-level planning-based autonomous control. Each mode has its own merits as well as inherent mission-critical disadvantages. Low-level joystick control is vulnerable to communication delay and degradation, and high-level navigation often depends on uninterrupted GPS signals and/or energy-emissive (non-stealth) range sensors such as LIDAR for localization and mapping. To address these problems, we have developed a mid-level control technique in which the operator semi-autonomously drives the robot relative to visible landmarks, such as closed contours and structured lines, that are commonly recognizable by both humans and machines. Our novel solution relies solely on passive optical and non-optical sensors and can be operated in GPS-denied, communication-degraded environments. To control the robot using these landmarks, we developed an interactive graphical user interface (GUI) that allows the operator to select landmarks in the robot’s view and direct the robot relative to one or more of the landmarks. The integrated UGV control system was evaluated based on its ability to robustly navigate through indoor environments. The system was successfully field tested with QinetiQ North America’s TALON UGV and Tactical Robot Controller (TRC), a ruggedized operator control unit (OCU). We found that the proposed system is indeed robust against communication delay and degradation, and provides the operator with steady and reliable control of the UGV in realistic tactical scenarios.
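For intuition, a minimal landmark-relative driving law is sketched below: the robot turns to hold an operator-selected landmark at a commanded bearing and closes to a standoff range. The gains, units, and interfaces are assumptions for illustration, not the TALON/TRC implementation.

```python
# Sketch of landmark-relative driving: steer so a tracked landmark sits at the
# commanded bearing and slow to a stop at the commanded standoff range.
# Gains and limits are illustrative; the landmark tracker is assumed to exist.
import math

def landmark_relative_cmd(landmark_bearing_rad, landmark_range_m,
                          desired_bearing_rad=0.0, standoff_m=2.0,
                          k_turn=1.0, k_speed=0.5, max_speed=1.0):
    """Return (linear_velocity, angular_velocity) relative to a tracked landmark."""
    heading_error = landmark_bearing_rad - desired_bearing_rad
    # wrap the error into [-pi, pi] so the robot always turns the short way
    heading_error = math.atan2(math.sin(heading_error), math.cos(heading_error))
    angular = k_turn * heading_error
    linear = max(0.0, min(max_speed, k_speed * (landmark_range_m - standoff_m)))
    return linear, angular
```

A directive such as "keep the doorway ahead and stop two meters short of it" then reduces to choosing the desired bearing and standoff for whichever landmark the operator selects in the GUI.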
The All-Terrain Biped (ATB) robot is an unmanned ground vehicle with arms, legs, and wheels designed to drive, crawl, walk, and manipulate objects for inspection and explosive ordnance disposal (EOD) tasks. This paper summarizes ongoing development of the ATB platform. Control technology for semi-autonomous legged mobility and dual-arm dexterity is described, as well as preliminary simulation and hardware test results. Performance goals include driving on flat terrain, crawling on steep terrain, walking on stairs, opening doors, and grasping objects. Anticipated benefits of the adaptive mobility and dexterity of the ATB platform include increased robot agility and autonomy for EOD operations, reduced operator workload, and reduced operator training and skill requirements.
The agility and adaptability of biological systems are worthwhile goals for next-generation unmanned ground vehicles. Management of the requisite number of degrees of freedom, however, remains a challenge, as does the ability of an operator to transfer behavioral intent from human to robot. This paper reviews American Android research funded by NASA, DARPA, and the U.S. Army that attempts to address these issues. Limb coordination technology, an iterative form of inverse kinematics, provides a fundamental ability to control balance and posture independently in highly redundant systems. Goal positions and orientations of distal points of the robot skeleton, such as the hands and feet of a humanoid robot, become variable constraints, as does center-of-gravity position. Behaviors utilize these goals to synthesize full-body motion. Biped walking, crawling and grasping are illustrated, and behavior parameterization, layering and portability are discussed. Robotic skill acquisition enables a show-and-tell approach to behavior modification. Declarative rules built verbally by an operator in the field define nominal task plans, and neural networks trained with verbal, manual and visual signals provide additional behavior shaping. Anticipated benefits of the resultant adaptive collaborative controller for unmanned ground vehicles include increased robot autonomy, reduced operator workload and reduced operator training and skill requirements.
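As a point of reference for the limb coordination idea, the sketch below shows a standard iterative inverse-kinematics update (damped least squares) that nudges joint angles until a distal point reaches its goal. The helper functions and the specific update rule are generic illustrations, not necessarily the paper's formulation; additional constraints such as center-of-gravity position would contribute further rows to the task error and Jacobian.

```python
# Generic iterative inverse kinematics with a damped-least-squares update.
# forward_kinematics(q) and jacobian(q) are assumed helpers for the robot model.
import numpy as np

def iterative_ik(q, goal, forward_kinematics, jacobian,
                 damping=0.05, step=0.5, tol=1e-3, max_iters=200):
    """Iterate joint angles q until the distal point reaches `goal` (or give up)."""
    for _ in range(max_iters):
        error = goal - forward_kinematics(q)        # task-space error
        if np.linalg.norm(error) < tol:
            break
        J = jacobian(q)                             # task-space Jacobian at q
        JT = J.T
        # dq = J^T (J J^T + lambda^2 I)^-1 * error   (damped least squares)
        dq = JT @ np.linalg.solve(J @ JT + damping**2 * np.eye(J.shape[0]), error)
        q = q + step * dq                           # take a small step toward the goal
    return q
```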
This paper describes an approach to robotic control patterned after models of human skill acquisition and the organization of the human motor control system. The intent of the approach is to develop autonomous robots capable of learning complex tasks in unstructured environments through rule-based inference and self-induced practice. Features of the human motor control system emulated include a hierarchical and modular organization, antagonistic actuation, and multi-joint motor synergies. Human skill acquisition is emulated using declarative and reflexive representations of knowledge, feedback and feedforward implementations of control, and attentional mechanisms. Rule-based systems acquire rough-cut task execution and supervise the training of neural networks during the learning process. After the neural networks become capable of controlling system operation, reinforcement learning is used to further refine system performance. The research described is interdisciplinary and addresses fundamental issues in learning and adaptive control, dexterous manipulation, redundancy management, knowledge-based system and neural network applications to control, and the computational modelling of cognitive and motor skill acquisition.
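To illustrate the staged learning pipeline described above (declarative rules providing rough-cut control, a network trained to imitate them, then reward-driven refinement), here is a toy, runnable sketch on a one-dimensional regulation task. The rule, the linear stand-in for a neural network, and the crude reward-based search standing in for reinforcement learning are all illustrative, not the paper's controller.

```python
# Toy sketch of the staged pipeline: (1) a declarative rule gives rough-cut actions,
# (2) a tiny linear "network" is trained to imitate the rule, (3) a crude
# reward-driven search (standing in for reinforcement learning) refines the gain.
import random

def rule_policy(error):
    # rough-cut declarative rule: push against the error, saturating at +/-1
    return max(-1.0, min(1.0, -0.5 * error))

class LinearPolicy:
    def __init__(self, gain=0.0):
        self.gain = gain
    def act(self, error):
        return self.gain * error
    def imitate(self, error, target, lr=0.01):
        # one supervised gradient step toward the rule base's action
        self.gain -= lr * (self.act(error) - target) * error

def rollout_cost(policy, start=5.0, steps=50):
    """Accumulated regulation error over one rollout (lower is better)."""
    error, cost = start, 0.0
    for _ in range(steps):
        error += policy.act(error)
        cost += abs(error)
    return cost

policy = LinearPolicy()
for _ in range(2000):                    # phase 1: imitate the rule base
    e = random.uniform(-5.0, 5.0)
    policy.imitate(e, rule_policy(e))
for _ in range(200):                     # phase 2: refine the gain from rollout cost
    candidate = LinearPolicy(policy.gain + random.gauss(0.0, 0.05))
    if rollout_cost(candidate) < rollout_cost(policy):
        policy.gain = candidate.gain
```

The same structure scales to the setting above by replacing the rule with declarative task rules built by an operator, the linear policy with a neural network, and the random search with a proper reinforcement learning update.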