To lighten the load of dismounted infantry, research is being conducted into the use of “robotic mules” that can help carry equipment and supplies for small units as they conduct their operations. While the autonomy and perception capabilities of robotic systems are steadily improving, the way in which people interact with them is still fairly rudimentary. The operator control units (OCUs) for these robotic mules are typically hand-held gaming controllers for tele-operation, or worn ruggedized portable computers or tablets with point-and-click interfaces. These control devices add more weight to the operator, often occupy the operator's hands, and demand considerable heads-down time looking at the OCU screen to enter commands or to understand what the robot is doing. Furthermore, these interfaces often require specialized training to operate. This paper describes research aimed at reducing the physical, cognitive, and training burdens that robotic systems place on operators above and beyond their regular jobs as warfighters. We first present an analysis of relevant infantry communication to identify interaction requirements, and an analysis of technologies that might be used to support these interactions. We then describe a prototype heads-up, hands-free system for controlling robotic mules using a lightweight, worn interaction device that facilitates natural two-way interaction (including speech and gesture input) between the robotic mule and the user. We describe the challenges in building this system and some formative evaluations of the technology.
Military unmanned systems today are typically controlled by two methods: tele-operation or menu-based, search-and-click interfaces. Both approaches require the operator’s constant vigilance: tele-operation requires constant input to drive the vehicle inch by inch; a menu-based interface requires eyes on the screen in order to search through alternatives and select the right menu item. In both cases, operators spend most of their time and attention driving and minding the unmanned systems rather than being warfighters. With these approaches, the platform and interface become more of a burden than a benefit. The availability of inexpensive sensor systems in products such as Microsoft Kinect™ or Nintendo Wii™ has resulted in new ways of interacting with computing systems, but new sensors alone are not enough. Developing useful and usable human-system interfaces requires understanding users and interaction in context: not just what new sensors afford in terms of interaction, but how users want to interact with these systems, for what purpose, and how sensors might enable those interactions. Additionally, the system needs to reliably make sense of the user’s inputs in context, translate that interpretation into commands for the unmanned system, and give feedback to the user. In this paper, we describe an example natural interface for unmanned systems, called the Smart Interaction Device (SID), which enables natural two-way interaction with unmanned systems, including the use of speech, sketch, and gestures. We present a few example applications of SID to different types of unmanned systems and different kinds of interactions.
Unmanned aircraft systems (UASs) have seen a dramatic increase in military operations over the last two decades. The increased demand for their capabilities on the battlefield has resulted in quick fielding with user interfaces that are designed more with engineers in mind than UAS operators. UAS interfaces tend to support tele-operation with a joystick or complex, menu-driven interfaces that have a steep learning curve. These approaches to control require constant attention to manage a single UAS and require increased heads-down time in an interface to search for and click on the right menus to invoke commands. The time and attention required by these interfaces makes it difficult to increase a single operator’s span of control to encompass multiple UASs or the control of sensor systems. In this paper, we explore an alternative to the standard menu-based control interfaces. Our approach in this work was to first study how operators might want to task a UAS if they were not constrained by a typical menu interface. Based on this study, we developed a prototype multi-modal dialogue interface for more intuitive control of multiple unmanned aircraft and their sensor systems using speech and map-based gesture/sketch. The system we developed is a two-way interface that allows a user to draw on a map while speaking commands to the system, and which provides feedback to the user to ensure the user knows what the system is doing. When the system does not understand the user for some reason – for example, because speech recognition failed or because the user did not provide enough information – the system engages with the user in a dialogue to gather the information needed to perform the command. With the help of UAS operators, we conducted a user study to compare the performance of our prototype system against a representative menu-based control interface in terms of usability, time on task, and mission effectiveness.
This paper describes the study we conducted to gather data about how people might use a natural interface, the system itself, and the results of the user study. Keywords: UAS control, natural interfaces, multi-modal interaction.
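The clarification dialogue described in the abstract above can be thought of as slot-filling: a tasking command needs an action, a map region, and an aircraft, and the system prompts for whichever piece is missing. The following is a minimal illustrative sketch of that idea only; the slot names, prompts, and structure are assumptions for exposition and do not reflect the actual implementation reported in the paper.

```python
from dataclasses import dataclass
from typing import List, Optional, Tuple

@dataclass
class Command:
    """A partially specified multi-modal tasking command (illustrative)."""
    action: Optional[str] = None                       # from speech, e.g. "survey"
    region: Optional[List[Tuple[float, float]]] = None # polygon from map sketch
    asset: Optional[str] = None                        # which aircraft to task

# One clarification question per slot (hypothetical wording).
PROMPTS = {
    "action": "What should the aircraft do there?",
    "region": "Please sketch the area on the map.",
    "asset": "Which aircraft should perform this task?",
}

def missing_slots(cmd: Command) -> List[str]:
    """Return the names of slots still unfilled, in priority order."""
    return [s for s in ("action", "region", "asset") if getattr(cmd, s) is None]

def next_prompt(cmd: Command) -> Optional[str]:
    """Return the next clarification question, or None if the command is complete."""
    gaps = missing_slots(cmd)
    return PROMPTS[gaps[0]] if gaps else None
```

For example, a spoken "survey" command with no accompanying sketch would yield the region prompt, while a fully specified command yields no prompt and can be dispatched to the aircraft.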