Paper
25 May 2016
Human assisted robotic exploration
B. T. Files, J. Canady, G. Warnell, E. Stump, W. D. Nothwang, A. R. Marathe
Abstract
In support of achieving better performance on autonomous mapping and exploration tasks by incorporating human input, we seek here to first characterize humans’ ability to recognize locations from limited visual information. Such a characterization is critical to the design of a human-in-the-loop system faced with deciding whether and when human input is useful. In this work, we develop a novel and practical place-recognition task that presents humans with video clips captured by a navigating ground robot. Using this task, we find experimentally that human performance does not seem to depend on factors such as clip length or familiarity with the scene and also that there is significant variability across subjects. Moreover, we find that humans significantly outperform a state-of-the-art computational solution to this problem, suggesting the utility of incorporating human input in autonomous mapping and exploration techniques.
© 2016 Society of Photo-Optical Instrumentation Engineers (SPIE).
B. T. Files, J. Canady, G. Warnell, E. Stump, W. D. Nothwang, and A. R. Marathe "Human assisted robotic exploration", Proc. SPIE 9836, Micro- and Nanotechnology Sensors, Systems, and Applications VIII, 98361Y (25 May 2016); https://doi.org/10.1117/12.2222887
KEYWORDS: Video, Robotics, Analytical research, Visualization, Sensors, Detection and tracking algorithms, Human vision and color perception