In this work, we explore the possibility of using synthetically generated data for video-based gesture recognition with large pre-trained models. We consider whether these models have sufficiently robust and expressive representation spaces to enable “training-free” classification. Specifically, we utilize various state-of-the-art video encoders to extract features for use in k-nearest neighbors classification, where the training data points are derived from synthetic videos only. We compare these results with another training-free approach: zero-shot classification using text descriptions of each gesture. In our experiments with the RoCoG-v2 dataset, we find that using synthetic training videos yields significantly lower classification accuracy on real test videos compared to using a relatively small number of real training videos. We also observe that video backbones that were fine-tuned on classification tasks serve as superior feature extractors, and that the choice of fine-tuning data has a substantial impact on k-nearest neighbors performance. Lastly, we find that zero-shot text-based classification performs poorly on the gesture recognition task, as gestures are not easily described through natural language.
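The training-free pipeline described above can be pictured as a frozen video encoder feeding a k-nearest neighbors classifier fit on synthetic features only. The sketch below is a minimal illustration under assumptions: the specific backbone (a torchvision R3D-18), the value of k, the distance metric, and the stand-in random tensors are not the exact models or data used in the paper; in practice the clips and labels would come from a dataset such as RoCoG-v2.

```python
import torch
import numpy as np
from torchvision.models.video import r3d_18, R3D_18_Weights
from sklearn.neighbors import KNeighborsClassifier

# Pretrained video backbone used as a frozen feature extractor; the classifier
# head is replaced with an identity so the model outputs pooled features.
backbone = r3d_18(weights=R3D_18_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()
backbone.eval()

@torch.no_grad()
def extract_features(clips: torch.Tensor) -> np.ndarray:
    """Map a batch of clips (B, C, T, H, W) to L2-normalized feature vectors."""
    feats = backbone(clips)
    return torch.nn.functional.normalize(feats, dim=-1).cpu().numpy()

# Stand-in tensors for illustration only; real synthetic/real clips and labels
# (e.g., from RoCoG-v2) would be loaded and preprocessed here instead.
synthetic_clips = torch.randn(8, 3, 16, 112, 112)
synthetic_labels = np.random.randint(0, 4, size=8)
real_clips = torch.randn(4, 3, 16, 112, 112)
real_labels = np.random.randint(0, 4, size=4)

# "Training-free" classification: fit k-NN on synthetic features only,
# then evaluate on features extracted from real test videos.
knn = KNeighborsClassifier(n_neighbors=3, metric="cosine")
knn.fit(extract_features(synthetic_clips), synthetic_labels)
print("accuracy on real clips:", knn.score(extract_features(real_clips), real_labels))
```

Because no model weights are updated, swapping in a different video backbone only changes the `extract_features` step, which is what makes this setup convenient for comparing pre-trained encoders.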
Effective communication and control of a team of humans and robots is critical for a number of DoD operations and scenarios. In an ideal case, humans would communicate with their robot teammates using nonverbal cues (i.e., gestures) that work reliably in a variety of austere environments and from different vantage points. A major challenge is that traditional gesture recognition algorithms using deep learning methods require large amounts of data to achieve robust performance across a variety of conditions. Our approach focuses on reducing the need for “hard-to-acquire” real data by using synthetically generated gestures in combination with synthetic-to-real domain adaptation techniques. We also apply these algorithms to improve the robustness and accuracy of gesture recognition under shifts in viewpoint (i.e., air to ground). Our approach leverages the soon-to-be-released dataset called Robot Control Gestures (RoCoG-v2), consisting of corresponding real and synthetic videos from ground and aerial viewpoints. We first demonstrate real-time performance of the algorithm running on low-SWAP, edge hardware. Next, we demonstrate the ability to accurately classify gestures from different viewpoints with varying backgrounds representative of DoD environments. Finally, we show the ability to use the inferred gestures to control a team of Boston Dynamics Spot robots. This is accomplished by using inferred gestures to control the formation of the robot team as well as to coordinate the robots' behavior. Our expectation is that the domain adaptation techniques will significantly reduce the need for real-world data and improve gesture recognition robustness and accuracy using synthetic data.
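The last step of the demonstration, turning inferred gestures into team-level behavior, amounts to a dispatch from recognized gesture labels to formation and coordination commands. The sketch below is a hypothetical illustration only: the gesture names, command strings, confidence threshold, and the `send_command` stub are assumptions, not the actual RoCoG-v2 label set or the Boston Dynamics Spot control interface.

```python
from typing import Callable, Dict, List

def send_command(robot_id: str, command: str) -> None:
    """Stub for whatever transport (e.g., a robot SDK) actually issues commands."""
    print(f"{robot_id} <- {command}")

def set_formation(robots: List[str], shape: str) -> None:
    # A formation change is broadcast to every robot in the team.
    for robot in robots:
        send_command(robot, f"formation:{shape}")

def halt_team(robots: List[str]) -> None:
    for robot in robots:
        send_command(robot, "halt")

# Dispatch table from a recognized gesture label to a team-level behavior.
GESTURE_ACTIONS: Dict[str, Callable[[List[str]], None]] = {
    "advance": lambda team: set_formation(team, "wedge"),
    "rally": lambda team: set_formation(team, "column"),
    "halt": halt_team,
}

def on_gesture(label: str, confidence: float, team: List[str],
               threshold: float = 0.8) -> None:
    """Act only on confident predictions; ignore low-confidence or unknown gestures."""
    action = GESTURE_ACTIONS.get(label)
    if action is not None and confidence >= threshold:
        action(team)

# Example: a confidently recognized "advance" gesture reshapes the team.
on_gesture("advance", 0.92, ["spot-1", "spot-2"])
```

Gating on a confidence threshold keeps spurious low-confidence detections from the edge-deployed recognizer from triggering formation changes.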