KEYWORDS: Data processing, Data integration, Data modeling, Video, 3D modeling, Convolution, Cameras, Performance modeling, Error analysis, Motion estimation
This paper presents a human pose estimation method for martial arts video analysis using a Semantic Graph Convolutional Network (SemGCN) instead of an ordinary convolutional neural network (CNN). The inputs for the model are videos from the Human3.6M dataset and from the Martial Arts, Dancing and Sports (MADS) dataset. A data unification process is described that adapts the MADS joints to the Human3.6M base setting. The performance of the model trained only on Human3.6M is compared to that of the model trained on both Human3.6M and MADS, with the latter yielding a lower mean per-joint position error (MPJPE). Finally, performance indicators such as the vertical position of the center of mass, balance, and stability are calculated for the MADS sequences in order to provide insights into martial arts execution.
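The evaluation metric named in the abstract, mean per-joint position error (MPJPE), is the average Euclidean distance between predicted and ground-truth joint positions. A minimal sketch of its standard definition (the function name and array layout are illustrative, not taken from the paper):

```python
import numpy as np

def mpjpe(pred, gt):
    """Mean per-joint position error.

    pred, gt: arrays of shape (..., num_joints, 3) holding 3D joint
    coordinates. Returns the mean Euclidean distance over all joints
    (and, if present, all frames).
    """
    # Per-joint Euclidean distance, then average over every axis.
    return np.linalg.norm(pred - gt, axis=-1).mean()
```

In practice a root-aligned variant is also common, where both skeletons are translated so a reference joint (e.g. the pelvis) coincides before the distances are averaged.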