The intersection of deep learning and programmable logic controllers (PLCs) enables innovative applications in automation. One exciting application area is gesture-based control of Automated Guided Vehicles (AGVs). AGVs are used across industries for material handling, logistics, and warehouse automation. Traditionally, these vehicles follow predefined routes or are driven by remote controls; with gesture-based control, operators can communicate with them more naturally and efficiently. The YOLO algorithm employs convolutional neural networks (CNNs) to detect objects in real time, and the incorporation of YOLO-Pose in YOLO versions 7 and 8 has made YOLO a leading tool for building gesture recognition models, offering significantly improved accuracy, faster inference, and reduced training times. This paper presents comparative results for 2D gesture recognition transfer learning models built on YOLO v5, v7, and v8, along with the steps taken to implement the model in a PLC-controlled AGV.
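To make the pipeline concrete, the sketch below shows how 2D keypoints from a YOLO-Pose model might be mapped to AGV motion commands. The keypoint indices follow the standard 17-point COCO layout that YOLO-Pose heads emit; the gesture vocabulary, thresholds, and command names are illustrative assumptions, not the paper's actual gesture set.

```python
# Hypothetical sketch: turning one person's 2D pose keypoints (COCO order,
# as produced by YOLO-Pose in YOLOv7/v8) into AGV commands. Gesture rules
# and command labels here are assumptions for illustration only.

# Indices into the 17-point COCO keypoint layout used by YOLO-Pose.
L_SHOULDER, R_SHOULDER = 5, 6
L_WRIST, R_WRIST = 9, 10

def classify_gesture(kpts):
    """Map a list of 17 (x, y) keypoints to an AGV command string.

    kpts uses image coordinates (y grows downward), e.g. the per-person
    keypoints.xy array returned by an Ultralytics YOLOv8-pose model.
    """
    left_up = kpts[L_WRIST][1] < kpts[L_SHOULDER][1]    # wrist above shoulder
    right_up = kpts[R_WRIST][1] < kpts[R_SHOULDER][1]
    if left_up and right_up:
        return "STOP"          # both arms raised -> emergency stop
    if right_up:
        return "TURN_RIGHT"    # right arm raised
    if left_up:
        return "TURN_LEFT"     # left arm raised
    return "FORWARD"           # neutral pose -> keep driving
```

In a deployment like the one the paper describes, the resulting command string would then be written to a PLC tag (for example over OPC UA or Modbus) to drive the AGV's motion logic.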