Vision systems have become promising feedback sensors for robot navigation because they can extract meaningful scene information. In this work, a multicamera system is proposed to estimate the position and orientation of an omnidirectional robot. Three calibrated devices (two smartphones and a webcam) are employed, and two badges of different colors are placed on the omnidirectional robot so that its position and orientation can be detected. The resulting pose estimate is used as feedback for the robot's trajectory controller. The results show that the proposed system is a useful alternative for the visual localization of ground mobile robots.
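The abstract does not give the pose formula, but a common way to recover planar pose from two colored markers is to take the midpoint of their centroids as the position and the angle of the rear-to-front vector as the heading. The function name and the front/rear roles below are assumptions for illustration, not the authors' stated method:

```python
import numpy as np

def pose_from_badges(front_badge_px, rear_badge_px):
    """Hypothetical sketch: planar pose from two badge centroids
    (image coordinates). Position is the midpoint of the two badges;
    heading is the angle of the rear-to-front vector."""
    front = np.asarray(front_badge_px, dtype=float)
    rear = np.asarray(rear_badge_px, dtype=float)
    position = (front + rear) / 2.0          # robot center, pixels
    dx, dy = front - rear
    heading = np.arctan2(dy, dx)             # radians, image plane
    return position, heading

# Front badge to the right of the rear badge: robot faces +x.
pos, theta = pose_from_badges((120.0, 80.0), (100.0, 80.0))
```

In a full pipeline the badge centroids would first be segmented by color in each calibrated camera view and mapped to a common ground-plane frame before this step.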
Nowadays, computer vision is an essential part of modern autonomous mobile robots. Fisheye cameras can capture large scenes with a single camera, but their strong radial distortion limits measurement accuracy. In this research, a vision system with multiple low-distortion cameras is proposed to capture large flat scenes from different viewpoints. The system applies a homography-based image-mosaicing method together with linear image interpolation. The obtained results show that the proposed system is useful for the visual navigation of ground mobile robots.
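The two core operations named in the abstract can be sketched in a few lines: mapping points through a 3x3 planar homography, and linearly interpolating (feathering) two overlapping strips so their seam is blended. This is a minimal illustration of the general technique, not the authors' implementation; the function names and the overlap-blending scheme are assumptions:

```python
import numpy as np

def apply_homography(H, pts):
    """Map Nx2 points through a 3x3 homography, normalizing by the
    homogeneous coordinate."""
    pts = np.asarray(pts, dtype=float)
    hom = np.hstack([pts, np.ones((pts.shape[0], 1))]) @ H.T
    return hom[:, :2] / hom[:, 2:3]

def blend_overlap(left, right):
    """Linear interpolation across an overlap region of equal shape:
    the left image's weight ramps from 1 to 0 across the width."""
    width = left.shape[1]
    alpha = np.linspace(1.0, 0.0, width)[None, :]
    return alpha * left + (1.0 - alpha) * right

# The identity homography leaves points unchanged.
mapped = apply_homography(np.eye(3), [[10.0, 20.0]])

# Blending an all-ones strip with an all-zeros strip yields a ramp.
seam = blend_overlap(np.ones((2, 3)), np.zeros((2, 3)))
```

In practice the homography for each camera would be estimated from correspondences on the flat ground plane (e.g., with a least-squares or RANSAC fit) before warping and blending the views into one mosaic.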