We present a sensor fusion framework for real-time tracking applications that combines inertial sensors with a camera.
To clarify how the information from the inertial sensors can be exploited, two fusion models are presented within an
extended Kalman filter framework: a gyroscope-only model and an accelerometer model. The gyroscope-only model uses
gyroscope measurements to support vision-based tracking without considering acceleration measurements. The accelerometer
model combines measurements from the gyroscopes, the accelerometers and the vision data to estimate the camera pose,
velocity, acceleration and sensor biases. Experiments on synthetic data and real image sequences show dramatic
improvements in tracking stability and in the robustness of the estimated motion parameters for the gyroscope-only model
when the accelerometer measurements are affected by drift.
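The abstract does not give implementation details; the following is a minimal sketch of how a gyroscope-driven prediction step of such an extended Kalman filter might look, assuming a quaternion orientation state and a constant gyroscope-bias model. The state layout, function names and tolerances are illustrative assumptions, not taken from the paper; in the full filter the vision measurements would enter the update step to correct the predicted pose and to observe the biases.

```python
import numpy as np

def quat_multiply(q, r):
    """Hamilton product of two quaternions given as [w, x, y, z]."""
    w0, x0, y0, z0 = q
    w1, x1, y1, z1 = r
    return np.array([
        w0*w1 - x0*x1 - y0*y1 - z0*z1,
        w0*x1 + x0*w1 + y0*z1 - z0*y1,
        w0*y1 - x0*z1 + y0*w1 + z0*x1,
        w0*z1 + x0*y1 - y0*x1 + z0*w1,
    ])

def predict_orientation(q, gyro_bias, omega_meas, dt):
    """Propagate the orientation quaternion with a bias-corrected gyroscope rate.

    q          -- current orientation quaternion [w, x, y, z]
    gyro_bias  -- estimated gyroscope bias (rad/s), part of the EKF state
    omega_meas -- raw gyroscope measurement (rad/s)
    dt         -- sampling interval (s)
    """
    omega = omega_meas - gyro_bias                 # bias-corrected angular rate
    angle = np.linalg.norm(omega) * dt
    if angle < 1e-12:
        return q                                   # negligible rotation this step
    axis = omega / np.linalg.norm(omega)
    dq = np.concatenate(([np.cos(angle / 2.0)], np.sin(angle / 2.0) * axis))
    q_new = quat_multiply(q, dq)
    return q_new / np.linalg.norm(q_new)           # renormalise to unit length
```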
Mixed reality technologies have been studied for many years and can now be applied to many aspects of daily life.
In general, appropriate display devices and registration methods are the key factors in the successful application of a
mixed reality system. In the past decade, various types of display systems have been developed at Beijing Institute of
Technology, and many of them have been successfully employed in different mixed reality applications. In this paper,
we give a brief introduction to the various display systems, and their corresponding tracking approaches, developed and
realized at Beijing Institute of Technology for mixed reality applications. These technologies include an interactive
projection system based on motion detection, a fixed-position viewing system, an ultra-light wide-angle head-mounted
display system and a volumetric 3D display system.
A real-time camera tracking algorithm using natural features is proposed for augmented reality applications. The system
relies on passive vision techniques to obtain the camera pose online. Only a limited number of calibrated key-frames and
a rough 3D model of part of the real environment are required. Accurate camera tracking is achieved by matching the
input image against the key-frame whose viewpoint is closest to the current one. The wide-baseline correspondence
problem is solved by rendering intermediate images, and information from previous frames is used for jitter correction.
The performance of the algorithm was tested on real image sequences. Experimental results demonstrate that our
registration algorithm is not only accurate and robust, but can also handle significant aspect changes.
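As a rough illustration of the key-frame matching step described above, the sketch below matches features of the current frame against a calibrated key-frame and recovers the pose from 2D-3D correspondences. The key-frame dictionary layout, the choice of ORB features and the parameter values are assumptions for the example; the paper's intermediate-image rendering and jitter correction are not shown.

```python
import cv2
import numpy as np

def estimate_pose_from_keyframe(frame_gray, keyframe, K):
    """Estimate the camera pose by matching the current frame against a key-frame.

    'keyframe' is assumed to carry precomputed ORB descriptors and the 3D points
    (from the rough scene model) associated with its keypoints.
    K is the 3x3 camera intrinsic matrix.
    """
    orb = cv2.ORB_create(nfeatures=1000)
    kp, desc = orb.detectAndCompute(frame_gray, None)

    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(keyframe["descriptors"], desc)
    if len(matches) < 6:
        return None                                # too few correspondences

    object_pts = np.float32([keyframe["points3d"][m.queryIdx] for m in matches])
    image_pts = np.float32([kp[m.trainIdx].pt for m in matches])

    # Robust pose estimation from the 2D-3D correspondences
    ok, rvec, tvec, inliers = cv2.solvePnPRansac(object_pts, image_pts, K, None)
    return (rvec, tvec) if ok else None
```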
The most promising aspect of augmented reality lies in its ability to integrate the virtual world of the computer with the real world of the user, so that users can interact with real-world subjects and objects directly. This paper presents an experimental augmented reality system with a video see-through head-mounted device that displays virtual objects as if they were lying on the table together with real objects. To overlay virtual objects on the real world at the correct position and orientation, accurate calibration and registration are most important. A vision-based method is used to estimate the external parameters of the CCD camera by tracking four known points of different colors. It achieves sufficient accuracy for non-critical applications such as gaming and annotation.
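A minimal sketch of the four-colored-point approach might look as follows: segment each marker by its color, take the centroid as the image point, and recover the camera extrinsics from the known 3D marker layout. The HSV ranges, marker coordinates and function names are hypothetical values chosen for the example, not the paper's calibration.

```python
import cv2
import numpy as np

# Approximate HSV ranges for the four colored markers (illustrative values;
# real ranges depend on lighting and the actual marker colors).
MARKER_RANGES = {
    "red":    ((0, 120, 70),   (10, 255, 255)),
    "green":  ((45, 80, 60),   (75, 255, 255)),
    "blue":   ((100, 120, 60), (130, 255, 255)),
    "yellow": ((20, 120, 70),  (35, 255, 255)),
}

# Assumed 3D positions of the markers on the table, in the world frame (metres).
MARKER_WORLD = {
    "red":    (0.00, 0.00, 0.0),
    "green":  (0.20, 0.00, 0.0),
    "blue":   (0.20, 0.20, 0.0),
    "yellow": (0.00, 0.20, 0.0),
}

def estimate_extrinsics(frame_bgr, K, dist):
    """Locate the four colored markers and recover the camera pose with solvePnP."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    image_pts, object_pts = [], []
    for name, (lo, hi) in MARKER_RANGES.items():
        mask = cv2.inRange(hsv, np.array(lo), np.array(hi))
        m = cv2.moments(mask)
        if m["m00"] < 1e-3:
            return None                            # this marker is not visible
        image_pts.append((m["m10"] / m["m00"], m["m01"] / m["m00"]))
        object_pts.append(MARKER_WORLD[name])
    ok, rvec, tvec = cv2.solvePnP(np.float32(object_pts), np.float32(image_pts), K, dist)
    return (rvec, tvec) if ok else None
```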
One of the key problems that influences the performance of an AR (augmented reality) system today is registration error. In current AR systems it is common for a virtual object to appear to swim about as the user moves, and it often does not appear to rest at the same location when viewed from different directions. To provide a stable tracking result for our AR application, a hybrid tracking scheme is developed that combines the robustness of magnetic tracking with the static accuracy of vision-based tracking. The principle of the vision-based tracking is presented and its accuracy in estimating the rotation angle is studied. A magnetic tracker composed of magnetoresistive sensors and accelerometers is proposed to compensate for the shortcomings of the vision-based tracking. The algorithm that calculates the position and orientation of the tracked object by combining the results of the magnetic and the vision-based tracking is analyzed. The setup and experimental results of the proposed AR system are given, and the results validate its feasibility.
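The abstract does not specify how the two estimates are combined; as one plausible illustration, a complementary-filter style blend lets the accurate but intermittently available vision estimate correct the continuous but drifting magnetic one. The function, its parameters and the yaw-only formulation are assumptions for the sketch, not the paper's algorithm.

```python
def fuse_orientation(yaw_magnetic, yaw_vision, vision_valid, alpha=0.98):
    """Blend a magnetic/accelerometer yaw estimate with a vision-based one.

    alpha is the weight kept on the magnetic estimate, so (1 - alpha) of the
    discrepancy is corrected toward the vision estimate at each step.
    Falls back to the magnetic estimate when the vision tracker has no result.
    Angles are in degrees.
    """
    if not vision_valid:
        return yaw_magnetic
    # Unwrap the difference to avoid jumps across the +/-180 degree boundary.
    diff = (yaw_vision - yaw_magnetic + 180.0) % 360.0 - 180.0
    return yaw_magnetic + (1.0 - alpha) * diff
```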
A basic concept of an advanced safety vehicle, which will emerge in the next century, is introduced in this paper, with emphasis on the active obstacle avoidance system. A prototype system is put forward in which a scanning laser radar is used to detect objects ahead and the surrounding situation is interpreted from the radar data; once a dangerous situation emerges, the driver is warned or the automatic control system intervenes according to the analysis of the surrounding conditions.
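To make the warning logic concrete, the sketch below is a minimal distance and closing-rate check over consecutive laser-radar scans, assuming scans arrive as arrays of ranges at a fixed rate. The thresholds and the decision rule are hypothetical and far simpler than the situation analysis the paper describes.

```python
WARN_TTC_S = 2.0       # warn if predicted time to collision falls below 2 s
MIN_RANGE_M = 5.0      # warn immediately inside this distance

def check_scan(prev_ranges, curr_ranges, dt):
    """Return True if any object ahead is dangerously close or closing fast."""
    for prev_r, curr_r in zip(prev_ranges, curr_ranges):
        if curr_r < MIN_RANGE_M:
            return True
        closing_speed = (prev_r - curr_r) / dt     # positive when approaching
        if closing_speed > 0 and curr_r / closing_speed < WARN_TTC_S:
            return True
    return False
```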