Paper
24 October 2017
Object tracking via Spatio-Temporal Context learning based on multi-feature fusion in stationary scene
Proceedings Volume 10462, AOPC 2017: Optical Sensing and Imaging Technology and Applications; 104620Z (2017) https://doi.org/10.1117/12.2283058
Event: Applied Optics and Photonics China (AOPC2017), 2017, Beijing, China
Abstract
A robust algorithm is proposed for tracking objects in stationary scenes under dynamic challenges, including illumination change, pose variation, and occlusion. To cope with these factors, Spatio-Temporal Context learning based on Multi-feature fusion (MSTC) is integrated within a fusion framework. Unlike the original Spatio-Temporal Context learning (STC) algorithm, which exploits only low-level features (i.e., image intensity and position) from the target and its surrounding regions, our approach combines high-level features such as the Histogram of Oriented Gradients (HOG) with low-level features, and performs tracker interaction and selection at the decision level to achieve robust tracking performance. Experimental results on benchmark datasets demonstrate that the proposed algorithm performs robustly and favorably against the original algorithm.
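The abstract gives no implementation details, but the decision-level selection it describes can be illustrated with a short sketch. The following Python snippet is an assumption-laden illustration, not the authors' code: the function names and the use of a peak-to-sidelobe ratio as the per-frame reliability criterion are hypothetical choices. It fuses the confidence maps produced by an intensity-based and a HOG-based STC tracker by keeping, in each frame, the response whose peak is most distinct.

import numpy as np

def peak_to_sidelobe_ratio(conf_map, excl=5):
    # Simple quality score for a tracker's confidence map: how sharply the
    # peak stands out from the surrounding response (sidelobe) region.
    r0, c0 = np.unravel_index(np.argmax(conf_map), conf_map.shape)
    peak = conf_map[r0, c0]
    mask = np.ones_like(conf_map, dtype=bool)
    mask[max(0, r0 - excl):r0 + excl + 1, max(0, c0 - excl):c0 + excl + 1] = False
    sidelobe = conf_map[mask]
    return (peak - sidelobe.mean()) / (sidelobe.std() + 1e-8)

def fuse_decisions(conf_intensity, conf_hog):
    # Decision-level fusion (hypothetical criterion): select, per frame, the
    # feature channel whose confidence map is judged more reliable, and take
    # the target position from that channel's peak.
    psr_i = peak_to_sidelobe_ratio(conf_intensity)
    psr_h = peak_to_sidelobe_ratio(conf_hog)
    use_intensity = psr_i >= psr_h
    chosen = conf_intensity if use_intensity else conf_hog
    row, col = np.unravel_index(np.argmax(chosen), chosen.shape)
    return (row, col), ("intensity" if use_intensity else "hog")

# Usage sketch with synthetic confidence maps: the intensity channel has a
# sharp peak, the HOG channel is flat, so the intensity estimate is kept.
rng = np.random.default_rng(0)
conf_i = rng.random((64, 64)); conf_i[30, 40] = 5.0
conf_h = rng.random((64, 64))
print(fuse_decisions(conf_i, conf_h))  # ((30, 40), 'intensity')

In an actual MSTC-style tracker, conf_intensity and conf_hog would be the STC confidence maps computed from the respective feature channels each frame; the sketch only shows the selection step performed at the decision level.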
© (2017) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
Yunfei Cheng and Wu Wang "Object tracking via Spatio-Temporal Context learning based on multi-feature fusion in stationary scene", Proc. SPIE 10462, AOPC 2017: Optical Sensing and Imaging Technology and Applications, 104620Z (24 October 2017); https://doi.org/10.1117/12.2283058