Stabilization of atmospheric-turbulence-distorted video requires not only removing turbulence-induced spatiotemporal distortions but also improving sharpness. We propose a method based on the complex steerable pyramid (CSP) to stabilize turbulence-distorted video. First, each frame is decomposed into multiscale, multidirectional phases and amplitudes using the CSP. Second, a lowpass filter suppresses local phase variations to stabilize the spatiotemporal distortions. Next, sharpness is improved by selecting and fusing the optimal amplitudes. Experimental results demonstrate that the proposed method stabilizes the video and improves sharpness simultaneously.
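The core stabilization step above (lowpass filtering of local phase variations over time) can be illustrated with a minimal sketch. This is not the authors' implementation: the CSP decomposition itself is omitted, and we simply apply a zero-phase temporal lowpass filter to a synthetic per-pixel phase signal, which is the role the filter plays in the method.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def stabilize_phases(phases, cutoff=0.1, order=3):
    """Temporally lowpass-filter local phase signals along the frame axis.

    phases: array of shape (T, H, W) -- per-pixel local phase over T frames
    cutoff: normalized cutoff frequency (1.0 = Nyquist)
    Returns the smoothed phase sequence of the same shape.
    """
    b, a = butter(order, cutoff)
    # filtfilt performs zero-phase filtering, so stabilized frames stay aligned
    return filtfilt(b, a, phases, axis=0)

# Synthetic demo: a constant true phase plus turbulence-like temporal jitter
rng = np.random.default_rng(0)
T, H, W = 64, 4, 4
true_phase = np.full((T, H, W), 0.5)
jitter = 0.3 * rng.standard_normal((T, H, W))
smoothed = stabilize_phases(true_phase + jitter)
```

In the full method this filtering would be applied to the phase of every CSP subband, after which the frames are reconstructed from the smoothed phases and the fused amplitudes.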
The DAISY descriptor has been widely used in dense stereo matching and scene reconstruction. However, DAISY is vulnerable to similar feature regions because its construction sequentially arranges the descriptions of the center and neighbor sample points without considering their relationships. To enhance the discriminative power of the local descriptor and to accelerate dense matching and scene reconstruction, we propose a low-dimensional local descriptor. The proposed descriptor is inspired by the local binary pattern (LBP). In image space, LBP describes local texture detail by computing the difference between the center and neighbor sample points. We carry this advantage into scale space to extend the DAISY descriptor, making it more effective for densely matching similar features in different regions. On this basis, a two-dimensional discrete cosine transform (2D-DCT) is used to reduce the dimensionality of the descriptor and thereby the computational cost of dense matching and scene reconstruction. Through a variety of experiments on benchmark laser-scanned ground-truth scenes as well as indoor and outdoor scenes, we show that the proposed descriptor yields more accurate depth maps and more complete reconstruction results than other common descriptors, while being much faster to compute than DAISY.
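The two building blocks named in the abstract can be sketched in isolation. This is a hedged illustration, not the paper's descriptor: `lbp_code` computes a standard 8-neighbor LBP code (center-vs-neighbor comparisons), and `compress_descriptor` applies a 2D-DCT and keeps the low-frequency coefficients as a reduced-dimension descriptor; the patch values and the `keep` parameter are arbitrary choices for the demo.

```python
import numpy as np
from scipy.fft import dctn

def lbp_code(patch):
    """8-neighbor local binary pattern code for the center pixel of a 3x3 patch."""
    c = patch[1, 1]
    # Neighbors in clockwise order starting from the top-left corner
    neighbors = [patch[0, 0], patch[0, 1], patch[0, 2], patch[1, 2],
                 patch[2, 2], patch[2, 1], patch[2, 0], patch[1, 0]]
    return sum(int(v >= c) << i for i, v in enumerate(neighbors))

def compress_descriptor(desc, keep=4):
    """Reduce a 2-D descriptor with a 2-D DCT, keeping the low-frequency
    top-left keep x keep block of coefficients."""
    coeffs = dctn(desc, norm='ortho')
    return coeffs[:keep, :keep].ravel()

patch = np.array([[1, 2, 3],
                  [4, 5, 6],
                  [7, 8, 9]], dtype=float)
code = lbp_code(patch)            # bits set where neighbor >= center (5)
desc = np.arange(64, dtype=float).reshape(8, 8)
compact = compress_descriptor(desc, keep=4)   # 64 values -> 16
```

Keeping only the low-frequency DCT block is what makes dense matching cheaper: descriptor distance computations scale linearly with descriptor length.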
In this paper, we propose a depth estimation method for multi-view image sequences. To enhance the accuracy of dense matching and reduce inaccurate matches caused by imprecise feature description, we select multiple matching points to build candidate matching sets. We then compute an optimal depth from a candidate matching set that satisfies multiple constraints (the epipolar constraint, similarity constraint, and depth consistency constraint). To further increase the accuracy of depth estimation, the depth consistency constraint of neighboring pixels is used to filter out inaccurate matches. On this basis, to obtain a more complete depth map, depth diffusion is performed using the depth consistency constraint of neighboring pixels. Through experiments on benchmark datasets for multiple-view stereo, we demonstrate the superiority of the proposed method over state-of-the-art methods in terms of accuracy.
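The candidate selection and depth-consistency filtering described above can be sketched as follows. This is a simplified stand-in, not the paper's algorithm: candidates are assumed to already satisfy the epipolar constraint, similarity is a given matching score, and the consistency test (reject hypotheses far from the median of neighboring depths, with a hypothetical relative tolerance `tol`) is one plausible instantiation of the neighbor depth-consistency constraint.

```python
import numpy as np

def select_depth(candidates, neighbor_depths, tol=0.1):
    """Pick the depth hypothesis that balances matching similarity and
    consistency with neighboring pixels' already-estimated depths.

    candidates: list of (depth, similarity) pairs from multiple matching points
    neighbor_depths: depths estimated at neighboring pixels
    Returns the chosen depth, or None if no candidate is consistent.
    """
    med = float(np.median(neighbor_depths))
    best, best_score = None, -np.inf
    for depth, sim in candidates:
        # Depth-consistency constraint: reject hypotheses far from neighbors
        if abs(depth - med) > tol * med:
            continue
        # Favor high similarity, penalize residual disagreement with neighbors
        score = sim - abs(depth - med)
        if score > best_score:
            best, best_score = depth, score
    return best

cands = [(2.0, 0.9), (5.0, 0.95), (2.1, 0.8)]   # (depth, similarity) hypotheses
neigh = [2.05, 1.95, 2.0]                        # neighboring pixels' depths
depth = select_depth(cands, neigh)               # 5.0 is rejected as inconsistent
```

Note how the highest-similarity candidate (5.0) is discarded: without the consistency constraint, such spurious matches from similar-looking features would corrupt the depth map.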