KEYWORDS: Transparency, Cameras, Sensors, 3D modeling, Reconstruction algorithms, Optical engineering, 3D acquisition, 3D image processing, 3D image reconstruction, Image segmentation
Consumer-grade range cameras are widely used in three-dimensional reconstruction, but their limited resolution and stability degrade reconstruction quality, especially for transparent objects. We propose a method that reconstructs transparent objects while improving the overall reconstruction quality of an indoor scene with a single RGB-D sensor. Transparent regions are localized from zero-depth and wrong-depth measurements. The lost surface of the transparent object is recovered by modeling the statistics of zero depth, depth variance, and the residual error of the signed distance function (SDF) during depth-data fusion. The camera pose is first initialized by minimizing the depth-map error on the SDF under a k-color-frame constraint, and is then optimized with a penalized coefficient function that lowers the weight of voxels with higher SDF error. Experiments show that the method reliably localizes transparent objects and achieves a more robust camera pose against a complex background.
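The statistics described above (frequency of zero depth, depth variance between frames, and SDF residual) can be sketched as a per-pixel transparency test, together with the penalized weighting that down-weights high-error voxels. This is a minimal illustration, not the paper's implementation; the thresholds, the exponential penalty form, and the function names are assumptions.

```python
import numpy as np

def transparency_score(depth_stack, sdf_residual, zero_thresh=0.3, var_thresh=0.01):
    """Flag likely-transparent pixels from a stack of aligned depth frames.

    depth_stack : (F, H, W) depth in metres; 0 marks invalid (zero) depth.
    sdf_residual: (H, W) per-pixel SDF residual after fusion.
    Thresholds are illustrative assumptions, not values from the paper.
    """
    # Fraction of frames in which the pixel returned zero depth.
    zero_ratio = np.mean(depth_stack == 0, axis=0)
    # Depth oscillation between frames, computed over valid samples only.
    valid = np.where(depth_stack > 0, depth_stack, np.nan)
    with np.errstate(invalid="ignore"):
        var = np.nan_to_num(np.nanvar(valid, axis=0))
    # Transparent if depth is often missing, or unstable with a large SDF error.
    return (zero_ratio > zero_thresh) | (
        (var > var_thresh) & (sdf_residual > np.median(sdf_residual))
    )

def penalized_weight(sdf_error, k=10.0):
    """Lower the fusion weight of voxels with larger SDF error.

    A simple exponential penalty is assumed here as one possible
    penalized coefficient function.
    """
    return np.exp(-k * np.abs(sdf_error))
```

For example, a pixel that returns zero depth in every frame is flagged, while a pixel with stable valid depth is not; a voxel with zero SDF error keeps full weight 1.0, and the weight decays as the error grows.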
Transparency reconstruction remains a challenging problem in active 3D reconstruction because structured-light sensors capture transparent surfaces as invalid (zero) or wrong depth. This paper proposes a novel method to localize and reconstruct transparency in a domestic environment with real-time camera tracking. Based on the signed distance function (SDF), we estimate the camera pose by minimizing the residual error of multiple depth images in the voxel grid. We adopt asymmetric voting of invalid depth to carve out the transparency in the 3D domain. To handle the wrong depth caused by transparency, we build a local model that captures the depth oscillation of each voxel between frames. By fusing the depth data, we obtain the point cloud of the transparent object and simultaneously achieve a higher-quality reconstruction of the indoor scene. We evaluate the approach in a series of experiments with a hand-held sensor. The results show that it accurately localizes transparent objects, improves their 3D models, and is more robust against camera dithering and other noise.
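The asymmetric voting of invalid depth can be sketched as a per-voxel accumulator in which an invalid-depth observation votes for transparency more strongly than a valid-depth observation votes against it. This is a simplified illustration under assumed weights and threshold; the paper's actual voting scheme and values are not reproduced here.

```python
import numpy as np

def asymmetric_vote(obs, w_invalid=2.0, w_valid=-1.0, thresh=3.0):
    """Accumulate per-voxel transparency votes over frames.

    obs: (F, N) boolean, True where the voxel projected onto an
         invalid (zero-depth) pixel in frame f.
    Invalid observations carry a larger positive weight than the
    negative weight of valid ones (the asymmetry). All weights and
    the threshold are illustrative assumptions.
    """
    votes = np.where(obs, w_invalid, w_valid).sum(axis=0)
    return votes > thresh
```

With these assumed weights, a voxel seen as invalid in three frames accumulates 6.0 votes and is marked transparent, while one seen as invalid once and valid twice nets 0.0 and is not.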