Spectral confocal technology is an important three-dimensional measurement technique offering high accuracy and non-contact operation; however, a traditional spectral confocal system usually consists of prisms and several lenses, making it bulky and heavy, and the chromatic aberration of ordinary optical lenses makes it difficult to focus light well over a wide bandwidth. Metasurfaces are expected to miniaturize conventional optical elements owing to their superb ability to control the phase and amplitude of an incident wavefront at the subwavelength scale. In this paper, an efficient spectral confocal meta-lens (ESCM) working in the near-infrared spectrum (1300 nm–2000 nm) is proposed and numerically demonstrated. The ESCM focuses incident light at focal lengths ranging from 16.7 to 24.5 μm along a perpendicular off-axis focal plane, with the numerical aperture (NA) varying from 0.385 to 0.530. The meta-lens consists of a group of Si nanofins providing a polarization conversion efficiency larger than 50%, and the phase required for focusing is well reconstructed from the resonant phase, which is proportional to frequency, together with the wavelength-independent geometric (Pancharatnam–Berry, PB) phase. Such dispersive components can also be used in instruments requiring dispersive devices, such as spectrometers.
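As background to the phase decomposition mentioned above, the standard focusing phase profile of a flat lens at wavelength λ, and its split into a wavelength-independent geometric (PB) term set by the nanofin rotation angle and a frequency-proportional resonant term, can be written as follows. This is the textbook hyperbolic phase for an on-axis focus; the off-axis focal plane described in the abstract would add a linear phase gradient, and the exact profile used by the authors is not given here.

```latex
% Ideal focusing phase for focal length f(\lambda) at wavelength \lambda:
\varphi(x, y, \lambda) = -\frac{2\pi}{\lambda}\left(\sqrt{x^{2}+y^{2}+f(\lambda)^{2}} - f(\lambda)\right)
% Decomposition into geometric (PB) and resonant contributions:
\varphi(x, y, \lambda) = \underbrace{2\theta(x, y)}_{\text{PB phase (wavelength-independent)}}
  + \underbrace{\varphi_{\mathrm{res}}(x, y, \omega)}_{\text{resonant phase, } \propto \omega}
% where \theta(x,y) is the nanofin rotation angle and \omega = 2\pi c / \lambda.
```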
A novel method for light-field depth estimation using a convolutional neural network is proposed in this paper. Many approaches to light-field depth estimation have been proposed, but most face a trade-off between accuracy and runtime. To resolve this trade-off, we propose a method that produces more accurate depth estimates at a faster speed. First, the light-field data are augmented by the proposed method, which takes the light-field geometry into account. Because of the large volume of light-field data, the number of images must be reduced appropriately to improve processing speed while maintaining the confidence of the estimation. Next, the augmented light-field images are fed into our network, which extracts image features used to compute disparity values. Finally, after training, the network generates an accurate depth map from an input light-field image, from which the 3D structure of the real-world scene can be accurately reconstructed. Our method is verified on the HCI 4D Light Field Benchmark and on real-world light-field images captured with a Lytro light field camera.
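The abstract does not detail the network, but the principle it relies on — that the correct depth makes all sub-aperture views of a light field agree after disparity compensation — can be illustrated with a minimal classical sketch. This is not the authors' CNN; it is a plain cost-volume search over candidate disparities, with the view layout and candidate values chosen here purely for illustration.

```python
import numpy as np

def lf_depth_estimate(views, offsets, candidates):
    """Estimate per-pixel disparity from light-field sub-aperture views.

    views:      list of 2-D arrays (sub-aperture images)
    offsets:    (du, dv) angular offset of each view from the centre view
    candidates: candidate disparity values (pixels per unit angular offset)
    """
    h, w = views[0].shape
    costs = np.empty((len(candidates), h, w))
    for k, d in enumerate(candidates):
        # Shift every view back by d * offset; at the true disparity all
        # views line up, so the across-view variance (the cost) is minimal.
        stack = [np.roll(v, (int(round(d * du)), int(round(d * dv))),
                         axis=(0, 1))
                 for v, (du, dv) in zip(views, offsets)]
        costs[k] = np.var(np.stack(stack), axis=0)
    best = np.argmin(costs, axis=0)          # winner-take-all per pixel
    return np.asarray(candidates)[best]
```

A learned method replaces the hand-crafted variance cost with features extracted by the network, which is what allows it to stay accurate while using fewer input views.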
Owing to their low cost and easy deployment, monocular cameras have long attracted researchers' attention for depth estimation. As deep learning technology has performed well in this task, more and more training models for depth estimation have emerged. Most existing works that achieve very promising results are supervised learning methods, but they require corresponding ground-truth depth data for training, which complicates the training process. To overcome this limitation, an unsupervised learning framework is used for monocular depth estimation from videos, comprising a depth network and a pose network. In this paper, better results are achieved by optimizing the training models and improving the training loss. Training and evaluation use the standard KITTI dataset (Karlsruhe Institute of Technology and Toyota Technological Institute). Finally, the results are presented by comparing the different training models used in this paper.
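The training signal in such unsupervised frameworks is a photometric reconstruction loss: the depth and pose predictions are used to warp one video frame into another, and the warping error supervises both networks in place of ground-truth depth. A minimal sketch of that idea, simplified here to a purely horizontal warp driven by a per-pixel disparity (the full method also needs camera intrinsics and a 6-DoF pose, which are omitted):

```python
import numpy as np

def photometric_loss(target, source, disparity):
    """L1 photometric loss between the target frame and the source frame
    warped by a predicted per-pixel horizontal disparity.

    In unsupervised training this reconstruction error, not ground-truth
    depth, provides the supervision for the depth network.
    """
    h, w = target.shape
    xs = np.arange(w)[None, :] - disparity            # sample positions in source
    x0 = np.clip(np.floor(xs).astype(int), 0, w - 2)  # left neighbour (clamped)
    frac = np.clip(xs - x0, 0.0, 1.0)                 # sub-pixel fraction
    rows = np.arange(h)[:, None]
    # 1-D bilinear sampling along the row direction
    warped = (1 - frac) * source[rows, x0] + frac * source[rows, x0 + 1]
    return np.abs(target - warped).mean()
```

A correct disparity field reconstructs the target frame almost exactly, giving a near-zero loss, while a wrong one leaves a large residual; gradient descent on this residual is what trains the depth and pose networks jointly.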