To obtain high accuracy without redundant, oversized models, we propose a lightweight network architecture that retains the high-precision advantage of the transformer and effectively combines it with a convolutional neural network. By greatly reducing the number of training parameters, this approach achieves high precision while remaining well suited for deployment on edge devices. A detail highlight module (DHM) is added to effectively fuse information from multiple scales, making the predicted depth more accurate and sharper. A dense geometric constraints module is introduced to recover accurate scale factors in autonomous driving without additional sensors. Experimental results show that, compared with Monodepth2, our model improves accuracy from 98.1% to 98.3% while reducing the model parameters by about 80%.
Self-supervised depth estimation has achieved remarkable results in sunny weather. In foggy scenes, however, its performance is limited by the low contrast and reduced visibility caused by fog. To address this problem, an end-to-end feature separation network for self-supervised depth estimation on fog images is proposed. We take paired clear and synthetic foggy images as input and use a feature extractor with an orthogonality loss to separate the image information into interference information (illumination, fog, etc.) and invariant information (structure, texture, etc.); the invariant information is used to estimate depth. Meanwhile, a similarity loss is introduced to constrain the fog-image depth using the clear-image depth as a pseudo-label, and an attention module and a reconstruction loss are added to refine the output, so that better depth maps can be obtained. Real-world fog images are then used for fine-tuning, which effectively reduces the domain gap between synthetic and real data. Experiments show that our approach produces strong results on both synthetic datasets and the Cityscapes dataset, demonstrating its superiority.
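As a rough illustration of the orthogonality constraint described above, a cosine-based penalty between the two feature branches could look like the following sketch. The per-sample formulation, function name, and epsilon handling are assumptions for illustration, not the paper's exact loss:

```python
import numpy as np

def orthogonality_loss(invariant, interference, eps=1e-8):
    """Penalize correlation between the invariant and interference branches.

    invariant, interference: (batch, dim) feature matrices. Each pair of
    feature vectors is normalized, and the loss is the mean squared cosine
    similarity, which is zero exactly when the branches are orthogonal.
    """
    inv = invariant / (np.linalg.norm(invariant, axis=1, keepdims=True) + eps)
    inter = interference / (np.linalg.norm(interference, axis=1, keepdims=True) + eps)
    cos = np.sum(inv * inter, axis=1)  # per-sample cosine similarity
    return float(np.mean(cos ** 2))
```

Driving this term to zero encourages the extractor to place fog and illumination cues in one branch and structure and texture cues in the other.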
A three-dimensional profile measurement method based on digital photoelastic fringe analysis is proposed in this paper. Photoelastic fringe patterns are generated according to the actual stress field of a disc under an appropriate load. These patterns are projected onto the reference plane and the objects by a projector and serve as the structured-light pattern sequence. A series of images, including normal images and deformed fringe images, is then captured. These images encode two significant photoelastic parameters, the isoclinic parameter and the isochromatic parameter, which can be evaluated by the phase-shifting method. Phase differences can therefore be calculated from the photoelastic isochromatic parameter after phase unwrapping. The phase differences carry the depth information, from which a virtual 3D profile matching the real objects can be reconstructed. Experiments demonstrate that this method is robust and suitable for measuring objects of both regular and general shape.
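A standard instance of the phase-shifting evaluation mentioned above is the four-step algorithm; the 90° shift sequence and function name below are illustrative assumptions, not necessarily the paper's exact scheme:

```python
import numpy as np

def wrapped_phase(i0, i1, i2, i3):
    """Four-step phase shifting with images at 0, 90, 180, 270 deg shifts.

    For I_k = A + B*cos(phi + k*pi/2):
        I3 - I1 = 2B*sin(phi),  I0 - I2 = 2B*cos(phi)
    so the wrapped phase in (-pi, pi] follows from a quadrant-aware arctangent.
    """
    return np.arctan2(i3 - i1, i0 - i2)
```

The wrapped phase maps of the reference plane and the deformed fringes are unwrapped and subtracted; the resulting phase differences carry the depth.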
Hyper-numerical-aperture (hyper-NA) imaging and polarized illumination are key resolution enhancement technologies in lithography. When the numerical aperture reaches 0.85 and above, especially in immersion lithography, polarization effects must be taken into consideration, and the performance of the projection lens needs to be characterized by rigorous polarization aberration. A vector polarization imaging system suitable for hyper-NA conditions is established, and the imaging distortions introduced by polarization aberration are analyzed. Methods based on orientation Zernike polynomials and on Pauli–Zernike polynomials are adopted to parameterize the polarization aberration represented by the Jones pupil. Critical dimension error, placement error, and normalized image log slope (NILS) are introduced as metrics to evaluate imaging distortion. The proposed method and the conclusions of the analysis provide meaningful guidance for the projection lens design of lithographic tools.
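The Pauli parameterization rests on the fact that any 2×2 Jones matrix decomposes uniquely over the four Pauli matrices; in a Pauli–Zernike scheme, each complex Pauli coefficient is then expanded across the pupil in Zernike polynomials. A minimal sketch of the pointwise decomposition (the helper name is hypothetical):

```python
import numpy as np

# Pauli basis sigma_0..sigma_3. Since trace(sigma_j @ sigma_k) = 2*delta_jk,
# any 2x2 complex Jones matrix J decomposes as J = sum_k a_k * sigma_k
# with complex coefficients a_k = trace(J @ sigma_k) / 2.
SIGMA = [
    np.array([[1, 0], [0, 1]], dtype=complex),
    np.array([[1, 0], [0, -1]], dtype=complex),
    np.array([[0, 1], [1, 0]], dtype=complex),
    np.array([[0, -1j], [1j, 0]], dtype=complex),
]

def pauli_coefficients(jones):
    """Return the four complex Pauli coefficients of a 2x2 Jones matrix."""
    return np.array([np.trace(jones @ s) / 2 for s in SIGMA])
```

Evaluating this at every pupil point and fitting each coefficient map with Zernike polynomials yields the Pauli–Zernike description of the Jones pupil.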
For lithographic tools, the forward imaging model is evaluated many times in the inverse optimization loop of optical proximity correction (OPC). Fast and accurate imaging simulation is therefore highly desirable as one of the most critical components of forward modeling. We investigate the physical properties of optical imaging in lithography and introduce the method of separation of variables from mathematical physics as the fundamental tool for handling a wide range of process variables. We propose a rigorous, first-principles methodology to speed up image simulation: the imaging formula is rearranged into two parts, one that depends only on the process variables and one that is independent of them. Simulations over a variety of process variables confirm that the proposed method yields images with an accuracy on the order of 10⁻³ together with a substantial speedup. The proposed method thus provides a novel theory and a practical means for OPC and other resolution enhancement technologies (RETs) in optical lithography.
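The payoff of separating the variables is that the variable-independent part can be precomputed once, offline. A minimal sketch, assuming the aerial image has already been expressed as a weighted sum of precomputed basis images (the function and array names are hypothetical, not the paper's formulation):

```python
import numpy as np

def fast_image(basis_images, coefficients):
    """Combine precomputed, variable-independent basis images.

    basis_images: (K, H, W) array simulated once, offline.
    coefficients: (K,) weights depending only on the current process
    variables, assumed cheap to evaluate at run time.

    Re-simulating a new set of process variables then costs one small
    tensor contraction instead of a full imaging simulation.
    """
    return np.tensordot(coefficients, basis_images, axes=1)
```

Inside an OPC loop, only the coefficient vector changes per iteration, which is what makes the repeated forward-model evaluations cheap.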
The numerical aperture (NA) of immersion lithography has reached 1.35, in which case polarization effects must be taken into account, and the performance of the projection lens should be characterized by polarization aberration. We propose a polarization aberration measurement theory and method that uses a binary grating structure as the mask pattern and the intensity distribution as the measurement signal. Pauli–Zernike polynomials are adopted to characterize the polarization aberration, and a linear relationship between the intensity signal and the Pauli–Zernike coefficients is derived. Simulation results show that, using the proposed method, the polarization aberration can be reconstructed with a relative error on the order of 10⁻².
Knowledge of the lens aberrations of lithographic tools is important because they directly affect the intensity distribution in the image plane. Zernike polynomials are commonly used for the mathematical description of lens aberrations. Owing to their lower cost and easier implementation, image-based measurement techniques have been widely used. Lithographic tools are typically partially coherent systems described by a bilinear model, which entails time-consuming calculations and does not yield a simple, intuitive relationship between lens aberrations and the resulting images. Previous methods for retrieving lens aberrations in such partially coherent systems involve through-focus image measurements and time-consuming iterative algorithms. In this work, we propose an aberration measurement method for lithographic tools that requires measuring only two intensity-distribution images. Two linear formulations are derived in matrix form that directly relate the measured images to the unknown Zernike coefficients; consequently, an efficient non-iterative solution is obtained.
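Once the image-to-coefficient relationship is linear, retrieval reduces to a single least-squares solve. A minimal sketch under that assumption (the sensitivity-matrix setup and names are hypothetical, not the paper's exact formulation):

```python
import numpy as np

def retrieve_zernike(A, measured, reference):
    """Non-iterative least-squares solve of a linearized aberration model.

    Assumed model: flattened measured intensity ≈ reference + A @ z, where
    column j of the sensitivity matrix A is the (precomputed) image response
    to a unit Zernike coefficient z_j. One pseudo-inverse recovers z; no
    iterative optimization is needed.
    """
    z, *_ = np.linalg.lstsq(A, measured - reference, rcond=None)
    return z
```

The efficiency claim follows directly: the cost is one matrix factorization rather than repeated forward simulations.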