The two standard reticle defect inspection methods are die-to-die and die-to-database. Die-to-die inspection compares images from two dies on the same reticle to identify defects, whereas die-to-database inspection compares images from the reticle against the design data (CAD). Last year, we built an SEM-based VSB writer defect classification system for die-to-die inspection that used state-of-the-art deep learning models to identify errors such as shape, position, and dose [1]. Using deep neural networks and DL-based SEM digital twins [2], we demonstrated better accuracy than the average human expert in classifying SEM-based defects. However, one limitation remained: the DL model could not distinguish chrome and glass regions from the input SEM alone. This information helps the classifier make better decisions for several typical error types and thereby achieve higher accuracy.

In the current paper, we improve the accuracy of the existing classifier by enhancing the underlying deep learning model and supplementing it with recognition of chrome and glass (unexposed and exposed) regions. We make this possible with another DL-based digital twin, SEM2CAD, which automatically identifies exposed and unexposed areas from the SEM image and augments manual input from the expert. We feed this new information into the SEM classifier, which currently takes a reference and an error SEM image, to produce more accurate results. In addition, we built an SEM-based defect classification system for die-to-database inspection that categorizes various types of VSB mask writer defects and requires defect SEM images together with the reference CAD. Using several deep neural network models and digital twins, we present a production-grade system for SEM-based classification of VSB writer defects that works for both die-to-die and die-to-database inspection methods.
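As a concrete illustration of the enhanced classifier input described above, the reference SEM, error SEM, and the SEM2CAD-derived chrome/glass segmentation can be stacked into a single multi-channel tensor before being passed to the network. This is a minimal sketch under our own assumptions (grayscale SEM images as 2-D arrays, a binary exposed/unexposed mask, channel-first layout); the function and variable names are hypothetical and not the paper's actual pipeline.

```python
import numpy as np

def build_classifier_input(ref_sem, err_sem, chrome_glass_mask):
    """Stack reference SEM, error SEM, and the chrome/glass segmentation
    into one multi-channel input array (channel-first).

    Hypothetical helper for illustration only; the real system's input
    format is not specified in this section.
    """
    assert ref_sem.shape == err_sem.shape == chrome_glass_mask.shape
    # Normalize 8-bit SEM intensities to [0, 1]; the mask is binary
    # (1 = exposed/glass, 0 = unexposed/chrome).
    ref = ref_sem.astype(np.float32) / 255.0
    err = err_sem.astype(np.float32) / 255.0
    mask = chrome_glass_mask.astype(np.float32)
    return np.stack([ref, err, mask], axis=0)  # shape: (3, H, W)

# Toy 4x4 example: uniform SEM intensities, right half marked as glass.
ref = np.full((4, 4), 128, dtype=np.uint8)
err = np.full((4, 4), 120, dtype=np.uint8)
mask = np.zeros((4, 4), dtype=np.uint8)
mask[:, 2:] = 1
x = build_classifier_input(ref, err, mask)
print(x.shape)  # (3, 4, 4)
```

A three-channel layout like this lets a standard convolutional backbone consume the segmentation alongside the two SEM images without architectural changes, which is one plausible way such auxiliary information could be injected.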