With the adoption of extreme ultraviolet (EUV) lithography for high-volume production at advanced nodes, stochastic variability and the resulting failures, both post-litho and post-etch, have drawn increasing attention. There is a strong need for accurate models of stochastic edge placement error (SEPE) with a direct link to the induced stochastic failure probability (FP). Additionally, to prevent stochastic failures from occurring on wafers, a holistic stochastic-aware computational lithography suite is needed, including stochastic-aware source mask optimization (SMO), stochastic-aware optical proximity correction (OPC), stochastic-aware lithography manufacturability check (LMC), and stochastic-aware process optimization and characterization. In this paper, we present a framework to model both SEPE and FP. This approach allows us to study the correlation between SEPE and FP systematically and paves the way to correlating the two directly. We also demonstrate that such a stochastic model can be used to optimize source and mask to significantly reduce SEPE, minimize FP, and improve the stochastic-aware process window, and we propose a flow to integrate the stochastic model into OPC to enhance the stochastic-aware process window and EUV manufacturability.
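The abstract does not specify the functional form linking SEPE to failure probability. A commonly assumed mapping treats the local edge placement error as approximately Gaussian and computes the probability that it exceeds a failure tolerance; the sketch below follows that assumption, and all parameter names and defaults are illustrative, not taken from the paper:

```python
import math

def failure_probability(sepe_sigma_nm, epe_tolerance_nm, n_edges=1):
    """Estimate stochastic failure probability from a Gaussian SEPE model.

    Assumption (not from the paper): edge placement errors are zero-mean
    Gaussian with standard deviation sepe_sigma_nm, and a feature fails
    when |EPE| exceeds the tolerance at any of n_edges independent sites.
    """
    # Two-sided tail probability P(|EPE| > tol) for N(0, sigma^2)
    z = epe_tolerance_nm / sepe_sigma_nm
    p_edge = math.erfc(z / math.sqrt(2.0))
    # Probability that at least one of n_edges independent sites fails
    return 1.0 - (1.0 - p_edge) ** n_edges
```

In this simple model, tightening SEPE sigma from 1.5 nm to 1.0 nm at a 3 nm tolerance reduces the per-edge tail probability by over an order of magnitude, which illustrates why reducing SEPE directly also minimizes FP.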
Driving down imaging-induced edge placement error (EPE) is a key enabler of semiconductor technology node scaling [1-3]. From the 5 nm node onward, stochastic edge placement error (SEPE) is predicted to become the largest contributor to total EPE. Many previous studies have established that LER, LCDU, and similar variability measurements require corrections for metrology artifacts and noise, as well as for mask variability transfer, to more accurately represent wafer-level stochastic variability. In this presentation, we discuss SEPE band behavior based on a methodology that allows local extraction of SEPE from the total measured local variability (LEPU) in a generalized way along 2D contours.
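The noise and mask-transfer corrections mentioned above are typically applied as a quadrature subtraction of variances. A hedged sketch of that correction follows; the actual per-contour extraction along 2D contours is more involved, and the names and the MEEF-scaling of the mask contribution are illustrative assumptions:

```python
import math

def corrected_sepe(total_lepu, sem_noise, mask_contrib, meef=1.0):
    """Extract wafer stochastic variability from measured local variability.

    Illustrative variance-subtraction sketch: assumes the measured LEPU,
    the SEM metrology noise, and the MEEF-scaled mask variability add in
    quadrature (all values are 1-sigma, in nm).
    """
    mask_on_wafer = meef * mask_contrib  # mask variability transferred to wafer
    var = total_lepu ** 2 - sem_noise ** 2 - mask_on_wafer ** 2
    if var < 0:
        raise ValueError("corrections exceed measured variability")
    return math.sqrt(var)
```

For example, a measured 5.0 nm local variability with 3.0 nm of SEM noise corresponds to a corrected 4.0 nm stochastic component under this model.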
With the adoption of extreme ultraviolet (EUV) lithography for high-volume production in advanced wafer manufacturing fabs, defects resulting from stochastic effects could become one of the major yield killers, and they have drawn increasing interest from the industry. In this paper, we present a flow, comprising stochastic edge placement error (SEPE) model calibration, pattern recognition, and hot spot ranking by defect probability, to detect potential hot spots in a chip design. The predictions show a good match with wafer inspection results. HMI eP5 massive metrology and contour analysis were used to extract wafer statistical edge placement distribution data.
In recent years, compact modeling of negative tone development (NTD) resists has been extensively investigated. Specific terms have been developed to address typical NTD effects, such as aerial-image-intensity-dependent resist shrinkage and development loading. The use of photo-decomposable quencher (PDQ) in NTD resists, however, brings extra challenges arising from more complicated and mixed resist effects. Due to the pronounced effects of photoacid and base diffusion, an NTD resist with PDQ may exhibit the opposite iso-dense bias trend compared with a normal NTD resist. In this paper, we present a detailed analysis of the physical effects in NTD resist with PDQ and describe the respective terms that address each effect. To decouple the different effects and evaluate the impact of individual terms, we identify groups of patterns that are most sensitive to a specific resist effect and investigate the corresponding term response. The results indicate that all the major resist effects, including PDQ-enhanced acid/base diffusion, NTD resist shrinkage, and NTD development loading, can be well captured by the relevant terms. Based on these results, a holistic approach to compact model calibration for NTD resist with PDQ can be established.
Classical SEM metrology (CD-SEM) uses a low data rate and extensive frame averaging to achieve high-quality SEM imaging for high-precision metrology. The drawbacks include prolonged data collection time and larger photoresist shrinkage due to excess electron dosage. This paper introduces a novel e-beam metrology system based on a high data rate, a large probe current, and an ultra-low-noise electron optics design. At the same level of metrology precision, this high-speed e-beam metrology system can significantly shorten data collection time and reduce electron dosage. In this work, the data collection speed exceeds 7,000 images per hour. Moreover, a novel large field of view (LFOV) capability at high resolution is enabled by an advanced electron deflection system design. The area covered by LFOV is >100x larger than that of classical SEM. Superior metrology precision throughout the whole image has been achieved, and high-quality metrology data can be extracted from the full field. This new capability will further improve metrology data collection speed to support the need for large volumes of metrology data in OPC model calibration for next-generation technology. The shrinking edge placement error (EPE) budget places more stringent requirements on OPC model accuracy, which is increasingly limited by metrology errors. In the current flow from metrology data collection and data processing to model calibration, CD-SEM throughput is a bottleneck that limits the number of metrology measurements available for OPC model calibration, impacting pattern coverage and model accuracy, especially for 2D pattern prediction. To address the trade-off between metrology sampling and model accuracy under cycle time constraints, this paper employs the high-speed e-beam metrology system and a new computational software solution to take full advantage of the large data volume and significantly reduce both systematic and random metrology errors.
The new computational software enables users to generate large quantities of highly accurate edge placement (EP) gauges and significantly improve design pattern coverage, with up to a 5x gain in model prediction accuracy on complex 2D patterns. Overall, this work demonstrated a >2x improvement in OPC model accuracy at a faster model turnaround time.
A heuristic optimization approach using a genetic algorithm has been developed to optimize sub-resolution assist feature (SRAF) placement rules for advanced technology nodes. This approach has demonstrated the capability to optimize a rule-based SRAF (RBSRAF) solution for both 1D and 2D designs to improve PVBand and avoid SRAF printing. Compared with the MBSRAF-based process-of-record (POR) solution, the optimized RBSRAF produces a comparable PVBand distribution on a full-chip test case containing both random SRAM and logic designs, with a significant 65% reduction in SRAF generation time and a 55% reduction in total OPC time.
In this paper, a genetic algorithm (GA) method is applied to both positive and negative sub-resolution assist feature (SRAF) insertion rules. Simulation results and wafer data demonstrate that the optimized SRAF rules help resolve SRAF printing issues while dramatically improving the process window of the working layer. To determine the best practice for SRAF placement, model-based SRAF (MBSRAF), rule-based SRAF (RBSRAF) with pixelated OPC simulation, and RBSRAF with the GA method are thoroughly compared. The results show a clear advantage for RBSRAF with the GA method.
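Neither abstract details the GA internals, but a standard rule-parameter GA with truncation selection, one-point crossover, and Gaussian mutation can be sketched as follows. The `fitness` callback stands in for the lithography simulation (PVBand, SRAF printability) that would score a rule set in practice; all names and defaults are illustrative:

```python
import random

def optimize_sraf_rules(fitness, bounds, pop_size=20, generations=30, seed=0):
    """Minimal genetic-algorithm sketch for tuning SRAF placement-rule
    parameters (e.g. assist-feature width and gap, in nm).

    fitness: callable scoring a candidate rule set (higher is better).
    bounds:  list of (low, high) ranges, one per rule parameter.
    """
    rng = random.Random(seed)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]            # elitist truncation selection
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, len(bounds)) if len(bounds) > 1 else 0
            child = a[:cut] + b[cut:]             # one-point crossover
            i = rng.randrange(len(bounds))        # Gaussian mutation of one gene
            lo, hi = bounds[i]
            child[i] = min(hi, max(lo, child[i] + rng.gauss(0.0, 0.1 * (hi - lo))))
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)
```

With a toy quadratic fitness peaked at a known optimum, the elitist loop steadily climbs toward that optimum over a few dozen generations; in the papers' setting the same loop would instead be driven by simulated process-window metrics.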
This paper extends the state of the art by demonstrating performance improvements in the Domain Decomposition Method (DDM) from a physical perturbation of the input mask geometry. Results from four test cases demonstrate that small, direct modifications to the input mask stack slope and edge location can yield model calibration and verification accuracy benefits of up to 30%. All final mask optimization results from this approach are shown to be valid within measurement accuracy of the dimensions expected from manufacture. We highlight the benefits of a more accurate description of the 3D EMF near field with crosstalk in model calibration, and its impact as a function of mask dimensions. The result is a useful technique to align DDM mask model accuracy with physical mask dimensions and scattering via model calibration.
With ever-shrinking critical dimensions, half-nanometer OPC errors are a primary focus for process improvement in computational lithography. Among the many error sources at the 2x and 1x nodes, 3D mask modeling has caught the attention of engineers and scientists as a method to reduce errors. While the benefits of 3D mask modeling are well known, its 30-40% runtime penalty must be weighed against the optical model accuracy improvements, and the node at which adoption becomes economically beneficial has to be determined by balancing these factors. In this paper, a benchmarking study was conducted on 20 nm cut mask, metal, and via layers with two different computational lithography approaches, compared against standard thin-mask approximation modeling. Besides basic RMS error metrics for model calibration and verification, through-pitch and through-size optical proximity behavior, through-focus model predictability, best focus prediction, and common DOF prediction are thoroughly evaluated. Runtime impact and OPC accuracy are also studied.
Among the valid gate patterning strategies for the 65 nm technology node, att-PSM offers advantages in cost and mask complexity over contenders such as complementary alt-PSM and chromeless phase lithography (CPL). A combination of Quasar illumination and sub-resolution assist features (SRAFs) provides a through-pitch solution with a common depth of focus (DOF) better than 0.25 um to support aggressive scaling in both logic and high-density SRAM. A global mask-source optimization scheme is adopted to explore the multi-dimensional space of process parameters and define the best overall solution, including scanner optics such as NA and illumination, and SRAF placement rules for 1-dimensional line and space patterns through the full pitch range. Gate patterning capabilities in terms of DOF, exposure latitude, mask error enhancement factor (MEEF), optical proximity correction (OPC), CD control, and aberration sensitivity are reported in this paper. Conflict resolution and placement optimization are key to successfully implementing SRAFs in the complex 2-dimensional layouts of random logic. Reasonable CD control can be achieved based on the characterization and simulation of CD variations across spatial and processing domains, from local to across-chip, across-wafer, wafer-to-wafer, and lot-to-lot. Certain layout restrictions are needed for high-performance devices, which require a much tighter gate CD distribution. Scanner optimization and enhancements such as DoseMapper are key enablers for such aggressive CD control. The benefits, challenges, and possible extensions of this approach are discussed in comparison with other techniques.
Gate CD control is crucial to transistor fabrication for advanced technology nodes at and beyond 65 nm. Across-chip linewidth variation (ACLV) has been identified as a major contributor to the overall CD budget for low-k1 lithography. In this paper, we present a detailed characterization of ACLV performance on the latest ASML scanner using Texas Instruments' proprietary scatterometer-based lens fingerprinting technique (ScatterLith). We are able to decompose a complex ACLV signature, including patterns placed in both vertical and horizontal directions, and trace the CD errors back to various scanner components such as lens aberrations, illumination source shape, dynamic image field, and scan synchronization. Lithography simulation plays an important role in bringing together the wafer and tool metrology for direct correlation and in providing a quantitative understanding of pattern sensitivity to lens and illuminator errors for a particular process setup. A new ACLV characterization methodology is enabled by combining ScatterLith wafer metrology, scanner metrology, and lithography simulation. Implementation of this methodology improves tool-to-tool matching and control of ACLV and V-H bias across multiple scanners to meet the tight yield and speed requirements of advanced chip manufacturing.
Lens spherical error is an important aberration used to characterize lens quality, and it contributes significantly to across-chip linewidth variation (ACLV). It also impacts tool-to-tool matching efforts, especially as optical lithography approaches sub-half-wavelength geometries. Traditionally, spherical error is measured using CD-SEM, with the known drawbacks of poor accuracy and long cycle time. At Texas Instruments, an in-house scatterometer-based lens fingerprinting technique (ScatterLith) performs this tedious job accurately and quickly. This paper presents across-slit spherical aberration signatures for ArF scanners collected using this method. The technique successfully correlates these signatures with Litel lens aberration data and Nikon OCD data for spherical aberration errors as small as 10 mλ. ACLV contributions from such small spherical errors can be quantified using this method. This provides the lithographer with an important tool to evaluate, qualify, and match advanced scanners to improve across-chip linewidth variation control.
As IC dimensions shrink following Moore's law, optical lithography is continually scaled to print ever-smaller features using resolution enhancement techniques such as phase shift masks, optical proximity correction (OPC), off-axis illumination, and sub-resolution assist features. OPC plays a key role in maximizing the overlapping process window through pitch in sub-wavelength optical lithography. As an important cost control measure, one general OPC model is applied to the full exposure field across multiple scanners. To implement this technique, optical proximity matching of linewidth across the field and across multiple tools is crucial, particularly for gate patterning. In addition, it is very important to obtain reliable critical dimension (CD) data sets with low noise and high accuracy from the metrology tool; otherwise, the real scanner fingerprint in terms of CD cannot be extracted with precision on the order of 1-2 nm. Scatterometry CD measurements have demonstrated excellent results in overcoming this problem, and scatterometry is emerging as one of the best metrology candidates for gate linewidth control at technology nodes beyond 130 nm.
This paper investigates the sources of error that consume the CD budget of optical proximity matching for line through pitch (LTP). The study focuses on the 130 nm technology node and uses experimental data and Prolith vector resist model simulations. Scatterometer CD measurements of LTP are used for the first time and effectively correlated to lens aberration and effective partial coherence (EPC) measurements, which were extracted by the Litel in-situ interferometer (ISI) and source metrology instrument (SMI). Implications of optical proximity matching for future technology nodes are also discussed. The results also demonstrate the efficacy of scatterometer line-through-pitch measurements for OPC characterization.
The ability to fingerprint the lenses of advanced lithography scanners accurately, quickly, and automatically has always been a dream for lithographers. It is truly necessary for understanding the error sources of ACLV, especially as optical lithography is pushed into the 130 nm regime and beyond. This dream has become a reality at Texas Instruments with the help of scatterometry. This paper describes the development and characterization of the scatterometer-based scanner lens testing technique (ScatterLith) and its application to 193 nm and 248 nm scanner lens fingerprinting. The entire procedure includes a full-field exposure through focus in a micro-stepping mode, scatterometer measurement of the focus matrix, image field analysis and mapping of lens curvature, astigmatism, and spherical aberration, line-through-pitch analysis, and across-chip linewidth variation (ACLV) analysis. ACLV has been directly correlated with image field deviation, lens aberrations, and illumination source errors. Examples illustrate its application to accurate focus monitoring, with enhanced capability for dynamic image field and lens signature mapping, for the latest ArF and KrF scanners used in a manufacturing environment at the 130 nm node and beyond. CD variation across a full scanner field is analyzed through a step-by-step image field correction procedure, and the ACLV contribution of each image field error can be quantified separately. The final across-slit CD signature is further analyzed against possible errors from illumination uniformity, illumination pupil fill, and higher-order projection lens aberrations. High accuracy and short cycle time make this new technique a very effective tool for in-line real-time monitoring and scanner qualification. Its fingerprinting capability also provides lithography engineers with a comprehensive understanding of scanner performance for CD control and tool matching. Its extendibility to 90 nm and beyond is particularly attractive for future development and manufacturing requirements.
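The mapping of lens curvature and astigmatism from a measured focus matrix amounts to fitting the best-focus map with low-order field terms. The actual ScatterLith analysis is proprietary and not described in the abstract; the sketch below uses a generic least-squares decomposition with an illustrative choice of terms:

```python
import numpy as np

def decompose_focus_map(x, y, focus):
    """Least-squares split of a measured best-focus map into low-order
    field terms (piston, tilt, field curvature, astigmatism-like saddle).

    Illustrative sketch only: the term set here is a standard image-field
    decomposition, not the specific model used by ScatterLith.
    """
    # Design matrix columns: piston, x/y tilt, curvature (x^2+y^2), saddle (x^2-y^2)
    A = np.column_stack([np.ones_like(x), x, y, x**2 + y**2, x**2 - y**2])
    coeffs, *_ = np.linalg.lstsq(A, focus, rcond=None)
    names = ["piston", "tilt_x", "tilt_y", "curvature", "astigmatism"]
    return dict(zip(names, coeffs))
```

Subtracting the fitted terms one by one from the measured map mirrors the step-by-step image field correction described above, leaving a residual that can be compared against illumination and higher-order aberration signatures.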
A detailed characterization of across-chip linewidth variation (ACLV) has been carried out on the latest Nikon scanners at Texas Instruments with a combination of advanced metrology techniques, including scatterometer-based image field and CD fingerprinting, lens aberration measurement using a Litel in-situ interferometer, and illumination source imaging with a pinhole camera. This paper describes the application of these techniques in our investigation of the root causes of pattern CD bias between vertical and horizontal features. The illumination source radiance distribution is sometimes found to have a significant impact on V-H bias and the final overall ACLV on production wafers. Examples demonstrate a comprehensive methodology used to quantitatively break down the overall CD errors and correlate them back to the basic optical and imaging components. Pupil-gram analysis shows that the ellipticity in partial coherence is typically within 1 +/- 1% for conventional illumination settings on the advanced Nikon scanners, while the uneven radiance distribution across the source plays a major role in V-H pattern CD bias. For scanners with low and uniform lens coma aberrations, the V-H bias, after removing the contribution from image field errors, is found to follow a linear relationship with the source radiance non-uniformity, also described in terms of ellipticity. Radiance ellipticity is shown to be a bigger concern for off-axis illuminators: tighter design rules patterned with off-axis illumination are more vulnerable to source radiance non-uniformity as well as lens aberrations. Illuminator-induced V-H bias across the slit is compared to the signature caused by lens aberrations, specifically uneven x,y-coma. Implications for exposure tool specification, control, and matching are further explored through experiments and lithography simulation for current 130 nm production and future technology nodes in development.
A quick, accurate, automatic, and robust method to evaluate the best focus and lens quality of advanced lithography tools is in high demand as optical lithography is pushed into the 130 nm regime and beyond. This paper presents how this tedious daily job has been performed at Texas Instruments in a more pleasant way thanks to scatterometry. The widely used critical dimension (CD) SEM measurement and the +/-10% golden rule have had great difficulty defining the optimal process conditions: CD alone cannot fully describe the resist profile. Lithographers must consider all resist profile parameters, such as sidewall angle, resist height, and linewidth, which can be quickly measured by a scatterometer. Across-exposure-field variation, another key process-sensitive parameter, has to be integrated into the decision-making loop of process optimization. A new parameter, the DCAT ratio, has been introduced and defined as a function of these process-sensitive parameters. It has a clear maximum with a zero first derivative (that is, a preferred parabolic bell shape) at the best process condition. The DCAT ratio has been used to find the true best focus offset for multiple scanners to guide tool-to-tool focus matching, to qualify scanners, to optimize the lithography process, and to determine the exposure latitude.
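Because the DCAT ratio is bell-shaped with a zero first derivative at the best condition, best focus can be located by fitting a parabola through the measured through-focus values and taking its vertex. A sketch of that step (the DCAT ratio definition itself is not given in the abstract, so the metric here is generic):

```python
import numpy as np

def best_focus_from_parabola(focus, metric):
    """Estimate best focus from a through-focus metric with a parabolic peak.

    Hypothetical sketch: fits metric = a*f^2 + b*f + c with a < 0 and returns
    the vertex -b/(2a), i.e. the zero-first-derivative point of the fitted
    bell shape. `metric` would be the DCAT ratio in the paper's workflow.
    """
    a, b, c = np.polyfit(focus, metric, 2)
    if a >= 0:
        raise ValueError("metric is not peaked over this focus range")
    return -b / (2.0 * a)
```

Comparing the returned vertex across tools gives the per-scanner best-focus offsets used for tool-to-tool focus matching.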