As litho hotspot detection runtime continues to increase at sub-10nm technology nodes due to growing design and process complexity, new methods and approaches are needed to improve runtime while maintaining a high accuracy rate. Machine-Learning Fast LFD (ML-FLFD) is a new flow that uses a specialized machine learning technique to provide fast and accurate litho hotspot detection. The methodology relies on input data used to train the machine learning model during the model preparation phase. Current ML-FLFD techniques collect hotspot (HS) and non-hotspot (NHS) data from the drawn layer to train the model. In this paper, we present a new technique in which the retarget data, rather than the drawn hotspot data, is used to train the machine learning model. Retarget data is one step closer to the actual printed contours, which gives better insight into the hotspots of the manufactured wires during the model training step. Using data closer to the printed contours improves both the accuracy and the extra rate, which in turn reduces the simulation area. In the following sections, we compare the new approach of using retarget data as the ML input against the current technique of using drawn data. The pros and cons of the two approaches are listed in detail, including experimental results for hotspot accuracy and litho simulation area.
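The following is a minimal Python sketch of how such a training set might be assembled, assuming HS/NHS clips have already been extracted from the retarget layer and labeled by litho simulation during model preparation; the rasterize helper, the random placeholder data, and the use of scikit-learn's RandomForestClassifier are illustrative assumptions, not the paper's actual ML-FLFD implementation.

```python
# Sketch: assembling ML-FLFD training data from the retarget layer instead of
# the drawn layer. rasterize() is a hypothetical helper standing in for the
# actual layout/EDA tooling described in the paper.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def rasterize(clip, grid=64):
    """Hypothetical: convert a layout clip to a grid x grid density image."""
    return np.asarray(clip, dtype=float).reshape(grid, grid)

def build_training_set(clips, labels, grid=64):
    """Flatten rasterized clips into a feature matrix for a classifier."""
    X = np.stack([rasterize(c, grid).ravel() for c in clips])
    y = np.asarray(labels)          # 1 = hotspot (HS), 0 = non-hotspot (NHS)
    return X, y

# retarget_clips would come from the retargeted layer (post-retargeting shapes),
# with HS/NHS labels obtained from litho simulation during model preparation.
rng = np.random.default_rng(0)
retarget_clips = [rng.random(64 * 64) for _ in range(200)]   # placeholder data
labels = rng.integers(0, 2, size=200)                        # placeholder labels

X, y = build_training_set(retarget_clips, labels)
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
print("training accuracy:", model.score(X, y))
```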
At the core of design-technology co-optimization (DTCO) processes is Design Space Exploration (DSE), in which different design schemes and patterns are systematically analyzed, and design rules and processes are co-optimized for optimal yield and performance before real products are designed. Synthetic layout generation offers a solution. With rules-based synthetic layout generation, engineers define rules that generate realistic layouts resembling those they will later see in real product designs. This paper shows two approaches to generating full coverage of the design space and providing contextual layout: one relies on Monte Carlo methods, and the other combines systematic and random methods applied to core patterns and their contextual layout. We also present a hierarchical classification system that catalogs layouts based on pattern commonality. The hierarchical classification is based on a novel algorithm that creates a genealogical tree of all the patterns in the design space.
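A minimal sketch of the genealogical-tree idea follows, assuming each pattern can be reduced to a coarse geometric signature (here a tuple of width/space runs); patterns sharing longer signature prefixes land deeper in the same branch. The signature scheme is a hypothetical stand-in for the paper's pattern-commonality measure.

```python
# Sketch: a genealogical tree of patterns built from shared signature prefixes.
from collections import defaultdict

def make_tree():
    return {"children": defaultdict(make_tree), "patterns": []}

def insert(tree, signature, pattern_id):
    """Walk the tree along the signature; deeper nodes = more shared geometry."""
    node = tree
    for token in signature:
        node = node["children"][token]
    node["patterns"].append(pattern_id)

def dump(node, depth=0):
    for token, child in node["children"].items():
        print("  " * depth + str(token), child["patterns"])
        dump(child, depth + 1)

tree = make_tree()
# Each signature is a coarse run-length description (widths/spaces in nm).
insert(tree, (40, 40, 40), "p1")     # shares the (40, 40) ancestor with p2
insert(tree, (40, 40, 60), "p2")
insert(tree, (50, 30), "p3")
dump(tree)
```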
Silicon weak pattern exploration is becoming increasingly attractive for yield improvement and design robustness, since silicon-proven weak patterns, or hotspots, directly reveal process weaknesses and should be prevented from occurring in chip designs. At the very beginning, only a few known hotspot patterns are available as seeds to initialize the weak pattern accumulation process. Machine learning can be used to expand the weak pattern database, and data volume is critical for machine learning. Fuzzy patterns are built and additional potential hotspot locations are found and sent to the YE team for confirmation, so that more silicon-proven data becomes available for model training; both good and bad patterns are valuable for the training data set. The trained machine learning model is then used to predict new hotspots. The predictions must be validated against silicon data in the first few iterations. Once a reliable machine learning model is ready for hotspot detection, designers can run hotspot prediction at the design stage. Several techniques used in training the model are discussed in detail in the paper.
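A compact sketch of this iterative loop, with hypothetical stubs standing in for the fuzzy pattern builder, the machine learning model, and the YE team's silicon confirmation; it only illustrates how the labeled pattern set grows across iterations, not the actual model or data.

```python
# Sketch of the iterative weak-pattern accumulation loop: seed hotspots ->
# fuzzy variants -> model prediction -> silicon confirmation -> retrain.
import random

def generate_fuzzy_variants(pattern, n=5):
    """Perturb a pattern's critical dimensions to create fuzzy candidates."""
    return [tuple(v + random.randint(-4, 4) for v in pattern) for _ in range(n)]

def confirm_on_silicon(candidates):
    """Stub: in practice the YE team confirms which candidates really fail."""
    return {c: (sum(c) % 3 == 0) for c in candidates}       # placeholder labels

def train_model(labeled):
    """Stub model: flags patterns close to any confirmed-bad pattern."""
    bad = [p for p, is_bad in labeled.items() if is_bad]
    def predict(p):
        return any(sum(abs(a - b) for a, b in zip(p, ref)) <= 12 for ref in bad)
    return predict

random.seed(0)
labeled = {(40, 40, 40): True, (60, 60, 60): False}         # initial seeds
for iteration in range(3):
    model = train_model(labeled)
    fuzzy = [v for p, bad in labeled.items() if bad
             for v in generate_fuzzy_variants(p)]
    predicted_hotspots = [p for p in fuzzy if model(p)]     # model-based screening
    labeled.update(confirm_on_silicon(predicted_hotspots))  # silicon validation
print("training set size after 3 iterations:", len(labeled))
```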
As typical litho hotspot detection runtime continues to increase at sub-10nm technology nodes due to increasing design and process complexity, many DFM techniques are exploring new methods to expedite their advanced verification processes. Improved simulation runtime can be obtained by reducing the amount of data sent to simulation. By inserting a pattern matching operation, a system can be designed that simulates only in the vicinity of topologies that somewhat resemble hotspots while ignoring all other data. Pattern matching improves overall runtime significantly; however, pattern matching techniques require a library of accumulated known litho hotspots to achieve an acceptable accuracy rate. In this paper, we present a fast and accurate litho hotspot detection methodology using specialized machine learning. We built a deep neural network trained on real hotspot candidates. Experimental results demonstrate the machine learning model's ability to predict hotspots, achieving greater than 90% detection accuracy and coverage, with a best achieved accuracy of 99.9%, while reducing overall runtime compared to full litho simulation.
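A minimal sketch of the two-stage idea on synthetic data, using a density-based loose match as the prefilter and scikit-learn's MLPClassifier as a stand-in for the paper's deep neural network; the data, thresholds, and metrics are placeholders, not the reported results.

```python
# Sketch of the two-stage flow: a pattern-matching prefilter selects clips that
# loosely resemble known hotspots, and only those go to the ML predictor.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(1)
clips = rng.random((500, 32 * 32))            # placeholder rasterized clips
truth = rng.integers(0, 2, size=500)          # placeholder ground-truth labels

def loose_match(clip, density_lo=0.45, density_hi=0.55):
    """Prefilter: keep clips whose pattern density resembles known hotspots."""
    return density_lo <= clip.mean() <= density_hi

candidates = np.array([i for i, c in enumerate(clips) if loose_match(c)])

# Train a small neural network (stand-in for the paper's DNN) on the candidates.
clf = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=300, random_state=0)
clf.fit(clips[candidates], truth[candidates])
pred = clf.predict(clips[candidates])

detected = np.sum((pred == 1) & (truth[candidates] == 1))
coverage = detected / max(truth.sum(), 1)     # fraction of all true hotspots found
print(f"clips sent to simulation: {len(candidates)} / {len(clips)}")
print(f"hotspot coverage on this synthetic data: {coverage:.2f}")
```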
KEYWORDS: Lithography, Visualization, Manufacturing, New and emerging technologies, Failure analysis, Multilayers, Logic, Reliability, Analytical research
As IC technology nodes advance, critical dimensions become smaller and smaller, which brings huge challenges to IC manufacturing. Lithography is one of the most important steps in the whole manufacturing process, and litho hotspots have become a major source of yield detractors. Tuning lithographic recipes to cover a wide range of litho hotspots is therefore essential for yield enhancement. During the early technology development stage, foundries have only limited customer layout data for recipe tuning, so collecting enough patterns is critical for process optimization. After enough patterns have been accumulated, treating them all in the same way is neither precise nor practical. Instead, scoring these patterns provides a priority and a reference for addressing different patterns more effectively; for example, the weakest group of patterns can be given the most restrictive specs to ensure process robustness. This paper presents a new method for creating realistic, design-like patterns on multiple layers based on design rules using the Layout Schema Generator (LSG) utility, together with a pattern scoring flow using Litho-Friendly Design (LFD) and Pattern Matching. Through LSG, many new, previously unknown patterns can be created for further exploration. Litho simulation through LFD and topological matching using Pattern Matching are then applied to the LSG output patterns. Finally, the lithographic severity, printability properties, and topological distribution of every pattern are collected. After a statistical analysis of the pattern data, every pattern is given a relative score representing its yield-detracting level. By sorting the output pattern score tables, weak patterns can be filtered out for further research and process tuning. This pattern generation and scoring flow is demonstrated on a 28nm logic technology node; a weak pattern library is created and scored to help improve recipe coverage of litho hotspots and enhance process reliability.
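A small sketch of how such a relative score might combine litho severity with topological distribution; the fields (worst EPE, match count), normalization limits, and weights are illustrative assumptions rather than the paper's calibrated scoring.

```python
# Sketch of the pattern scoring step: each LSG-generated pattern gets a relative
# score from its litho severity (worst EPE from LFD simulation) and how often it
# topologically matches across the layout set.
from dataclasses import dataclass

@dataclass
class PatternResult:
    name: str
    worst_epe_nm: float      # worst edge placement error from litho simulation
    match_count: int         # topological matches found by pattern matching

def score(p, epe_weight=0.7, freq_weight=0.3, epe_norm=10.0, freq_norm=100.0):
    """Higher score = stronger yield detractor (more severe and more common)."""
    severity = min(p.worst_epe_nm / epe_norm, 1.0)
    frequency = min(p.match_count / freq_norm, 1.0)
    return epe_weight * severity + freq_weight * frequency

results = [
    PatternResult("dense_tip2tip", worst_epe_nm=8.5, match_count=40),
    PatternResult("iso_jog",       worst_epe_nm=3.0, match_count=90),
    PatternResult("wide_corner",   worst_epe_nm=1.2, match_count=5),
]
for p in sorted(results, key=score, reverse=True):
    print(f"{p.name:>14}: score = {score(p):.2f}")
```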
As technology advances, IC designs are getting more sophisticated, thus it becomes more critical and challenging to fix printability issues in the design flow. Running lithography checks before tapeout is now mandatory for designers, which creates a need for more advanced and easy-to-use techniques for fixing hotspots found after lithographic simulation without creating a new design rule checking (DRC) violation or generating a new hotspot. This paper presents a new methodology for fixing hotspots on layouts while using the same engine currently used to detect the hotspots. The fix is achieved by applying minimum movement of edges causing the hotspot, with consideration of DRC constraints. The fix is internally simulated by the lithographic simulation engine to verify that the hotspot is eliminated and that no new hotspot is generated by the new edge locations. Hotspot fix checking is enhanced by adding DRC checks to the litho-friendly design (LFD) rule file to guarantee that any fix options that violate DRC checks are removed from the output hint file. This extra checking eliminates the need to re-run both DRC and LFD checks to ensure the change successfully fixed the hotspot, which saves time and simplifies the designer’s workflow. This methodology is demonstrated on industrial designs, where the fixing rate of single and dual layer hotspots is reported.
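A simplified sketch of the fix search, assuming a single edge, a single spacing constraint, and a stub in place of the lithographic simulation engine; it shows the minimum-movement idea of rejecting DRC-violating options and accepting the first move whose re-simulation clears the hotspot.

```python
# Sketch of the fixing loop: try increasingly large movements of the edge blamed
# for a hotspot, reject moves that violate a DRC spacing constraint, and keep
# the first (minimum) move whose re-simulation clears the hotspot.
MIN_SPACING_NM = 32          # assumed DRC spacing constraint
GRID_NM = 1                  # assumed manufacturing grid

def violates_drc(spacing_after_move):
    return spacing_after_move < MIN_SPACING_NM

def simulate_hotspot(edge_move_nm):
    """Stub for the litho engine: hotspot clears once the edge moved >= 3 nm."""
    return edge_move_nm < 3

def find_minimum_fix(current_spacing_nm, max_move_nm=10):
    for move in range(GRID_NM, max_move_nm + 1, GRID_NM):
        if violates_drc(current_spacing_nm - move):
            continue                      # this fix option is dropped from hints
        if not simulate_hotspot(move):    # internal re-simulation confirms fix
            return move
    return None                           # no legal fix within the search range

fix = find_minimum_fix(current_spacing_nm=40)
print("suggested edge movement (nm):", fix)
```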
As technology advances, the need for running lithographic (litho) checking for early detection of hotspots before tapeout has become essential. This process is important at all levels—from designing standard cells and small blocks to large intellectual property (IP) and full chip layouts. Litho simulation provides high accuracy for detecting printability issues due to problematic geometries, but it has the disadvantage of slow performance on large designs and blocks [1]. Foundries have found a good compromise solution for running litho simulation on full chips by filtering out potential candidate hotspot patterns using pattern matching (PM), and then performing simulation on the matched locations. The challenge has always been how to easily create a PM library of candidate patterns that provides both comprehensive coverage for litho problems and fast runtime performance. This paper presents a new strategy for generating candidate real design patterns through a random generation approach using a layout schema generator (LSG) utility. The output patterns from the LSG are simulated, and then classified by a scoring mechanism that categorizes patterns according to the severity of the hotspots, probability of their presence in the design, and the likelihood of the pattern causing a hotspot. The scoring output helps to filter out the yield problematic patterns that should be removed from any standard cell design, and also to define potential problematic patterns that must be simulated within a bigger context to decide whether or not they represent an actual hotspot. This flow is demonstrated on SMIC 14nm technology, creating a candidate hotspot pattern library that can be used in full chip simulation with very high coverage and robust performance.
Due to the limited availability of DRC-clean patterns during process and RET recipe development, OPC recipes are not tested with high pattern coverage. A wide variety of patterns can help OPC engineers detect patterns that are sensitive to lithographic effects, so random pattern generation is needed to secure a robust OPC recipe. However, simple random patterns that do not reflect real product layout styles cannot cover patterning hotspots at production level, and using them for OPC optimization is not effective; it is therefore important to generate random patterns similar to real product patterns. This paper presents a strategy for generating random patterns based on design architecture information and preventing hotspots in the early process development stage using a tool called the Layout Schema Generator (LSG). Using LSG, we generate standard-cell-like random patterns reflecting real design cell structure (fin pitch, gate pitch, and cell height). The output standard cells from LSG are then assessed for hotspot severity by assigning each a score based on its optical image parameters (NILS, MEEF, and %PV band), so that potential hotspots can be identified from their ranking. This flow is demonstrated on Samsung 7nm technology, optimizing the OPC recipe and avoiding problematic patterns early in process development.
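A small sketch of how cells might be ranked from NILS, MEEF, and %PV band; the normalization limits and weights are illustrative assumptions, not the calibrated scoring used in the paper.

```python
# Sketch of ranking LSG-generated standard cells by optical image parameters.
# Lower NILS, higher MEEF, and a larger %PV band all push the score up (worse).
def hotspot_score(nils, meef, pv_band_pct,
                  nils_floor=1.0, nils_ceiling=3.0,
                  meef_ceiling=6.0, pv_ceiling=15.0,
                  weights=(0.4, 0.3, 0.3)):
    nils_term = max(0.0, min(1.0, (nils_ceiling - nils) / (nils_ceiling - nils_floor)))
    meef_term = max(0.0, min(1.0, meef / meef_ceiling))
    pv_term = max(0.0, min(1.0, pv_band_pct / pv_ceiling))
    w_nils, w_meef, w_pv = weights
    return w_nils * nils_term + w_meef * meef_term + w_pv * pv_term

cells = {
    "cellA": dict(nils=1.4, meef=5.2, pv_band_pct=12.0),
    "cellB": dict(nils=2.6, meef=2.1, pv_band_pct=4.0),
}
ranking = sorted(cells, key=lambda c: hotspot_score(**cells[c]), reverse=True)
for name in ranking:
    print(name, round(hotspot_score(**cells[name]), 2))
```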
To resolve the causality dilemma of which comes first, accurate design rules or real designs, this paper presents a flow for exploring the layout design space to identify, early in development, problematic patterns that will negatively affect yield.
A new random layout generation method called the Layout Schema Generator (LSG) is reported in this paper; it generates realistic, design-like layouts without any design rule violations. Lithography simulation is then run on the generated layouts to discover potentially problematic patterns (hotspots). These hotspot patterns are further explored by randomly inducing feature and context variations in the identified hotspots through a flow called the Hotspot Variation Flow (HSV); a sketch of this variation step follows the abstract. Simulation is then performed on this expanded set of layout clips to identify further problematic patterns.
These patterns are then classified into design forbidden patterns that should be included in the design rule checker and legal patterns that need better handling in the RET recipes and processes.
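A minimal sketch of the HSV variation step, assuming a hotspot clip is represented as a list of rectangles in nanometers; the jitter ranges and minimum-width guard are illustrative, not the flow's actual variation rules.

```python
# Sketch of the Hotspot Variation (HSV) idea: take an identified hotspot clip,
# randomly perturb its feature dimensions and surrounding context, and emit an
# expanded set of clips for further simulation.
import random

def vary_clip(clip, n_variants=10, size_jitter=4, context_shift=8):
    """Return n_variants randomly perturbed copies of a hotspot clip."""
    variants = []
    for _ in range(n_variants):
        new_clip = []
        for (x, y, w, h) in clip:
            dx = random.randint(-context_shift, context_shift)
            dw = random.randint(-size_jitter, size_jitter)
            new_clip.append((x + dx, y, max(w + dw, 10), h))   # keep min width
        variants.append(new_clip)
    return variants

random.seed(0)
hotspot_clip = [(0, 0, 40, 200), (72, 0, 40, 200), (144, 0, 40, 200)]
expanded = vary_clip(hotspot_clip)
print("expanded clips for simulation:", len(expanded))
```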
In this paper, we introduce a fast and reasonably accurate methodology for determining patterning difficulty based on the fundamentals of optical image processing: the frequency content of design shapes is analyzed through a computational patterning transfer function to determine patterning difficulty. In addition, with the help of a Monte Carlo random pattern generator, we use this flow to identify a set of difficult patterns that can be used to evaluate a design's ease of manufacturability via a scoring methodology, as well as to help with the optimization phases of post-tapeout flows. This flow offers the combined merits of scoring-based criteria and a model-based approach for early designs. The value of this approach is that it provides designers with early prediction of potential problems even before rigorous model-based DFM kits are developed. Moreover, the flow establishes a bi-directional platform for interaction between the design and manufacturing communities based on geometrical patterns.
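A minimal sketch of the frequency-content idea, assuming clips are rasterized to binary images and that a simple radial cutoff stands in for the computational patterning transfer function; the score is the fraction of spectral energy that such a low-pass transfer would suppress.

```python
# Sketch: rasterize a clip, take its 2-D FFT, and measure how much spectral
# energy lies beyond a cutoff frequency that a low-pass "patterning transfer
# function" would suppress. Cutoff and rasterization are illustrative.
import numpy as np

def difficulty_score(clip_image, cutoff_fraction=0.25):
    """Fraction of spectral energy above the cutoff; higher = harder to pattern."""
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(clip_image))) ** 2
    n = clip_image.shape[0]
    fy, fx = np.meshgrid(np.arange(n) - n // 2, np.arange(n) - n // 2, indexing="ij")
    radius = np.sqrt(fx ** 2 + fy ** 2) / (n / 2)
    high = spectrum[radius > cutoff_fraction].sum()
    return float(high / spectrum.sum())

n = 64
x = np.arange(n)
dense = np.tile(((x // 2) % 2 == 0).astype(float), (n, 1))    # 2 px lines/spaces
coarse = np.tile(((x // 8) % 2 == 0).astype(float), (n, 1))   # 8 px lines/spaces
print("dense  :", round(difficulty_score(dense), 3))
print("coarse :", round(difficulty_score(coarse), 3))
```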
KEYWORDS: Defect detection, New and emerging technologies, Process modeling, Design for manufacturing, Silicon, Design for manufacturability, Particles, Visualization, Lithography
Multiple-Patterning Technology (MPT) remains the preferred choice over EUV for advanced technology nodes, starting at the 20nm node. On the way down to the 7nm and 5nm nodes, Self-Aligned Multiple Patterning (SAMP) appears to be one of the most effective multiple patterning techniques for achieving small pitches of printed lines on the wafer, yet its yield is in question. Predicting and enhancing yield in the early stages of technology development are among the main objectives of creating test macros on test masks. Conventional yield ramp techniques for a new technology node have relied on using designs from previous nodes as a starting point to identify patterns for Design of Experiment (DoE) creation, but these techniques are difficult to apply when introducing an MPT technique like SAMP that did not exist in previous nodes.
This paper presents a new strategy for generating test structures based on random placement of unit patterns that can be combined into larger, more meaningful patterns. Specifications governing the relationships between those unit patterns can be adjusted to generate layout clips that look like realistic SAMP designs; a sketch of this random placement follows the abstract. A via chain can be constructed to connect the random DoE of SAMP structures through a routing layer to external pads for electrical measurement. These clips are decomposed according to the decomposition rules of the technology into the appropriate mandrel and cut masks. The decomposed clips can be tested through simulations or electrically on silicon to discover hotspots.
The hotspots can be used in optimizing the fabrication process and models to fix them. They can also be used as learning patterns for DFM deck development. By expanding the size of the randomly generated test structures, more hotspots can be detected. This should provide a faster way to enhance the yield of a new technology node.
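The sketch below illustrates random unit-pattern placement on a SAMP track grid under a simple tip-to-tip spec; the track count, site grid, and gap rule are illustrative assumptions, not the actual DoE generation rules.

```python
# Sketch: place unit patterns (short segments on a SAMP track grid) at random,
# subject to a minimum tip-to-tip gap, so adjacent units can merge into longer,
# realistic-looking lines.
import random

TRACKS = 8                 # number of SAMP tracks in the clip
SITES_PER_TRACK = 20       # discrete unit-pattern sites along each track
MIN_TIP_TO_TIP_SITES = 2   # spec: minimum empty sites between segments

def generate_clip(fill_probability=0.4, seed=0):
    rng = random.Random(seed)
    clip = []
    for track in range(TRACKS):
        occupied = [False] * SITES_PER_TRACK
        for site in range(SITES_PER_TRACK):
            nearby = occupied[max(0, site - MIN_TIP_TO_TIP_SITES):site]
            merge = occupied[site - 1] if site > 0 else False
            # either extend the previous segment (merge) or respect the gap spec
            if rng.random() < fill_probability and (merge or not any(nearby)):
                occupied[site] = True
        clip.append(occupied)
    return clip

for row in generate_clip():
    print("".join("#" if s else "." for s in row))
```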
Achieving lithographic printability at advanced nodes (14nm and beyond) can impose significant restrictions on physical design, including large numbers of complex design rule checks (DRC) and compute-intensive detailed process model checking. Early identification of yield-limiting hotspots is essential for both foundries and designers to significantly improve process maturity. A real challenge is to scan the design space to identify hotspots and to decide on the proper course of action for each one. Building a scored pattern library containing real hotspot candidates is therefore of great value to both foundries and designers: foundries look for the most frequently used patterns to optimize their technology for, and for patterns that should be forbidden, while designers look for patterns that are sensitive to their neighboring context, so that lithographic simulation can be run with that context to decide whether they are hotspots [1]. In this paper we propose a framework for data mining designs to obtain a set of representative patterns of each design; our aim is to sample the designs at locations that are potentially yield limiting. Although we aim to keep the total number of patterns as small as possible to limit complexity, a designer is still free to generate layouts that result in several million patterns defining the whole design space. To handle the large number of patterns that represent the design's building-block constructs, we need to prioritize the patterns according to their importance. The proposed pattern classification methodology assigns each pattern a score based on the severity of the hotspots it causes, the probability of its presence in the design, and the likelihood of it causing a hotspot. The paper also shows how the scoring scheme helps foundries optimize their master pattern libraries and prioritize their efforts at 14nm and beyond. Moreover, the paper demonstrates how hotspot scoring helps improve the runtime of lithographic simulation verification by identifying which patterns need to be optimized to correctly describe candidate hotspots, so that only potentially problematic patterns are simulated.
Sub-20nm node designs are getting more sophisticated, and printability issues are becoming more critical, requiring more advanced fixing techniques. It is mandatory for designers to run lithography checks before tapeout, and it is very challenging to fix all of the generated hotspots manually without introducing unintentional hotspots or DPT violations. This paper presents a methodology for fixing hotspots on DPT layouts using the same Model Based Hints (MBH) engine used for detecting them. The fix is based on DRC- and DPT-constrained minimum movement of the edges causing the hotspot, which guarantees that the fix neither violates any of the specified DRC or DPT constraints nor requires recoloring. The fix is extended across multiple layers to fulfill the specified DRC and DPT constraints and to guarantee circuit connectivity along the layer stack. This multilayer approach fixes hotspots that were previously impossible to fix. The methodology is demonstrated on industrial designs, where real hotspots were fixed and the fixing rate is reported.
Integrated circuits suffer from serious layout printability issues associated with the lithography manufacturing process. Regular layout designs are emerging as an alternative solution to help reduce these systematic sub-wavelength lithography variations. From a CAD point of view, regular layouts can be treated as repeated patterns arranged in arrays. In most modern mask synthesis and verification tools, cell-based hierarchical processing can identify repeating cells by analyzing the design's cell placement; however, some routing levels lie outside the cells and yet form array-like structures because of the underlying topologies. These structures can be exploited by detecting repeated patterns in the layout, reducing simulation runtime by simulating only the representative cells and then restoring the simulation results in their corresponding arrays. The challenge is to make the array detection and the restoration of results a very lightweight operation so as to fully realize the benefits of the approach. A novel methodology for detecting repeated patterns in a layout is proposed. The main idea is to translate the layout patterns into a string of symbols, constructing a "Symbolic Layout". By finding repetitions in the symbolic layout, repeated patterns in the drawn layout are detected. A flow for layout reduction based on array detection followed by pattern matching is discussed. The runtime saving comes from performing all litho simulations on the base patterns only; pattern matching is then used to restore the simulation results over the arrays. The proposed flow shows a 1.4x to 2x runtime improvement over the regular litho simulation flow. An evaluation of the proposed flow in terms of coverage and runtime is presented.
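A minimal sketch of the symbolic-layout idea, assuming a routing row has already been sliced into fixed-size windows; each distinct window gets a symbol, and a regular expression finds consecutive repetitions corresponding to array-like structures that only need to be simulated once. The windowing and symbol alphabet are illustrative.

```python
# Sketch: map each pattern window to a symbol, join a row's symbols into a
# string, and find consecutive repetitions (array candidates) with a regex.
import re

def symbolize(windows):
    """Assign one character per distinct window content (the 'symbolic layout')."""
    alphabet = {}
    return "".join(alphabet.setdefault(w, chr(ord("A") + len(alphabet)))
                   for w in windows)

def find_arrays(symbolic_row, min_repeats=3):
    """Find runs where a short symbol group repeats consecutively."""
    arrays = []
    for m in re.finditer(r"(.{1,4}?)\1{%d,}" % (min_repeats - 1), symbolic_row):
        arrays.append((m.start(), m.group(1), len(m.group(0)) // len(m.group(1))))
    return arrays

# windows would come from slicing a routing-level row into fixed-size clips
row_windows = ["via_dense", "line_jog", "line_jog", "line_jog", "line_jog",
               "via_dense", "iso_line"]
sym = symbolize(row_windows)
print("symbolic layout:", sym)
print("arrays (offset, base symbols, repeat count):", find_arrays(sym))
```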
As technology nodes scale beyond the 20nm node, design complexity increases, and printability issues become more critical and harder for RET techniques to fix. It is now mandatory for designers to run lithography checks prior to tapeout and acceptance by the foundry. As lithography compliance becomes a sign-off criterion, lithography hotspots are increasingly treated like DRC violations. In the case of a lithography hotspot, the layout edges that should be moved to fix it are not necessarily the edges directly touching it. As a result, providing the designer with suggested layout movements to fix the lithography hotspot is becoming a necessity, and software solutions that generate such hints must be accurate and fast. In this paper we present a methodology for providing hints to designers to fix litho hotspots at 20nm and beyond.
The need to quickly and flexibly characterize the design manufacturability increases as circuit design scales beyond the 22nm node. Improvements in design practices and design software are enabling this process. The use of carefully characterized design subunits (cells) in the general assembly of the chip is one way to ensure that products are optimized for these increasingly difficult lithographic process challenges. Additionally, software for assessing design robustness has been enhanced to deal with ever more complex resolution enhancement techniques. State of the art simulator and verification tool sets provide the necessary step of creating simulation contours and process variability bands upon which various checks can then be performed. The construction of these contours and bands is often hidden from the user, as traditional single or double exposure processes of one or two masks are assumed to be used to create the final layout pattern.
KEYWORDS: Tolerancing, Lithography, Visualization, Electroluminescence, Roads, Design for manufacturing, Photomasks, Calibration, Design for manufacturability, Current controlled current source
This paper presents an approach for compressing a litho hotspot pattern library that complies with a general purpose pattern matching engine (GPPME). This approach incorporates two techniques to achieve optimal pattern reduction. The first technique excludes polygons outside the optical diameter, reducing numerical noise related to a square ambit that may artificially affect a hotspot location. The second technique determines the common geometrical structures between patterns and inserts adaptive edge tolerance constraints for each individual pattern. The performance of the resulting compressed patterns is then compared to that of running the complete library of exact matches using an optimized exact pattern matching engine (OEPME). The results indicate that compression rates yielding a number of compressed patterns on the order of hundreds can achieve better performance than running an optimized exact pattern matcher on the whole library, while maintaining the original quality of results.
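A minimal sketch of the two compression techniques under simplifying assumptions: patterns are tuples of edge coordinates, polygons are points with x/y centers, and the optical diameter is a placeholder value; this is not the GPPME's actual pattern representation.

```python
# Sketch: (1) drop polygons whose distance from the hotspot marker exceeds the
# optical diameter, and (2) merge patterns that share a common geometric
# structure into one entry carrying per-edge tolerances wide enough to cover
# all merged members.
OPTICAL_DIAMETER_NM = 600    # placeholder value

def trim_to_optical_diameter(polygons, hotspot_xy):
    hx, hy = hotspot_xy
    radius = OPTICAL_DIAMETER_NM / 2
    return [p for p in polygons
            if ((p["x"] - hx) ** 2 + (p["y"] - hy) ** 2) ** 0.5 <= radius]

def compress(patterns):
    """Merge patterns edge-by-edge: keep nominal edges plus an adaptive tolerance."""
    nominal = patterns[0]
    tolerances = [0] * len(nominal)
    for pat in patterns[1:]:
        for i, (ref, val) in enumerate(zip(nominal, pat)):
            tolerances[i] = max(tolerances[i], abs(val - ref))
    return {"edges": nominal, "tolerances": tolerances}

polys = [{"x": 0, "y": 0}, {"x": 500, "y": 0}]
print(len(trim_to_optical_diameter(polys, hotspot_xy=(0, 0))), "polygon(s) kept")

group = [(0, 40, 72, 112), (0, 42, 72, 110), (0, 40, 74, 112)]  # similar patterns
print(compress(group))
```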