Double patterning presents itself as one of the best candidates for pushing the limits of ArF lithography to the 20nm technology node and below. It has the theoretical advantage of halving the minimum resolvable pitch, or alternatively of improving the process window by relaxing the lithographic conditions. Double patterning, though, has its own complexities. Not only are sophisticated algorithms required simply to split the design into two exposures, but the two exposures must also comply with the design manual rules. The number and complexity of these rules tend to increase for more compact designs, in terms of minimum CD and layout topology, which in turn increases the coding burden on engineers who must make the splitting code aware of so many rules. In this context, we propose a new double patterning flow. It will be shown how the splitting can be done while taking numerous design rules into account. Finally, rule prioritization will be discussed as a way to avoid conflicts between rules.
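For intuition, the split can be viewed as two-coloring a conflict graph in which an edge joins any two features closer than the minimum same-mask spacing; odd cycles in that graph are exactly the conflicts that rule prioritization must resolve. Below is a minimal sketch of that view, assuming the conflict graph has already been built by an upstream geometry pass; it is an illustration, not the proposed flow.

```python
from collections import deque

def split_two_masks(features, conflicts):
    """Two-color a conflict graph for a double-patterning split.
    features: iterable of feature ids.
    conflicts: dict mapping a feature to the set of features it conflicts with.
    Returns (mask assignment, unresolved same-mask pairs)."""
    color, unresolved = {}, []
    for seed in features:
        if seed in color:
            continue
        color[seed] = 0
        queue = deque([seed])
        while queue:
            f = queue.popleft()
            for g in conflicts.get(f, ()):
                if g not in color:
                    color[g] = 1 - color[f]      # assign the opposite mask
                    queue.append(g)
                elif color[g] == color[f]:
                    # Odd cycle: no legal two-mask assignment exists here
                    # (pairs may be reported in both orientations).
                    unresolved.append((f, g))
    return color, unresolved
```

Each pair returned in `unresolved` marks a spot where prioritized design rules, or a layout modification, must break the tie.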
As the technology advances, OPC run time is becoming a major concern, and a great deal of our effort is directed toward speeding up the LITHO operations. In addition, OPC simulation consistency sometimes deteriorates, which is a critical issue especially for anchor features. On the other hand, full-chip designs usually comprise large arrays of basic cells, used by OPC engineers to tune OPC recipes; this is evident, for instance, in memory designs and processor chips. The model-based OPC technique is not necessary for such designs, provided that the equivalent mask shapes for one cell of these arrays are already known.
In this work, we introduce a combined approach using model- and pattern-based OPC. Pattern matching is used to extract regions from full chips that match the basic designs stored in pre-created libraries. When a match occurs, the OPC solution stored in these libraries is used and populated across the matched areas. Special treatment is applied at large array boundaries to account for proximity effects. Model-based OPC is used for the rest of the chip. This approach has two main advantages. First, simulation consistency is greatly improved, since the OPC solution for standard cells is known a priori. Second, pattern matching is a DRC-based tool and is therefore very fast compared with LITHO operations, so TAT is further enhanced.
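The sketch below illustrates the hybrid decision, with cell layouts reduced to hashable, translation-invariant geometry keys; the `model_based_opc` and `is_array_boundary` callables are hypothetical stand-ins for the full simulation path and the boundary classification described above.

```python
def fingerprint(cell_polygons):
    """Canonical, translation-invariant key for a cell's geometry.
    cell_polygons: iterable of polygons, each a tuple of (x, y) vertices."""
    xs = [x for poly in cell_polygons for x, _ in poly]
    ys = [y for poly in cell_polygons for _, y in poly]
    x0, y0 = min(xs), min(ys)
    return frozenset(tuple((x - x0, y - y0) for x, y in poly)
                     for poly in cell_polygons)

def hybrid_opc(cells, library, model_based_opc, is_array_boundary):
    """Populate stored solutions where a cell matches the library; fall back
    to model-based OPC elsewhere and on array-boundary cells (proximity)."""
    out = {}
    for name, polygons in cells.items():
        key = fingerprint(polygons)
        if key in library and not is_array_boundary(name):
            out[name] = library[key]               # pre-computed solution
        else:
            out[name] = model_based_opc(polygons)  # full simulation path
    return out
```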
OPC verification problems tend to become more complicated, in terms of coding complexity and turnaround time (TAT), as the gate length gets smaller. A well-known example of coding complexity is the detection and elimination of waivers (OPC verification errors known a priori to be safe on silicon). Extracting potential hot-spot locations can likewise be time consuming when executed on full chips. Finally, the run time of OPC verification flows is sometimes even longer than that of the OPC runs themselves.
In this work, we introduce the use of pattern matching as a potential solution to many verification-flow problems. Pattern matching offers a great TAT advantage: being a DRC-based process, it is much faster than time-consuming LITHO operations. Moreover, its ability to match geometries directly and to operate on many layers simultaneously eliminates complex SVRF coding from our flows. First, we use pattern matching to avoid running OPC verification on basic designs identified by the OPC engineer as error free, a technique that is especially useful in memory designs and improves run time. Second, it is used to detect waivers, which are hard to code, during verification flows and to eliminate them from the output, so that the reviewer is not distracted by them and can concentrate on real errors. Finally, it is used to detect hot spots in a separate, very quick run before the standard LITHO verification run, giving the designer or OPC engineer the opportunity to fix design or OPC issues without waiting for lengthy verification flows, which in turn further improves TAT.
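Schematically, two of these uses look like the sketch below, assuming a `match(region, pattern)` predicate standing in for the DRC-based pattern matcher (the real matcher operates on geometry clips, not Python objects):

```python
def prune_verification_targets(regions, known_good_patterns, match):
    """Skip OPC verification on designs the OPC engineer marked error free."""
    return [r for r in regions
            if not any(match(r, p) for p in known_good_patterns)]

def quick_hotspot_scan(regions, hotspot_patterns, match):
    """Fast pre-scan: flag likely hot spots before the lengthy LITHO run."""
    return [(r, p) for r in regions
            for p in hotspot_patterns if match(r, p)]
```

Waiver detection follows the same shape, filtering matched errors out of the verification output instead of flagging them.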
Process models have been used for a number of years to perform proximity corrections to designs for placement on lithography masks. For these models to be useful, they must represent the process adequately while also allowing the corrections themselves to be performed in reasonable computational time. In what is becoming standard Optical Proximity Correction (OPC), the models combine a largely physical optical model with a largely empirical resist model. Normally, wafer data is collected and fit to a model form found to be suitable through experience. Certain process variables, such as exposure dose and defocus, are considered carefully in the calibration process, while other variables, such as film thickness and optical parameter variations, are often not considered. As the semiconductor industry continues to march toward smaller and smaller dimensions, with ever smaller tolerance for error, we must consider the importance of those process variations. In the present work we describe the results of simulation experiments that examine the importance of many of the process variables that are often regarded as fixed, and we show examples of the relative importance of the different variables.
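As a concrete illustration of the kind of experiment involved, a one-at-a-time sweep can perturb each process variable by its expected variation and rank variables by the induced CD change; in the sketch below, `simulate_cd` and the parameter names are assumptions standing in for an actual lithography simulator.

```python
def rank_sensitivities(simulate_cd, nominal, perturbations):
    """One-at-a-time sweep: perturb each variable by its expected variation
    and rank variables by the resulting change in critical dimension (CD)."""
    cd0 = simulate_cd(nominal)
    impact = {}
    for name, delta in perturbations.items():
        params = dict(nominal, **{name: nominal[name] + delta})
        impact[name] = abs(simulate_cd(params) - cd0)
    return sorted(impact.items(), key=lambda kv: kv[1], reverse=True)

# Hypothetical usage: dose and defocus are usually calibrated carefully,
# while film thickness often is not.
# rank_sensitivities(simulate_cd,
#                    nominal={"dose": 30.0, "defocus": 0.0, "resist_thk": 120.0},
#                    perturbations={"dose": 0.3, "defocus": 0.03, "resist_thk": 2.0})
```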
A persistent problem in verification flows is the elimination of waivers, defined as patterns that are known to be safe on silicon even though they are flagged by the verification recipes. The difficulty stems from the complexity of these patterns: describing them in a standard verification language becomes very tedious and can deliver unexpected results. In addition, these patterns are dynamic in nature, so keeping all production verification recipes updated to waive these non-critical patterns becomes ever more time consuming.
In this work, we present a new method that eliminates waivers directly after the verification recipes have been executed, in which a new rule file is generated automatically based on the type of errors under investigation. The core of the method is pattern matching, used to compare the errors generated by verification runs against a library of waiver patterns. This flow eliminates the need to edit any production recipe, requires no complicated coding, and is compatible with most technology nodes.
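A minimal sketch of the post-verification filter is shown below; the `clip_around` and `match` callables are hypothetical placeholders for the error-marker windowing and the DRC tool's pattern-matching engine.

```python
def filter_waivers(errors, waiver_library, clip_around, match):
    """errors: error markers from the verification run.
    waiver_library: pattern clips known to be safe on silicon.
    Returns (real_errors, waived)."""
    real, waived = [], []
    for err in errors:
        clip = clip_around(err)  # geometry window centered on the marker
        if any(match(clip, w) for w in waiver_library):
            waived.append(err)   # known safe: hide from the reviewer
        else:
            real.append(err)
    return real, waived
```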
One of the major problems in the RET flow is OPC recipe creation. The existence of numerous parameters to tune, and the interdependence between them, complicates the process of recipe optimization and makes it very tedious. There is usually no standard methodology for choosing the initial values of the recipe settings or for determining stable regions of operation. In fact, parameters are usually optimized independently, or chosen to resolve a certain issue for a specific design, without quantifying the effect on the quality of the recipe or on other designs. Another problem arises when a quick fix is needed for an old recipe to build new design masks; this stacks up customization statements in the OPC recipe, which in turn increases its complexity. Consequently, considerable developer experience is required to build a good and stable recipe. In this context, simulated annealing is proposed to optimize OPC recipes. It will be shown how many parameters can be optimized simultaneously and how insight can be gained into the stability of the recipe.
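For reference, a generic simulated annealing loop over a recipe parameter vector might look like the sketch below; the `cost` and `neighbor` functions are assumptions (for example, a cost built from weighted EPE statistics on a test layout), not part of the proposed recipe.

```python
import math
import random

def anneal_recipe(cost, initial, neighbor, t0=1.0, cooling=0.95, steps=2000):
    """Simulated annealing over an OPC-recipe parameter set.
    cost: scores a recipe (lower is better); neighbor: proposes a nearby one."""
    current = best = initial
    c_cur = c_best = cost(initial)
    t = t0
    for _ in range(steps):
        candidate = neighbor(current)
        c_new = cost(candidate)
        # Always accept downhill moves; accept uphill moves with Boltzmann
        # probability, which lets the search escape local minima early on.
        if c_new < c_cur or random.random() < math.exp((c_cur - c_new) / t):
            current, c_cur = candidate, c_new
            if c_new < c_best:
                best, c_best = candidate, c_new
        t *= cooling  # geometric cooling schedule
    return best, c_best
```

Accepting occasional uphill moves is what lets the search explore many interdependent parameters simultaneously rather than optimizing them one at a time.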
The process of preparing a sample plan for optical and resist model calibration has always been tedious, not only because it must accurately represent full-chip designs with countless combinations of widths, spaces, and environments, but also because of the constraints imposed by metrology, which may limit the number of structures that can be measured. There are further limits on the types of these structures, mainly due to the variation in measurement accuracy across different geometries; for instance, pitch measurements are normally more accurate than corner-rounding measurements, so only certain geometrical shapes are usually considered when creating a sample plan. In addition, the time factor is becoming crucial as we migrate from one technology node to another, owing to the increase in the number of development and production nodes, and the process becomes more complicated still if process-window-aware models are to be developed in a reasonable time frame. There is therefore a need for reliable methods of choosing sample plans that also help reduce cycle time.
In this context, an automated flow is proposed for sample plan creation. Once the illumination and film stack are defined, all errors in the input data are fixed and the sites are centered. Then, bad sites are excluded. Afterwards, the clean data are reduced based on geometrical resemblance. An editable database of measurement-reliable and critical structures is also provided, and their percentage in the final sample plan, as well as the total number of 1D/2D samples, can be predefined. The flow eliminates manual selection and filtering, provides powerful tools for customizing the final plan, and greatly reduces the time needed to generate these plans.
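The resemblance-based reduction step might look like the following sketch, where each site is summarized by an illustrative (width, space, density) vector and is kept only if it differs from every already-kept site by more than a tolerance:

```python
def reduce_by_resemblance(sites, tol):
    """sites: dicts with 'width', 'space', 'density' in consistent units.
    Greedily keeps one representative per neighborhood in feature space."""
    def cheby(a, b):  # worst-coordinate (Chebyshev) distance
        return max(abs(x - y) for x, y in zip(a, b))
    kept, kept_vecs = [], []
    for s in sites:
        vec = (s["width"], s["space"], s["density"])
        if all(cheby(vec, v) > tol for v in kept_vecs):
            kept.append(s)
            kept_vecs.append(vec)
    return kept
```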
As Optical Proximity Correction (OPC) recipes continue to migrate from sparse to dense computation, a run-time effectiveness issue persists for the huge structures found in some metal and active layer designs. Even at the 45 and 32 nm technology nodes, some polygons can be several microns wide, consuming a vast amount of simulation time, orders of magnitude more than sparse simulation, to converge. In practice, the problem is most pronounced when, as is usually the case, the design comprises both these huge structures and small critical ones that need many iterations and careful tuning to converge. A considerable amount of run time is thus wasted applying these sophisticated recipes to big structures that could converge within a few iterations under a simple recipe.
In this context, a convergence-based dense OPC recipe is proposed to deal with designs that contain both types of structures. The basic idea is to check convergence before starting the next iteration and to skip the remaining iterations once the whole simulated frame has converged within a predefined tolerance. A reasonable way to define these tolerances is also explored.
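Schematically, the early-exit loop looks like the sketch below; `run_iteration` stands in for one dense correction-plus-simulation pass returning per-edge placement errors (EPE), and the tolerance value is illustrative.

```python
def converge_opc(frame, run_iteration, max_iters=10, epe_tol=0.5):
    """Iterate dense OPC on a simulated frame, stopping early once every
    edge is within the EPE tolerance (units match the simulator's, e.g. nm)."""
    for i in range(max_iters):
        epes = run_iteration(frame)   # one correction + simulation pass
        worst = max(abs(e) for e in epes)
        if worst <= epe_tol:          # whole frame converged:
            return frame, i + 1       # skip the remaining iterations
    return frame, max_iters
```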