Mask 3D (M3D) effects distort diffraction amplitudes from EUV masks. Electromagnetic (EM) simulations rigorously calculate the distorted diffraction amplitudes, but they are far too time consuming for OPC applications. The distorted diffraction amplitudes can be characterized by M3D parameters. We develop a convolutional neural network (CNN) model that predicts M3D parameters very quickly from input mask patterns. In this work, we train the CNN using test mask data with various characteristics of metal layers. The accuracy of the CNN is good for the test mask data; however, when we use new mask data that mimic device patterns, the accuracy degrades. Starting from the CNN pre-trained on the test mask data, we improve its accuracy by additional training with a larger dataset that includes both the test mask data and the new mask data. This fine tuning slightly improves the accuracy of the CNN.
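The fine-tuning step described above follows a standard transfer-learning recipe. The following is a minimal PyTorch-style sketch, assuming a small stand-in CNN and random stand-in datasets; the actual CNN architecture, M3D parameter count, and data pipeline of the work are not reproduced here.

import torch
from torch import nn
from torch.utils.data import ConcatDataset, DataLoader, TensorDataset

# Stand-in CNN: a placeholder for the actual architecture, which is not reproduced here.
model = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(16, 8),  # 8 is a placeholder for the number of M3D parameters
)

# Hypothetical datasets: random tensors stand in for (mask pattern, M3D parameters) pairs.
test_mask_data   = TensorDataset(torch.rand(64, 1, 64, 64), torch.rand(64, 8))
device_like_data = TensorDataset(torch.rand(64, 1, 64, 64), torch.rand(64, 8))
loader = DataLoader(ConcatDataset([test_mask_data, device_like_data]), batch_size=16, shuffle=True)

# In the actual workflow the pre-trained weights would be loaded first, e.g.
# model.load_state_dict(torch.load("pretrained_test_masks.pt"))

optimizer = torch.optim.Adam(model.parameters(), lr=1e-5)  # small learning rate for fine tuning
loss_fn = nn.MSELoss()                                     # regression loss on M3D parameters

model.train()
for epoch in range(5):
    for masks, params in loader:
        optimizer.zero_grad()
        loss = loss_fn(model(masks), params)
        loss.backward()
        optimizer.step()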
Background: Mask three-dimensional (3D) effects distort diffraction amplitudes from extreme ultraviolet masks. In a previous work, we developed a convolutional neural network (CNN) that predicted the distorted diffraction amplitudes very quickly from input mask patterns. Aim: In this work, we reduce both the time for preparing the training data and the time for image intensity integration. Approach: We reduce the time for preparing the training data by applying the weakly guiding approximation to the 3D waveguide model. The model solves Helmholtz-type coupled vector wave equations for two polarizations. The approximation decomposes the coupled vector wave equations into two scalar wave equations, reducing the computation time needed to solve them. Regarding the image intensity integration, Abbe's theory has been used in electromagnetic (EM) simulations. The transmission cross coefficient (TCC) formula is known to be faster than Abbe's theory, but it cannot be applied to the source-position-dependent diffraction amplitudes of EM simulations. We derive a source-position-dependent TCC (STCC) formula, starting from Abbe's theory, to reduce the image intensity integration time. Results: The weakly guiding approximation reduces the EM simulation time by a factor of 5, from 50 to 10 min. The STCC formula reduces the image intensity integration time by a factor of 140, from 10 to 0.07 s. Conclusions: The total time to predict the image intensity over a 512 nm × 512 nm area on the wafer is ∼0.1 s. A remaining issue is the accuracy of the CNN.
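For context, the decoupling invoked in the Approach can be sketched in standard waveguide-optics notation (a generic form, not necessarily the exact equations of the paper). The transverse field $\mathbf{E}_t$ in a medium with transverse index profile $n(x,y)$ obeys

\[ \nabla^2 \mathbf{E}_t + k_0^2\, n^2(x,y)\, \mathbf{E}_t + \nabla_t\!\left(\mathbf{E}_t \cdot \nabla_t \ln n^2\right) = 0 , \]

where the last term couples the two polarization components $E_x$ and $E_y$. The weakly guiding approximation neglects this coupling term when the index contrast is small, leaving two independent scalar Helmholtz equations,

\[ \nabla^2 E_x + k_0^2 n^2 E_x = 0 , \qquad \nabla^2 E_y + k_0^2 n^2 E_y = 0 , \]

which is the decomposition that shortens the solve.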
In our previous works, we developed a convolutional neural network that predicted diffraction amplitudes from extreme ultraviolet masks very quickly. In this work, we reduce both the time for preparing the training data and the time for image intensity integration. We reduce the time for preparing the training data by applying the weakly guiding approximation to the 3D waveguide model. The model solves Helmholtz-type coupled vector wave equations for two polarizations. The approximation decomposes the coupled vector wave equations into two scalar wave equations, reducing the computation time needed to solve them. Regarding the image intensity integration, Abbe's theory has been used in electromagnetic simulations. The transmission cross coefficient (TCC) formula is known to be faster than Abbe's theory, but it cannot be applied to source-position-dependent diffraction amplitudes in electromagnetic simulations. We derive a source-position-dependent TCC formula, starting from Abbe's theory, to accelerate the image intensity integration.
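The Abbe-versus-TCC trade-off mentioned above can be sketched with the standard partially coherent imaging formulas (generic Hopkins-style notation; the STCC formula itself is not reproduced here). Abbe's formulation integrates intensities over the source,

\[ I(\mathbf{x}) = \int d\mathbf{s}\; \sigma(\mathbf{s}) \left| \sum_{m} a_m(\mathbf{s})\, P(\mathbf{f}_m + \mathbf{s})\, e^{\,i 2\pi \mathbf{f}_m \cdot \mathbf{x}} \right|^2 , \]

where $\sigma$ is the source intensity, $P$ the pupil function, and $a_m$ the diffraction amplitude of order $m$. When the amplitudes do not depend on the source point, $a_m(\mathbf{s}) = a_m$, the source integral can be precomputed once as the transmission cross coefficients,

\[ T(\mathbf{f}_m, \mathbf{f}_{m'}) = \int d\mathbf{s}\; \sigma(\mathbf{s})\, P(\mathbf{f}_m + \mathbf{s})\, P^{*}(\mathbf{f}_{m'} + \mathbf{s}), \qquad I(\mathbf{x}) = \sum_{m, m'} T(\mathbf{f}_m, \mathbf{f}_{m'})\, a_m a_{m'}^{*}\, e^{\,i 2\pi (\mathbf{f}_m - \mathbf{f}_{m'}) \cdot \mathbf{x}} . \]

In rigorous electromagnetic simulations the amplitudes $a_m(\mathbf{s})$ retain their source dependence, so this factorization is not directly available; the source-position-dependent TCC derived in this work restores a precomputable source integral in that case.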
Background: Mask 3D (M3D) effects distort diffraction amplitudes from extreme ultraviolet masks. In our previous work, we developed a convolutional neural network (CNN) that very quickly predicted the distorted diffraction amplitudes from input mask patterns. The mask patterns were restricted to Manhattan patterns. Aim: We verify the potential and the limitations of the CNN using imec 3 nm node (iN3) mask patterns. Approach: We apply the same CNN architecture as in the previous work to mask patterns that mimic iN3 logic metal or via layers. In addition, to study more general mask patterns, we apply the architecture to iN3 metal/via patterns with optical proximity correction (OPC) and to curvilinear via patterns. In total, we train five different CNNs: metal patterns with and without OPC, via patterns with and without OPC, and curvilinear via patterns. After the training, we validate each CNN using validation data with the above five characteristics. Results: When we use training and validation data with the same characteristics, the validation loss becomes very small. Our CNN architecture is flexible enough to be applied to iN3 metal and via layers, and it is able to recognize curvilinear mask patterns. On the other hand, training and validation data with different characteristics lead to a large validation loss, so the selection of training data is very important for obtaining high accuracy. We also examine the impact of M3D effects on iN3 metal layers. A large difference is observed between the tip-to-tip (T2T) critical dimension calculated with the thin mask model and with the thick mask model, which is due to the mask shadowing effect at T2T slits. Conclusions: The selection of training data is very important for obtaining high accuracy. Our test results suggest that layer-specific CNNs could be constructed, but further development of the CNN architecture may be required.
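The train/validate combinations described above amount to a small cross-evaluation matrix: one CNN per pattern class, each validated against every class. The following Python sketch shows that bookkeeping with a stand-in CNN and random stand-in data (all names and shapes are placeholders, not the actual iN3 data or architecture).

import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset

# One (training set, validation set) pair per pattern class; random tensors stand in
# for the actual iN3 mask clips and their diffraction-amplitude labels.
classes = ["metal", "metal_opc", "via", "via_opc", "via_curvilinear"]
data = {c: (TensorDataset(torch.rand(32, 1, 64, 64), torch.rand(32, 8)),
            TensorDataset(torch.rand(8, 1, 64, 64), torch.rand(8, 8))) for c in classes}

def make_cnn():
    # Placeholder for the paper's CNN architecture.
    return nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                         nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 8))

def train(model, dataset, epochs=3):
    optimizer, loss_fn = torch.optim.Adam(model.parameters(), lr=1e-4), nn.MSELoss()
    model.train()
    for _ in range(epochs):
        for x, y in DataLoader(dataset, batch_size=8, shuffle=True):
            optimizer.zero_grad()
            loss_fn(model(x), y).backward()
            optimizer.step()

def validate(model, dataset):
    model.eval()
    with torch.no_grad():
        x, y = dataset.tensors
        return nn.functional.mse_loss(model(x), y).item()

# Train one CNN per class, then evaluate every (training class, validation class) pair.
# Diagonal entries correspond to "same characteristics" (small loss expected);
# off-diagonal entries correspond to "different characteristics" (large loss expected).
models = {c: make_cnn() for c in classes}
for c in classes:
    train(models[c], data[c][0])
loss_matrix = {(tr, va): validate(models[tr], data[va][1]) for tr in classes for va in classes}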
Mask 3D effects distort diffraction amplitudes from EUV masks. In the previous work, we developed a CNN that predicted the distorted diffraction amplitudes very quickly from input mask patterns. The mask patterns in that work were restricted to Manhattan patterns. In general, the accuracy of a neural network depends on its training data, and a CNN trained on Manhattan patterns cannot be applied to general mask patterns. However, our CNN architecture contains 70M parameters, and the architecture itself could be applied to general mask patterns. In this work, we apply the same CNN architecture to mask patterns that mimic iN3 logic metal or via layers. Additionally, to study more general mask patterns, we train CNNs using iN3 metal/via patterns with OPC and curvilinear via patterns. In total, we train five different CNNs: metal patterns with and without OPC, via patterns with and without OPC, and curvilinear via patterns. After the training, we validate each CNN using validation data with the above five characteristics. When we use training and validation data with the same characteristics, the validation loss becomes very small; our CNN architecture is flexible enough to be applied to iN3 metal and via layers. On the other hand, training and validation data with different characteristics lead to a large validation loss, so the selection of training data is very important for obtaining high accuracy. We also examine the impact of mask 3D effects on the iN3 metal layer. A large difference is observed in the tip-to-tip (T2T) critical dimension (CD) calculated with the thin mask model and the thick mask model, which is due to the mask shadowing effect at T2T slits. Our CNN successfully predicts the T2T CD of the thick mask model, which is sensitive to the mask 3D effect.
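A T2T CD comparison of this kind is typically read off 1D intensity cuts across the tip-to-tip gap at a fixed print threshold. The following Python sketch illustrates the extraction; the profiles and threshold are made up purely for illustration and are not the paper's data.

import numpy as np

def cd_from_cut(intensity, pixel_nm, threshold):
    # Width (in nm) of the region where a 1D aerial-image cut exceeds the print threshold.
    # Simple pixel counting without sub-pixel interpolation; whether the printed feature
    # is the above- or below-threshold region depends on the tone, which is not fixed here.
    return (intensity > threshold).sum() * pixel_nm

# Made-up intensity cuts across a T2T slit for a thin mask (Kirchhoff) model and a
# thick mask (rigorous EM) model; the shift/attenuation mimic a shadowing-like distortion.
x = np.linspace(-64.0, 64.0, 129)                     # position in nm, 1 nm pixels
thin  = 0.5 + 0.4 * np.exp(-(x / 20.0) ** 2)          # placeholder profile
thick = 0.5 + 0.3 * np.exp(-((x - 3.0) / 18.0) ** 2)  # placeholder profile

delta_cd = cd_from_cut(thin, 1.0, 0.6) - cd_from_cut(thick, 1.0, 0.6)
print(f"thin-minus-thick T2T CD difference: {delta_cd:.1f} nm")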